It is undeniable that the low-code market is evolving quickly. With the advent of AI and, in particular, vibe coding, many markets that could be the target of low-code approaches are now more interested in creating their applications directly by talking to an LLM. So much so that some claim low-code is dead, and major commercial low-code companies have rebranded themselves around “agentic enterprise” development and similar mottos.
We believe AI and low-code development can complement each other. The figure accompanying this post shows the different paths for combining them:
- Traditional low-code development. The top part of the figure, “low-code development”, illustrates the current approach: users model their systems with a low-code tool, like BESSER, and then choose a rule-based code generator to produce the modelled software for their target platform. The result is completely deterministic, and the code can be of high quality because the generators can embed best practices in their templates, resulting in code that is secure, unbiased, energy efficient,…
- Vibe modeling. Low-code platforms can embed agents that help people model, as we do in BESSER. This is what we call “vibe modeling”: it speeds up the modeling process while giving users the chance to verify and validate the models before generating the code with the same rule-based generator as before. This scenario combines the flexibility of AI with the determinism of rule-based code generation.
- Full vibe-driven engineering (vibe modeling + vibe coding). Some scenarios may require a more flexible approach where even the code is generated by AI to cover unforeseen situations: a target platform not covered by the generator, the need to generate code beyond the generator's scope, or features that are not easy to model (e.g. styling visual aspects,…).
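To make the first, deterministic path concrete, here is a minimal sketch of a rule-based template generator. This is not the actual BESSER generator API — the model is represented as a plain dict and the template is hypothetical — but it shows why the output is fully deterministic: the same model always yields the same code, with no LLM involved.

```python
from string import Template

# Hypothetical sketch of the deterministic path: a reviewed model (here a
# plain dict standing in for a domain model) is rendered through a fixed
# template, so the same model always produces the same code.
CLASS_TEMPLATE = Template(
    "class ${name}:\n"
    "    def __init__(self${params}):\n"
    "${assigns}"
)

def generate_class(model: dict) -> str:
    """Render one class from a model entry; purely rule-based, no LLM."""
    attrs = model["attributes"]
    params = "".join(f", {a}" for a in attrs)
    assigns = "".join(f"        self.{a} = {a}\n" for a in attrs) or "        pass\n"
    return CLASS_TEMPLATE.substitute(name=model["name"], params=params, assigns=assigns)

model = {"name": "Book", "attributes": ["title", "pages"]}
print(generate_class(model))
```

Best practices (security checks, efficient idioms) would live in the template itself, which is exactly why template quality translates directly into generated-code quality.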
The key element of our proposal, and the one that makes us different from other full vibe coding approaches, is that we keep the models as the pillar of the approach. These models can be manually created or “vibed”, but they are still explicit and can be reviewed and validated before vibe coding. Moreover, the models are part of the vibe-coding input (a kind of “spec-driven development”, to use the vibe-coding community's term), which maximizes the chances of getting output code that satisfies the original user intentions. The models also remain a useful documentation and communication tool in any development path. This is why we say that you can have a vibe-driven approach, but that approach will still be model-based.
We are working to support all these paths in BESSER, including the complete vibe-driven, model-based experience. Internally, as explained before, from an initial user input in natural language describing the app they want to build, we would first generate the models corresponding to the user request. Users could optionally open and validate these models in the “standard” low-code interface and re-upload them, or they could simply accept them. Either way, this would trigger the generation of the full application code following the spec-driven path, with the models as input. This would push the abstraction even further, making model-driven engineering accessible to users with no knowledge of modeling concepts at all, and turning BESSER into a tool where a simple conversation is all it takes to go from an idea to a deployed application, while keeping some “grounding” and reliability in the generated code thanks to:
- the models used as precise input,
- the use of skills to instruct the vibe-coding agent to use the deterministic code generators available in BESSER when possible.
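The flow described above can be sketched as a small orchestration pipeline. The `llm_*` functions below are hypothetical stand-ins for real agent calls (BESSER's actual internals may differ); here they are stubbed deterministically so the control flow itself — natural-language request, explicit model, optional human review, model-as-spec code generation — can be exercised.

```python
# Hypothetical orchestration of the vibe-driven, model-based flow.

def llm_generate_model(nl_request: str) -> dict:
    # Stand-in: a real agent would derive a structural model from the request.
    return {"classes": [{"name": "Task", "attributes": ["title", "done"]}],
            "source_request": nl_request}

def llm_generate_code(model: dict) -> str:
    # Stand-in for spec-driven vibe coding: the reviewed model is the spec.
    cls = model["classes"][0]
    return (f"# generated from model for: {model['source_request']}\n"
            f"class {cls['name']}: ...\n")

def build_app(nl_request: str, validate=None) -> str:
    """NL request -> explicit model -> (optional human review) -> code."""
    model = llm_generate_model(nl_request)   # vibe modeling
    if validate is not None:
        model = validate(model)              # user reviews/edits the model
    return llm_generate_code(model)          # model is the vibe-coding input

code = build_app("a simple to-do app", validate=lambda m: m)
print(code)
```

The important design point is that the model is always materialized between the two steps: whether the user inspects it or just accepts it, it exists as a reviewable artifact and as the spec handed to the code-generation stage.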
FNR Pearl Chair. Head of the Software Engineering RDI Unit at LIST. Affiliate Professor at University of Luxembourg.
Hi Jordi,
I cannot overstate my agreement on the importance of a model-in-the-middle, as you describe in this post. It is so important for communication, understanding, long-term stability, trust, maintainability by human/machine, etc.
Moreover, given all the things we learned to take care of in the past, which we require from every human participating in a project, and given the fact that naive vibe coding neglects them all, the intermediate model approach seems just the way to go.
I would add a possible third alternative concerning the second generation step. Given the inherent probabilistic nature of vibing, one could use the LLMs not to create the target production code, but to create templates.
The advantages are: we can test these templates thoroughly and integrate them into the traditional generation step, so that no probabilism is involved anymore. Also, the next time we encounter similar requirements, we do not need to employ our own nuclear power plant again, but consume only maybe 0.001 kWh.
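The commenter's idea can be sketched as follows, under assumed names: an LLM proposes a template once, we vet it against sample models, and only a template that passes is registered for the deterministic path — so no LLM is needed at generation time afterwards. The validation check here (parsing the rendered output as Python) is just one cheap example of the "thorough testing" the comment mentions.

```python
from string import Template
import ast

def validate_template(template_src: str, sample_models: list[dict]) -> bool:
    """Render the candidate template on samples and check the result parses."""
    tpl = Template(template_src)
    for m in sample_models:
        rendered = tpl.substitute(**m)
        try:
            ast.parse(rendered)  # cheap sanity check: is it valid Python?
        except SyntaxError:
            return False
    return True

REGISTRY: dict[str, Template] = {}

def register(name: str, template_src: str, samples: list[dict]) -> bool:
    """Admit an (LLM-proposed) template into the deterministic path only if it passes."""
    if validate_template(template_src, samples):
        REGISTRY[name] = Template(template_src)
        return True
    return False

# A (pretend) LLM-proposed template and a sample model to vet it with:
candidate = "class ${name}:\n    pass\n"
ok = register("entity_class", candidate, [{"name": "Invoice"}])
print(ok)  # True: the template is now part of the rule-based generator
```

Real vetting would go further (compiling, running generated tests, security scans), but the shape is the same: pay the LLM cost once per template, then reuse it deterministically.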
Best, Andreas
Yes, indeed, using the LLMs to create templates that can then be added to the “deterministic path” is a real possibility.