From low-code to vibe-code
Why would makers and developers adopt low-code GUI tools now that AI can generate endless amounts of code, sometimes based on nothing but "vibes"?

My whole professional career has been about the quest to see how much tech stuff one can do in a business context without writing code. This started way before the terms “no-code” and “low-code” were a thing. Working with customer data and marketing scenarios, I first did my tricks with graphical BI tools, then configurable business applications like Microsoft Dynamics CRM/XRM.
Five years ago, in 2020, the time felt right to go all-in with low-code and start leveraging this approach outside the Dynamics 365 world. Power Platform had become an actual thing to be developed and sold by Microsoft. Meanwhile, Office 365 customers were starting to realize that citizen devs were already building apps and flows with the capabilities included in their current licenses. So, I co-founded the first Power Platform consultancy in our market and replaced my CRM hat with my low-code hat. This thing just had to become huge, right?
Today, Power Platform is more popular than ever. Yet I wouldn’t choose the same “100% low-code” strategy if I were founding a company in 2025. It seems obvious to me that the pendulum has swung from “less code” to “more code”, thanks to large language models. The next few years will likely see a massive increase in code-first solutions rather than GUI-driven configuration.
I will reflect on the possible outcomes of this paradigm shift in a future newsletter issue. But first, let’s discuss how the emergence of AI-driven “vibe coding” has impacted earlier assumptions about how citizen devs, pro devs and designers might embrace low-code tools.
Vibe coding with AI
ChatGPT and similar LLM-based tools are great at coding because code is ultimately a language spoken by computers. Another inherent quality of LLMs, their hallucinations, is also much less dangerous in coding than in most other scenarios. This is because it’s easy to “fact check” the code output of AI: run the code, see if it works. That’s not something you can easily apply in the various business processes where tech vendors like Microsoft and Salesforce are now encouraging customers to adopt AI agents. That’s why I believe the current wave of GenAI will impact solution building more than operating those solutions.
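To make the "run it and see" point concrete, here is a minimal sketch of fact-checking AI-generated code. The function below stands in for something an LLM might produce (it is a hypothetical example, not output from any specific model); the assertions are the fact check:

```python
# Hypothetical LLM-generated function: turn a price string into a number.
def parse_price(text: str) -> float:
    """Convert a price string like '$1,299.50' to a float."""
    return float(text.replace("$", "").replace(",", ""))

# Unlike a prose answer, this claim is cheap to verify: execute and check.
assert parse_price("$1,299.50") == 1299.50
assert parse_price("42") == 42.0
print("all checks passed")
```

A hallucinated answer in a business process can go unnoticed for weeks; a hallucinated `replace` call here fails on the first run.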
It used to be difficult for us non-programming human beings to instruct computers to do what we want them to. To make this possible, the abstraction layers that offer GUIs to translate our intents into computer language became a big business. Now, the arrival of LLMs challenges these assumptions. Suddenly, it’s possible for us humans to type things in our human language and get that translated into computer language. Everyone now has the option of doing it via GUI or through the “code” of written language. Understandably, people are drawn to this new possibility of doing things on their terms - rather than learning to operate yet another GUI.
As an example, when I was recently exploring a customer need that included having a custom UI on top of product data managed in Dataverse, I decided not to bother with the kind of PowerPoint mock UI I might previously have drawn. I didn’t even want to create a fake UI with a canvas app, since it needed to look like something a business user would want to use. After all, canvas apps are often too much like clickable PowerPoint slides: functional but ugly.
Instead, I fired up ChatGPT with the o1 model, which now supported running simple code in its Canvas. This meant I could specify my idea and requirements in the chat, then let the AI “reason” for a moment and produce an output that could be immediately previewed in the Canvas. I didn’t have to learn any dedicated mock-UI tools, of which I’m sure several exist for scenarios like this. I just typed it into the same tool I use daily anyway - and it worked.