If you are using AI for coding (which you probably are if you're reading this), you've surely experienced that telling the AI to "build an app for x" or "implement a solution for y" almost never works. Sooner or later, it ends up in a mess and you have to do it yourself. In this article, I want to motivate a different view of using AI for coding (or really any other domain): an optimal symbiosis between the capabilities of an AI and the human mind. But before we get into the weeds, I want to share a personal anecdote:
Currently, I am working on a project that involves a neuroscientific model of how the brain solves the problem of spatial navigation. Specifically, the model is called the Tolman-Eichenbaum Machine, or TEM for short. It's still a research-heavy concept, and few people have implemented it. My thought was: I can tell the AI to implement a baseline version of the model and then adapt it to my own needs. There is a paper about it, plus a single existing implementation written for an outdated Python version in the wrong machine learning library. That ought to be enough for the AI to implement the model, right? No. Not a single LLM I tried gave me an implementation that comes even close to what's described in the paper. They just couldn't do it. So I set out to do it myself, as always when the AI fails.
Then, during my work on the project, I tried using AI to implement the training routine – a problem solved countless times, with endless training data available. The solution was superb, and most of the code worked right out of the box. A single prompt, with the existing code in the context, was all it took to produce production-ready code. This got me thinking: what if I am using AI the wrong way? Apparently, it can solve some problems perfectly and others not at all. This is what we'll discuss in the following.
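To make the contrast concrete: the routine in question was essentially the standard supervised training loop sketched below. This is a minimal, illustrative version with placeholder names, not my actual project code – but exactly this pattern appears thousands of times in any LLM's training data.

```python
from torch import nn, optim

def train(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-3):
    """A bog-standard supervised training loop in PyTorch.

    Illustrative sketch: the model, loader, loss, and hyperparameters
    are placeholders, not the ones from my project.
    """
    optimizer = optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for epoch in range(epochs):
        total_loss = 0.0
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()   # backpropagate
            optimizer.step()  # update the weights
            total_loss += loss.item()
        print(f"epoch {epoch + 1}: mean loss {total_loss / len(loader):.4f}")
```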
Let's break down the limitations and capabilities of current AI systems (by that, I mean LLMs like ChatGPT, Gemini or Claude Sonnet):

- They have absorbed vast amounts of code and text, so they reliably reproduce patterns that appear often in their training data: boilerplate, standard algorithms, well-documented APIs.
- They put that knowledge right at your fingertips, which makes them excellent at supporting creative work.
- They are not creative themselves. Confronted with a problem that is rare or absent in their training data, they struggle or fail outright.
Now, with this settled, we can examine how we go about writing software. Typically, we start with some kind of problem. From that, we think about how it could be solved with code. To do that, we need some general architecture, which most of the time follows familiar patterns. Once we jump into implementing, we build the different parts of the software system and refine the architecture along the way. To summarize:

1. State the problem.
2. Sketch a general architecture.
3. Implement the individual parts.
4. Refine the architecture along the way.
When you tell an LLM to "build x", you are trying to do all those steps in one go, which is bound to fail, because remember: AI can only solve what it has been trained on. If your problem is not some super generic one, you'll need a different strategy.
If we stick to the process outlined above, it's fair to say that steps 1, 2 and 4 are the most creative, and step 3 is the most mechanical. AI is very good at supporting creativity, as it provides vast knowledge right at your fingertips, but it's not creative itself. So, for steps 1, 2 and 4, use AI to help you get a proper problem statement, sketch an outline of the code and refine it later on. Don't switch off your brain and hope the AI will do it for you. This is exactly where human capabilities are needed. You have real-world experience, so you will be able to state the problem much more accurately than the AI.
Now, to put this proposition into practice, I want to distinguish two cases: sketching (i) an entire software architecture and (ii) a specific implementation problem. For the former, I like to give the AI some context about the problem and then tell it to define the modules, without jumping into the specific implementation right away. Once that's complete, I can easily adjust the proposal to match my own vision. This is what I mean when I say that AI can help creativity: it will give you a generic template derived from all the data it received during training, but you still have to work it out yourself.
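To illustrate case (i), here is the shape of output I ask for: module stubs with clear responsibilities but no implementations. The example is hypothetical – the names are illustrative, not from my project:

```python
# A hypothetical module outline, as an LLM might propose it when asked
# to define modules without implementing them. Interfaces only.

class DatasetLoader:
    """Reads raw recordings from disk and yields validated samples."""
    def load(self, path: str): ...

class Preprocessor:
    """Normalizes samples and assembles them into batches."""
    def transform(self, samples): ...

class Trainer:
    """Owns the optimization loop and checkpointing."""
    def fit(self, model, batches): ...

class Evaluator:
    """Computes metrics on held-out data."""
    def score(self, model, batches): ...
```

At this level, adjusting the proposal to your own vision is cheap: renaming, merging, or deleting a module costs one line, not a refactor.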
Once that step is complete, you can jump to the implementation. As I mentioned earlier, this is the most mechanical part of software engineering. In many cases, AI will be able to solve the specific implementation problems right away, but often enough it will fail. What I've found is that, just as with sketching the overall architecture, it helps to guide the AI in the right direction. I do this by writing pseudocode of what I want the implementation to look like, and then telling the AI to complete it. In doing so, you remove all the creativity and uncertainty and let the AI do what it does best. The problem you give the AI is thus reduced to taking your outline and translating it into code. This is highly efficient, as you don't have to dig up the documentation and sift through all the boring details. If you leave little enough uncertainty to the AI, it will write top-notch code. If the AI is not able to do it, break the problem statement down even further, until you reach elementary problems it can easily solve. Afterward, you can synthesize the individual solutions into a coherent whole. You can see how this concept extends beyond software engineering.
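Here is a toy example of that hand-off. The commented outline is what I write; the function body is what I expect the AI to fill in. The checkpoint format is made up for illustration, and PyTorch is assumed:

```python
# My outline, handed to the AI:
#   1. if no checkpoint exists at `path`, return epoch 0
#   2. otherwise, load it onto the CPU
#   3. restore model and optimizer state
#   4. return the epoch to resume from

import os
import torch

def load_checkpoint(path, model, optimizer):
    """Resume training from a saved checkpoint, if one exists."""
    if not os.path.exists(path):
        return 0
    checkpoint = torch.load(path, map_location="cpu")
    model.load_state_dict(checkpoint["model"])
    optimizer.load_state_dict(checkpoint["optimizer"])
    return checkpoint["epoch"] + 1
```

All creativity has been removed from the task; what remains is translating the outline into correct API calls, which is exactly the kind of problem the model has seen countless times.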
What differentiates this approach from "vibe coding" is that you stay in control of the system. After all, writing software is still an engineering discipline, not an art. The creativity of the human mind is needed, but in a structured way. There are edge cases you can only catch if you understand the system as a whole; by relying purely on AI, you lose that holistic understanding. With the approach I've outlined in this article, you offload the boring footwork and focus on creative problem-solving (which, personally, is the part of programming I enjoy most).
If I had to summarize this article in two sentences, it would be: separate the unknown component of finding a solution from the known component. The unknown component you have to take care of yourself; the known component can be handled by AI.
Thank you for reading, and until next time.