There is a quote I want to come back to, from Elisabeth Hendrickson:
“The word ‘context’ is shorthand for the cumulative effect of all the past decisions that we cannot change now.”
Lately I’ve been thinking about it from a completely different angle. Not in the retrospective sense of “how did we get here”, but in a prospective sense. What context do I need to provide, so that someone, or something, can make a good decision going forward?
That someone, in this case, is an AI.
Vision – Mission – Capacity – Learning
If you interact with a human colleague, the VMCL model describes the layers of context and information they already have and that you therefore don’t need to include in your communication.
I mentioned the VMCL model from the Drs. Cabrera in earlier posts: Vision, Mission, Capacity, Learning. At the largest scale, the context is the vision. One scale down is the mission: the next step towards reaching the vision. Capacity is the next smaller scale of context: what do I need to do now to fulfill the mission? And Learning is the continuous feedback loop: do we have the right capacity to fulfill the mission, is the mission on track and still the right mission, and where are we on the way to our vision?
What information do I need to provide for someone, or something, to fit my V, M and C?
When providing instructions to a human colleague, they have some level of V, M and C already and can decide based on this context. An AI has none of that shared context. Which is exactly the problem.
Throwing the Problem Over the Fence
When people start using AI tools, the typical first instinct is to just type the problem. Short, direct. “How should I structure this service?” or “What’s wrong with this test?”
And the AI responds. It always responds. And it’s not wrong. It’s just… generic. Because it has to be. It knows absolutely nothing about you, your system, your team, your constraints, your past decisions. All it has is the few lines you gave it. In the case of coding agents, it has access to the code base and can construct some context out of that.
You threw the problem over the fence, without providing much or any context. And now you expect wonders.
Enter Systems Thinking
In Systems Thinking, one of the first exercises when analyzing a situation is a Context Scan. You put your situation in the center and ask: what is around it? What is part of the picture even if it doesn’t seem obvious at first glance? Zoom out and see relations on a larger scale.
The idea is that you cannot understand an element without understanding the system it lives in. A microservice is not just a microservice. It is a piece of software with a team behind it, a deployment pipeline, a set of consumers, a data contract, a history of decisions, and constraints coming from outside. Take it out of that context and it becomes something generic and probably useless. Put it back in, and suddenly it becomes specific, with all its quirks, trade-offs, and necessities.
An AI has no idea about any of that. Unless you tell it.
Context is the System Prompt of Reality
When you provide context to an AI, you are essentially describing the system the solution has to live in. You are not just giving more information, you are drawing the boundaries. You are saying: here is what is relevant, here is what is not. Here are the actors, the constraints, the goals. Here is what we tried before and why it did not work.
Compare these two prompts:
“How should I structure my Auth module?”
vs.
“We run a multi-tenant SaaS platform, currently Auth is part of a Django monolith. The team is four engineers, experienced in Python and some Go. We want to move towards service isolation but have no dedicated platform team and limited ops capacity. What are realistic options for extracting Auth?”
Same question. Completely different answer. Not because the AI knows more, but because you have described the system and its constraints.
What Context Actually Means Here
Not every detail matters equally. The useful context dimensions for technical prompts are similar to what you would map in a Systems Thinking exercise:
Boundaries/Distinctions – What is in scope, what is not? Which constraints are non-negotiable?
Elements – What does your current system look like? Tech stack, team size, existing components?
Relations – What depends on what? What would break if you changed this?
Perspectives – What is the goal here? MVP, production-critical, proof of concept? Who is the user of this solution?
Sound familiar? It should. It is basically a lightweight systems analysis, just expressed as a prompt.
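If you do this often, the four dimensions can even be turned into a small reusable scaffold. A minimal sketch in Python; the helper name, field names, and example values are mine, purely for illustration:

```python
# Turn a systems-style context scan into a prompt preamble.
# All names and example values are illustrative; adapt them to your situation.

def context_prompt(boundaries, elements, relations, perspectives, question):
    """Describe the system the solution has to live in, then ask the question."""
    return "\n".join([
        f"Boundaries: {boundaries}",
        f"Elements: {elements}",
        f"Relations: {relations}",
        f"Perspectives: {perspectives}",
        "",
        f"Question: {question}",
    ])

prompt = context_prompt(
    boundaries="Multi-tenant SaaS; no dedicated platform team; limited ops capacity.",
    elements="Django monolith; Auth module; team of four (Python, some Go).",
    relations="All tenants authenticate through the monolith's Auth module.",
    perspectives="Goal: gradual service isolation; production-critical.",
    question="What are realistic options for extracting Auth?",
)
print(prompt)
```

Filling in those four fields is the five minutes of upfront system description; the AI's answer changes accordingly.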
The Cost of Missing Context
Without context, the AI optimizes locally. It gives you a solution that looks fine in isolation, but might be completely incompatible with your situation. You implement it. Something doesn’t fit. You prompt again. The AI corrects. Something else doesn’t fit. Rinse, repeat.
This is the “fixes that fail” pattern from Systems Thinking: local interventions that create new problems elsewhere, because the broader system was never part of the picture. You end up spending more time in a feedback loop than if you had taken five minutes to describe the situation upfront. Even then, some iteration is probably inevitable.
This is Not New. You’ve Done It Before.
Any experienced engineer does this instinctively before making a decision. They ask about the context. What is the load? What is the team’s experience? What is the timeline? What have you already tried?
They don’t jump to a solution. They build the mental model first. That is Systems Thinking in practice, even if nobody calls it that.
When working with AI, the same principle applies. The model is not going to ask you follow-up questions unless you set it up to do so. You need to front-load the system description.
Set up your AI agents to ask for more context before they start. As with every tool, context makes it more useful.
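One way to set that up is to bake the request for context into the agent’s system prompt. A hedged sketch in Python; the prompt wording and the toy gate function are my own, not any specific framework’s API:

```python
# A system prompt that makes an agent do a context scan before answering.
# The wording is illustrative; tune it for your own agent setup.
ASK_FIRST_SYSTEM_PROMPT = """\
Before proposing any solution, check whether you know enough about the system
the solution has to live in. If any of the following is missing, ask for it:
- Boundaries: what is in scope, which constraints are non-negotiable
- Elements: tech stack, team size, existing components
- Relations: dependencies, what would break if this changed
- Perspectives: the goal (MVP, production, proof of concept) and the user
Only answer once you have this context, or the user declines to provide it.
"""

def needs_context(known_context: dict) -> bool:
    """Toy gate: ask for context while none of the four dimensions is filled in."""
    dimensions = ("boundaries", "elements", "relations", "perspectives")
    return not any(known_context.get(d) for d in dimensions)

print(needs_context({}))                                  # no context yet
print(needs_context({"elements": "Django monolith"}))     # some context provided
```

The real gating is better done by the model itself via the system prompt; the function is just the same idea made explicit.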