This is Part 3 of a series where I apply six systems thinking moves to the AI landscape. In Part 2 we zoomed in and discovered six parts inside a coding agent. Now we reverse the direction.
The third move from the DSRP framework is Zooming Out. Instead of asking “what are the parts?”, you ask “what is this thing a part of?” What larger system does it sit in? What is around it? What does it depend on? What depends on it?
Zoom In and Zoom Out are two sides of the same coin. Together they give you the vertical axis of understanding. Down into the details, up into the context. In Part 2 we went down. Now we go up. And with a coding agent, there’s a lot of “up” to explore. So, let’s go.
Zooming Out Through the Technical Stack
In Part 2 we looked at the parts you interact with directly. The mode selector, the model dropdown, the prompt window, the context, the output, the review mode. But behind all of that sits a technical stack that you never see. Every time you hit enter on a prompt, a chain of things happens.
Your prompt leaves the plugin and travels through an API to a server you don’t control. This is a network call over the internet. Latency, availability, and data privacy are all in play now. Your code, or parts of it, leaves your machine. Depending on the provider’s terms, it might be logged. It might pass through infrastructure in a jurisdiction you didn’t choose.
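To make that concrete, here is a minimal sketch of what such a request payload might look like. The field names and endpoint shape are invented for illustration, not any specific provider's API — the point is simply that your prompt and your file contents get serialized and sent off your machine.

```python
import json

# Hypothetical request payload -- field names are illustrative,
# not any real provider's API schema.
def build_request(prompt: str, open_files: dict[str, str]) -> bytes:
    payload = {
        "prompt": prompt,
        # The agent attaches file contents as context. This is the part
        # of your codebase that actually leaves your machine.
        "context": [{"path": p, "content": c} for p, c in open_files.items()],
    }
    return json.dumps(payload).encode("utf-8")

body = build_request("Fix the off-by-one bug", {"src/pager.py": "def page(n): ..."})
# Everything in `body` now crosses the network to a server you don't control.
```

Once those bytes are on the wire, everything the rest of this section describes — logging, jurisdiction, retention — applies to them.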
On the other side of that API sits a processing layer. Your prompt goes through access control, safety filters, rate limiting, and more. There might be system prompts that the provider added, that you didn’t write and can’t see. The provider shapes the conversation before the model even starts generating.
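A rough sketch of that shaping step, with an entirely made-up system prompt, might look like this. Real providers do this server-side, invisibly; the mechanism is just prepending instructions to the conversation before the model sees it.

```python
# Invented provider-side instructions -- you never wrote these
# and, in a real system, you could not read them either.
PROVIDER_SYSTEM_PROMPT = "You are a coding assistant. Follow the safety policy."

def shape_conversation(user_messages: list[dict]) -> list[dict]:
    # The provider's instructions come first, so they frame
    # everything the user says afterwards.
    return [{"role": "system", "content": PROVIDER_SYSTEM_PROMPT}] + user_messages

conversation = shape_conversation([{"role": "user", "content": "Refactor this loop"}])
```

By the time your prompt reaches the model, it is already the second voice in the conversation.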
Then there’s the model itself. The part everyone talks about. It predicts the most probable next tokens based on your input. It doesn’t understand your code. It produces statistically plausible output. Sometimes good, sometimes wrong, but always very confident. Non-deterministic, as I wrote about in earlier posts. And you can’t reliably predict which one you’ll get.
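That non-determinism is not a bug; it falls out of how generation works. A toy sampler makes the point — real models sample over tens of thousands of tokens, and the distribution below is made up, but the mechanism is the same: the output is drawn from a probability distribution, so the same input need not produce the same token.

```python
import random

# Toy next-token sampler. The distribution is invented for illustration.
def sample_next_token(probs: dict[str, float], rng: random.Random) -> str:
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# A made-up distribution the model might assign after "for i in range(":
probs = {"len(": 0.55, "10": 0.25, "n": 0.15, "items": 0.05}

# Five runs with different random states: same input, same distribution,
# and still no guarantee of the same output.
runs = [sample_next_token(probs, random.Random(seed)) for seed in range(5)]
```

Every token is "plausible" by construction, which is exactly why the output always sounds confident, whether it is right or not.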
The model runs on infrastructure. Servers in a data center. The ginormous ones they talk about in the news. Managed by the provider or a cloud partner. GPU availability, load balancing, regional routing.
And underneath all of it sits the training data. Code from GitHub, Stack Overflow, documentation, books, and who knows what else. This is where licensing questions, intellectual property concerns, and pattern biases come from. If the training data over-represents certain frameworks or languages, the model will too. If it includes buggy code, the model learned from buggy code. You inherit all of that, invisibly. And depending on your contract, your code might feed the next training round.
That’s five layers you don’t see, don’t control, and mostly can’t inspect. And yet the output of all of these layers is what you accept or reject in your review mode. If you even use it.
Zooming Out Through Your World
Now let’s zoom out in the other direction. Not down through the technical stack, but up and outward from where you sit.
The coding agent is a plugin in your IDE. That’s where you interact with it. And where the coding agent gets access to the code. The code it changes or produces doesn’t stay there. It moves.
The code goes into a repository where it lives alongside other code. Repositories often follow certain conventions, patterns, and architectural decisions. The generated code has to fit in there. And other people will read it, build on it, and depend on it.
That repository is part of a product. A solution for some problem. The coding agent has no concept of the product. It doesn’t know the purpose, the user, the constraints. It generates code. Whether that code makes sense in the context of the product is entirely your problem.
The product is built by a team. People with different roles, different knowledge, different perspectives. The coding agent is used by some of them, maybe all of them. But it doesn’t participate in the team. It doesn’t join the standup. It doesn’t hear the discussion about why we decided against that approach last sprint. The team carries context that the agent never has.
Then there are the delivery processes. Code reviews, pull requests, CI/CD pipelines, test stages, approvals. In regulated environments like mine, these processes exist for good reasons. The coding agent doesn’t know about any of them. It produces code. What happens to that code afterwards, the reviews, the checks, the sign-offs, is invisible to the tool.
And at the end of that chain sits production. Real users. Real systems. Real consequences. The code that started as a prompt in a chat window is now running somewhere, doing something, affecting someone.
That’s six steps from the coding agent to production. Six steps where context gets added, where decisions get made, where things can go wrong. And the coding agent is aware only of the prompt window and the code files and context you provide.
Why Both Directions Matter
When you zoom in, you understand the tool. When you zoom out, you understand the context. You need both.
In our example, zooming out can take different routes, because the system we look at is part of many other systems at once. A coding agent is not just a plugin. It’s a node in a network of technical, organizational, economic, and regulatory systems. And every one of those systems influences what happens when you hit enter.
Next up: Part 4, Part Party. We’ve identified the parts. We’ve seen the larger systems. Now we make the parts interact. How do they relate to each other? Where are the feedback loops? Where does it get messy?