Systems Don’t Exist in a Vacuum – Zooming Out – The Six Moves – Part 3

This is Part 3 of a series where I apply six systems thinking moves to the AI landscape. In Part 2 we zoomed in and discovered six parts inside a coding agent. Now we reverse the direction.

The third move from the DSRP framework is Zooming Out. Instead of asking “what are the parts?”, you ask “what is this thing a part of?” What larger system does it sit in? What is around it? What does it depend on? What depends on it?

Zoom In and Zoom Out are two sides of the same coin. Together they give you the vertical axis of understanding. Down into the details, up into the context. In Part 2 we went down. Now we go up. And with a coding agent, there’s a lot of “up” to explore. So, let’s go.

Zooming Out Through the Technical Stack

In Part 2 we looked at the parts you interact with directly. The mode selector, the model dropdown, the prompt window, the context, the output, the review mode. But behind all of that sits a technical stack that you never see. Every time you hit enter on a prompt, a chain of things happens.

Your prompt leaves the plugin and travels through an API to a server you don’t control. This is a network call over the internet. Latency, availability, and data privacy are all in play now. Your code, or parts of it, leaves your machine. Depending on the provider’s terms, it might be logged. It might pass through infrastructure in a jurisdiction you didn’t choose.

On the other side of that API sits a processing layer. Your prompt goes through access control, safety filters, rate limiting, and more. There might be system prompts that the provider added, that you didn’t write and can’t see. The provider shapes the conversation before the model even starts generating.

Then there’s the model itself. The part everyone talks about. It predicts the most probable next tokens based on your input. It doesn’t understand your code. It produces statistically plausible output. Sometimes good, sometimes wrong, but always very confident. Non-deterministic, as I wrote about in earlier posts. And you can’t reliably predict which one you’ll get.
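That token-by-token prediction can be sketched in a few lines. Everything below is a toy: the four-word vocabulary and the probabilities are invented for illustration, while a real model scores tens of thousands of tokens with weights learned from training data. The point is only to show why sampled output varies while greedy output doesn't.

```python
import random

# Invented toy distribution over "next tokens". A real model produces
# a distribution like this at every single step of the generation.
next_token_probs = {
    "return": 0.40,
    "print": 0.30,
    "raise": 0.20,
    "pass": 0.10,
}

def sample_next_token(probs, rng):
    """Pick one token at random, proportionally to its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

def greedy_next_token(probs):
    """Always pick the single most probable token (deterministic)."""
    return max(probs, key=probs.get)

# Sampling with different random states may pick different tokens:
print(sample_next_token(next_token_probs, random.Random(1)))
print(sample_next_token(next_token_probs, random.Random(7)))

# Greedy decoding is repeatable, but providers rarely use it by default.
print(greedy_next_token(next_token_probs))  # always "return"
```

The "statistically plausible" part is visible here: "print" is a perfectly likely answer even when "return" is the most probable one, and nothing in the mechanism knows which one your code actually needs.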

The model runs on infrastructure. Servers in a data center. The ginormous ones they talk about in the news. Managed by the provider or a cloud partner. GPU availability, load balancing, regional routing.

And underneath all of it sits the training data. Code from GitHub, Stack Overflow, documentation, books, and who knows what else. This is where licensing questions, intellectual property concerns, and pattern biases come from. If the training data over-represents certain frameworks or languages, the model will too. If it includes buggy code, the model learned from buggy code. You inherit all of that, invisibly. And depending on your contract, your code might feed the next training round.

That’s five layers you don’t see, don’t control, and mostly can’t inspect. And yet the output of all of these layers is what you accept or reject in your review mode. If you even use it.

Zooming Out Through Your World

Now let’s zoom out in the other direction. Not down through the technical stack, but up and outward from where you sit.

The coding agent is a plugin in your IDE. That’s where you interact with it. And where the coding agent gets access to the code. The code it changes or produces doesn’t stay there. It moves.

The code goes into a repository where it lives alongside other code. Repositories often follow certain conventions, patterns, and architectural decisions. The generated code has to fit in there. And other people will read it, build on it, and depend on it.

That repository is part of a product. A solution for some problem. The coding agent has no concept of the product. It doesn’t know the purpose, the user, the constraints. It generates code. Whether that code makes sense in the context of the product is entirely your problem.

The product is built by a team. People with different roles, different knowledge, different perspectives. The coding agent is used by some of them, maybe all of them. But it doesn’t participate in the team. It doesn’t join the standup. It doesn’t hear the discussion about why we decided against that approach last sprint. The team carries context that the agent never has.

Then there are the delivery processes. Code reviews, pull requests, CI/CD pipelines, test stages, approvals. In regulated environments like mine, these processes exist for good reasons. The coding agent doesn’t know about any of them. It produces code. What happens to that code afterwards, the reviews, the checks, the sign-offs, is invisible to the tool.

And at the end of that chain sits production. Real users. Real systems. Real consequences. The code that started as a prompt in a chat window is now running somewhere, doing something, affecting someone.

That’s six steps from the coding agent to production. Six steps where context gets added, where decisions get made, where things can go wrong. And all the coding agent is aware of is the prompt window, the code files, and the context you provide.

Why Both Directions Matter

When you zoom in, you understand the tool. When you zoom out, you understand the context. You need both.

In our example, zooming out can take different routes. The system we look at is most probably part of many other systems. A coding agent is not just a plugin. It’s a node in a network of technical, organizational, economic, and regulatory systems. And every one of those systems influences what happens when you hit enter.

Next up: Part 4, Part Party. We’ve identified the parts. We’ve seen the larger systems. Now we make the parts interact. How do they relate to each other? Where are the feedback loops? Where does it get messy?

Becoming AI Shepherds, and Who Will Grow the Next Generation of Testers?

Last week SmartBear launched BearQ, a team of AI agents that autonomously explore, learn, and test your applications. I watched the presentation and I have to say, it looks impressive. A QA Lead agent that parses your intent, hands off work to exploration agents, and comes back with results before your tea is ready. The future of testing is here, apparently.

And this is where my systems thinker brain starts itching.

BearQ was presented against the backdrop of a very real problem that I have written about myself quite a lot recently. AI coding tools are flooding codebases with massive amounts of code that barely gets reviewed. SmartBear’s own survey says 93% of teams have adopted AI coding tools, and 70% are concerned that quality is already suffering. Fair enough. That’s a legitimate problem worth solving.

But here’s what I can’t stop thinking about. We now have a tool that responds to the problem of “too much unreviewed AI output” by producing… more AI output. Test cases, en masse, generated autonomously. So who is reviewing those? Who is checking that these tests actually fail when something is wrong? Because that’s the bit that matters. A test that always passes is not a test. It’s a lullaby.
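To make the lullaby concrete, here is a hypothetical pair of tests. The `apply_discount` function and both tests are invented for illustration; they are not from BearQ or any real suite. Both pass today, but only one of them can ever fail.

```python
def apply_discount(price: float, percent: float) -> float:
    """The code under test. Imagine an AI agent generated it."""
    return price * (1 - percent / 100)

def test_lullaby():
    # Asserts something that is true no matter what the code does.
    # It will stay green even if the discount logic is completely broken.
    result = apply_discount(100.0, 20.0)
    assert result is not None

def test_real():
    # Pins down the expected behavior, so it CAN fail when the code is wrong.
    assert apply_discount(100.0, 20.0) == 80.0

test_lullaby()
test_real()
print("both pass, but only test_real would catch a broken discount")
```

A reviewer glancing at a dashboard sees two green checkmarks. Only by reading the assertions do you learn that one of them is a lullaby.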

My approach to test automation has always been through exploration. I don’t just write a test, I combine exploratory testing with automation. By the time I’m done, I haven’t only verified that my test does the right thing. I’ve also investigated the architecture, looked at API payloads, inspected requests and responses, and understood how that thing actually works under the hood. The test is a byproduct of deep learning about the system. When I hand that process over to an AI agent, all of that is gone. I have no reason to go that deep into the technical details anymore. And those details are exactly where I used to find the things nobody else found.

I keep coming back to the metaphor of shepherding. Instead of maintaining test automation scripts, we’re now expected to shepherd AI agents. Guide them, watch them, correct them when they wander off. That sounds great on a slide deck. But shepherding requires deep knowledge of the terrain, the flock, and the weather. You don’t hand a shepherd’s crook to someone who’s never seen a sheep. So guess who gets the job?!

And this is exactly where the feedback loop breaks.

Dan Ashby has been writing on LinkedIn about the tipping point in Quality Engineering. He argues, and I think he’s right, that QEs become more important in this new world, not less. I mentioned this myself, but of course not that eloquently. If even half of his predictions hold true, Quality Engineers will spend less time on repetitive grunt work and more time on the things that actually matter: intent, risk, business context, system behavior. That’s genuinely exciting.

But here’s the uncomfortable question nobody seems to be answering: where do the next QEs come from?

Junior tester roles are vanishing. Junior dev roles are vanishing. The entry-level positions where people used to learn by doing, by breaking things, by sitting next to someone experienced and absorbing the craft, those jobs are disappearing. Companies are cutting costs and expecting AI to fill the gap. And at the same time, we need senior-level systems thinkers to shepherd these AI agents. People who understand context. Who can spot when a test suite is giving false confidence. Who know that “the application works” and “the application works as intended” are two very different statements. Who smell bad architecture.

How do you train someone into that kind of thinking without letting them go through the basics first? I wrote about Shu-Ha-Ri over a year ago: learn the rules, then bend the rules, then be the rule. Without Shu, there is no Ha. You can’t skip the apprenticeship and jump straight to master craftsman. That’s not how expertise works. It’s not how learning works.

Let me be honest. A lot of the tester jobs that will vanish probably weren’t having much impact in the first place. The roles that were mostly about executing scripts and ticking boxes. But those roles were sometimes also the entry point, or at least a good portion of the work day for people growing into the position. The place where curious people discovered they had a talent for breaking things, for asking awkward questions, for seeing the system instead of just the feature. And Software Tester is not something you learn in school, or even at university.

If we remove the bottom rungs of the ladder, we shouldn’t be surprised when nobody can reach the top.

BearQ and tools like it will change the economics of testing. That’s undeniable. The relevant skill set for testers is evolving fast, and it will be fascinating to watch where it lands. But while we’re busy building AI agents that test our software, maybe we should also think about how we’re growing the humans who will direct them. Because AI can explore an application. But it takes a human to understand why it matters.

Don’t automate away the nursery.

We Are Going In – Hold My Beer! Zooming In – The Six Moves – Part 2

This is Part 2 of a series where I apply six systems thinking moves to the AI landscape. In Part 1 we drew boundaries with the Is/Is Not List. Now we open the lid.

The second move from the DSRP framework is Zoom In. You take a thing and you ask, what are its parts? What is it made of? You break it down until you can see the components. This move is the antidote to treating something as a black box. And coding agents are treated as black boxes far too often.

The Cabreras, you should know them by now, the couple behind DSRP, use the example of a fire truck when they teach “Zooming In” to kindergarten kids. First they let the kids draw a fire truck from memory. Then they show them in a short video how zooming in works. Then they look at a real fire truck and let them draw one again. The increase in level of detail is enormous. Because now the kids have actually looked at the thing and at what each element consists of. And so forth.

So let’s look at a thing. Not the abstract idea of “a coding agent.” The actual tool. Something like GitHub Copilot, sitting right there in your IDE.

The Parts

A simple mental model of a coding agent is: I type a prompt, I get code. One box. One arrow. But when you actually look at what’s in front of you, there are distinct parts. And each one influences the output.

The mode. It’s a drop-down. In Copilot, as in most others, you’re not using one tool. You’re choosing between different behaviors. Agent mode, Ask, Plan, Review, and more. Each mode does something different. Agent can read files, run commands, create code across multiple files. Ask is more like a conversation. Review focuses on existing code. And Plan is creating, well, a plan for an implementation. These are not the same thing. They carry different capabilities and different risks.

The model selection. Another drop-down. You can choose which model sits behind the agent. Claude, GPT, Gemini, etc. and different versions of each. Different models have different strengths, different blind spots, different tendencies. Some are better at reasoning, some at following instructions, some at generating boilerplate fast.
And don’t forget the cost difference!
I don’t know how many people actively choose a model for a specific reason. And then there’s “Auto” mode. What happens in Auto mode? Honestly, I’m not entirely sure. The tool picks a model for you, based on criteria you don’t control and probably can’t see. And you’re trusting the output either way. But that is going a step too far; we’ll come back to it in Part 4.

The prompt window. A text field. This is where you describe what you want. It lets you use special characters like /, # and @, each of which triggers something specific. This is the input side, and I’ve written about it in “Context is the System.” If you skip the context, you skip the quality. The text field has grown in the past year. Writing more text is no longer an issue.
The special characters provide ways to call a skill, address context or explicitly name a file in context. So there are more parts within the part.
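As a toy illustration of those parts within the part, here is a hypothetical parser for the three token kinds. The exact syntax and semantics differ per tool (and the regexes, the mapping of / to skills, # to context, and @ to files, and the example prompt are all assumptions), so treat this purely as a sketch of the idea.

```python
import re

def parse_prompt(prompt: str) -> dict:
    """Toy parser for the special tokens a Copilot-style prompt window uses.
    Real tools parse these far more carefully; this only shows the categories."""
    return {
        "skills":  re.findall(r"/(\w+)", prompt),   # e.g. /tests
        "context": re.findall(r"#(\w+)", prompt),   # e.g. #codebase
        "files":   re.findall(r"@(\S+)", prompt),   # e.g. @main.py
    }

parsed = parse_prompt("/tests add unit tests for @main.py using #codebase")
print(parsed)
```

Even this crude version shows that the prompt window is not one input but several, and that one prompt can address a skill, a context source, and a file all at once.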

The context selection. A list of files or other references. The agent doesn’t see your whole project. It sees what you give it. Which files are attached. Which code is selected. Which instructions you provide in context files. What you don’t select, the agent probably doesn’t know about. Maybe it just emphasizes these files more. And it won’t tell you that it’s missing something, because it doesn’t know that it doesn’t know. This feature is still a bit brittle to use, so I don’t know how familiar people are with this.

The output. A textarea that presents what the agent writes. It looks like a WhatsApp chat with a very eagerly typing person. There are actually two outputs. First, the text the agent writes while “thinking”: code suggestions, explanations, sometimes whole file structures. And second, the actual written, changed, or deleted code. This is what most people focus on. Does it look right? Does it compile?

The review mode. A list of files with more buttons and other info. After the agent makes changes, there is a way to inspect what it did. Diffs, change summaries, modified files. This is where the human is supposed to catch problems. To read the code, understand the changes, and decide whether this actually does what was intended. It’s the one most people rush through or skip entirely. The review mode is your last line of defense. If you don’t use it properly, you’ve essentially handed control to a tool that doesn’t understand what it’s doing. But we are again taking a large step into Part 4.

And there’s much more. To keep it short, I’ll only list what we haven’t looked at yet: settings, context files, skills, commands, and the account you use. And a few other things that directly influence the way the coding agent looks and feels.

The Pattern

One IDE plug-in. Six identified parts. Each one a decision point. And most people make these decisions unconsciously, or don’t make them at all. They open the chat, type a sentence, hit enter, glance at the output, and accept it. The fire truck drawing before the zoom in. A red box with wheels and blue lights.

When zooming in, a coding agent looks like many other IDE plugins or UI components. Text field, drop-downs, buttons. It’s a collection of interacting parts, each with its own influence on the result. And once you see the individual parts, you can start asking better questions and dig even deeper. What elements are there in the drop-down? Which mode was I in? Was there a better one? Which model was selected? What context was provided? What context was missing? Was there a skill (remember the “/”) I missed?

When you know what’s there, you can use it. I know, IDEs provide a lot of options and plugins and windows and settings. But we’re talking about the coding agent. The tool that’s supposed to make your life so much easier. So it’s maybe worth looking a bit closer.
Zoom In helps make your use of the tool more intentional.

Next up: Part 3, Zoom Out. We’ve looked inside the coding agent. Now we step back and ask: what is this thing a part of? What larger system does it sit in?

Are We Celebrating the Wrong Thing?

Last week, actually yesterday when writing this, SmartBear launched BearQ. An agentic QA system that autonomously explores and tests your application. About a year ago, the big news was that AI can write your code for you. So let me get this straight: AI writes the code, and now AI tests the code. What exactly are we still doing here?

If your answer is “prompting,” I think you’re celebrating the wrong thing.

And let me be clear: SmartBear’s move makes total sense. If you’re a tooling company in 2026, you need an offering like this to stay relevant. The market demands it. Autonomous testing is a logical evolution, and honestly, it’s a good thing. Removing tedious, repetitive verification work frees people up for more valuable activities. I have no beef with BearQ. But I do have a beef with what the industry concludes from announcements like this.

Let me take a step back. What is coding, really? It’s translating requirements into precise semantics that a machine can execute. And testing? That’s verifying those semantics actually match what someone expected. Both activities are, at their core, about the how. How do we build this? How do we make sure it works? And yes, AI is getting terrifyingly good at the how.

But here’s the question nobody seems to be asking loudly enough. Who makes sure we’re solving the right problem in the first place?

Let me give you an example. I’ve seen teams build entire features, beautifully coded, thoroughly tested, fully automated, that nobody needed. The requirements were wrong. Or rather, the requirements were never properly understood, because nobody bothered to talk to the people who actually had the problem. The code was good. The tests were green. And the solution was not used.

AI would have built that useless thing ten times faster.

This is where I think the industry is headed for a rude awakening. We’ve spent decades investing in the solution space: frameworks, languages, tools, pipelines, automation. And now AI is eating that entire space for breakfast. So if your value as an IT professional lives in the solution space, you should be worried. Not because AI will take your job tomorrow, but because the thing you’re good at is becoming a commodity.

The skills that matter now, and I mean really matter, live in the problem space. Can you understand what a business actually needs? Can you ask the right questions when a stakeholder gives you a vague, contradictory, half-baked requirement? Can you look at a system and see the relationships, the perspectives, the boundaries that nobody drew on a whiteboard? Can you tell the difference between what people say they want and what they actually need?

That’s not prompting. That’s thinking.

I’ve always argued that the real craft of testing is exploratory, not scripted. It’s the human ability to look at something and ask “but what if…” in ways that no predefined test case would cover. BearQ and tools like it will handle the repetitive verification brilliantly. But the moment you need someone to question whether the whole approach makes sense, you need a human who understands the domain, the context, the messy reality of the problem.

The same goes for what we used to call “development.” If AI writes your code, then the developer’s job isn’t coding anymore. It’s requirements engineering. It’s understanding complexity. It’s making distinctions between what belongs to the system and what doesn’t. It’s seeing the whole and the parts. It’s communicating clearly enough that the intent survives the translation. Whether that translation is done by a human or a machine.

You know what’s funny? These are the skills we’ve been calling “soft skills” for decades. Communication. Critical thinking. Domain knowledge. Empathy, even. We treated them as nice-to-have extras, the stuff you put at the bottom of a job description after listing seventeen programming languages and tools. Turns out, they’re the only skills AI can’t replace.

So here’s my challenge to you: stop celebrating that you don’t have to code anymore. Stop celebrating that you don’t have to write test scripts anymore. Start investing in understanding problems deeply. Learn to ask better questions. Learn to think in systems. Learn to challenge assumptions, including your own.

The future doesn’t belong to the people who are fastest at producing solutions. It belongs to the people who understand which problems are worth solving.

And no AI is going to figure that out for you.

The Death of the Vision, or How We Trade Purpose for Margins

Every company needs to make money. That’s not a vision, that’s survival. Having more income than spending is the baseline, the heartbeat that keeps the lights on. But a heartbeat alone doesn’t make you alive. You need a reason to get up in the morning.

And that’s what a vision is. It’s the reason people show up, stay late, argue in meetings, and care about the outcome. It’s the thing that turns a group of professionals into a team following a common goal. At least, that’s how it should work.

But something shifted. Investors want returns, and they want them fast. Healthy, constant growth? Too boring. Long-term strategy? Too risky. What they want is the quick buck. And once the investment capital flows in, the game changes. It’s no longer about building something meaningful. It’s about streamlining. Cutting costs. Maximizing margins. The vision quietly leaves the room, and nobody notices because the spreadsheet looks great.

Now, in software, what is the single greatest expense? People. And here comes AI, right on cue, promising faster output, shorter turnaround, and fewer salaries to pay. Companies are going all in. They see the potential to cut the workforce and call it “transformation.” What they actually cut is the very thing that held their company together.

Let me put this into a framework. Cabrera Labs developed the VMCL model: Vision, Mission, Capacity, Learning. A company starts with a vision. The missions are there to support and follow that vision. The capacity, your people and your technology, is what fulfills the missions. And learning is the feedback loop that keeps the whole thing adapting and improving. The driving force behind all of it are the people. They carry the vision. They execute the missions. They learn and adjust. Take them out, and the model collapses from the top.

Now you might say: “But we can put the vision into the AI’s context file! We can set goals, guidelines, values.” And sure, maybe that helps shape the output a little. But here’s the thing. An AI is a tool. A powerful, impressive, sometimes surprisingly smart tool. But it doesn’t give a shit about your vision. It doesn’t care if the company succeeds or fails. It doesn’t wake up motivated. It doesn’t argue in a meeting because it believes in a better approach. It processes tokens and returns text. Usually starting with: That is a great idea! The vision lives in the people behind the tool, not in the tool itself. The people that ask: But what if?

And here’s where it gets really ugly. When a company cuts a significant chunk of its workforce, what happens to the people who remain? They panic. Job security becomes the priority, not the mission, not the vision. You end up with a workforce that’s less engaged, more cautious, and focused on survival rather than purpose. The irony is brutal: the company killed the vision twice. First by choosing margins over meaning, then by destroying the motivation of the people who were supposed to carry what’s left.

So where does that leave us? Where are the companies with long-term visions that still value their people more than the next quarterly report? Being profitable is not the problem. Being and staying profitable is a strong argument, actually. The problem is when profit becomes the only argument. When every decision is filtered through “how do we cut costs” instead of “how do we build something that matters.”

If your vision can be replaced by a context file, it was never a real vision. And if your people are just a line item to be optimized away, don’t be surprised when there’s nobody left who cares.

Wake up. The vision is dying. And you’re holding the knife.

RAMmageddon – Will Scarcity Bring Back the Lost Art of Efficient Programming?

Remember when 32 gigabytes of DDR5 cost you around a hundred bucks? That was September 2025. Six months later, the same kit costs 350 dollars or more. If you can find one at all. DRAM prices rose by more than 150% throughout 2025, and they’re still climbing. Most of it only in the second half of the year. Some analysts call it “RAMmageddon.”

And here’s the root cause. It’s not because everyone suddenly decided to upgrade their laptops. It’s because the big tech companies, like NVIDIA and AMD, are hoovering up every memory chip they can get their hands on. They need them to equip these ginormous AI data centers. Open-ended orders, reportedly, meaning they’ll take as much supply as available, regardless of cost. When you’re competing with nation states and trillion-dollar corporations for memory chips, guess who loses? You do. Because the chip manufacturers are focusing on delivering high-end memory for GPUs. Nearly no production lines are left to produce DDR5 modules.

The consequences are already showing up on store shelves. The Raspberry Pi 5 with 16 gigs nearly doubled in price within three months. HP’s CFO told investors that memory now accounts for 35% of their PC build costs, up from 15% just one quarter earlier. Some laptop makers are shipping mid-tier machines with only 8 gigabytes of RAM. That’s 2015 levels. Gartner projects that affordable laptops under 500 dollars may become financially unviable within two years. Which means a LOT of people won’t be able to afford a laptop sometime soon.

And this is where it gets interesting. In the vision of big tech, you won’t own your computing power. You’ll rent it from the cloud. We are back to thin clients. Dumb terminals. A screen and a keyboard, or a touch screen, and a network connection. The heavy lifting happens in a data center you don’t control, running on memory chips you can’t afford.

If that sounds familiar, it should. I wrote recently about how AI is creating a dependency loop. We use AI, we lose skills, we depend more on AI, and the power shifts to the companies that control it. Now add this layer: the same companies driving up hardware prices are positioning themselves as the only affordable alternative. Can’t buy a proper laptop anymore? Get a thin client and rent computing power in the cloud.

But here’s the thing. And this is where I see a tiny, cautious silver lining. Scarcity has historically been the mother of brilliant engineering. The Atari 2600 had 128 bytes of RAM. Not kilobytes. Bytes. Programmers had to synchronize their code with the TV’s electron beam, line by line, sixty times a second. Super Mario Bros shipped on a 40-kilobyte cartridge. The original Macintosh ROM was 64 kilobytes of some of the most elegant code ever written. These people didn’t just program. They understood their machines like a woodworker understands grain.

We lost that art. Decades of cheap memory and abundant computing power taught us to be wasteful. AI-generated code makes it worse, producing bloated, over-engineered systems from day one. As IEEE senior member Nell Watson put it, there’s no feedback loop where AI learns from sluggishness. It optimizes for “working,” not for “efficient.”

So could RAMmageddon force us back to lean software? Could the scarcity of resources revive that instinct to write code that actually respects the machine it runs on? I’d love to say yes. And there are signs. Chinese developers, with less access to compute, are apparently obsessed with efficiency in ways their Western counterparts aren’t. Small language models are getting more attention. More and more people want to run LLMs at home, not sending their data to the US. The pressure is real.

But I’m realistic. The more likely outcome, at least in the short term, is that the big companies win again. They’ll push us toward thin clients, cloud subscriptions, and rented computing power. And most people will go along, because what choice do they have when a decent laptop costs a small fortune?

Still, I hold on to this thought that constraints produce creativity. They always have. And maybe, just maybe, somewhere in a garage or a dorm room, a programmer is looking at 8 gigabytes of RAM and thinking, “How do I make something amazing with just this?” That’s where the good stuff has always come from. Not from abundance, but from the stubborn refusal to accept that you need more.

Let’s see how that turns out.

PS: When I came up with the idea of this post, I was not aware of this article. But it was a nice confirmation of the gist that hit me on one of my morning walks.

The Domino Effect Nobody Sees Coming, or How AI Layoffs Will Ripple Through Everything

In this morning’s post, I wrote about how AI is accelerating consolidation in the IT industry. Companies are letting people go. Tool vendors are under pressure, both because of license fees and because AI can now replace entire product categories. And that pressure creates even more layoffs. It’s a self-reinforcing loop. But I didn’t talk about what happens next. And “what happens next” is where it gets really uncomfortable.

Let’s think about this as a system. Because that’s what it is.

You have a large number of highly paid professionals losing their jobs. We’re not talking about a few dozen people at a startup. We’re talking about thousands across the industry, globally. Software engineers, testers, product managers, DevOps specialists. People who earned well above average. People who built their lives around that income.

Now, what does a person do when they lose a high-paying job? They don’t immediately find a new one at the same level. Not in this market. Not when the very reason they lost their job is that AI made their role less valuable. So the job search takes months. Maybe longer. And during that time, something shifts.

They start saving. They cut expenses. They cancel subscriptions. They postpone the new car. They stop looking at houses. They eat out less. They skip the fancy clothes. They renegotiate, downgrade, hold on to what they have.

And this is where the domino effect begins.

Think about who sells those cars. Who builds those houses. Who runs those restaurants. Who designs those clothes. Each of these industries has its own employees, its own supply chains, its own dependencies. When a significant chunk of high earners suddenly pulls back on spending, those industries feel it. Not immediately, maybe. But steadily. Like a slow leak in a tire. You don’t notice it until you’re driving on the rim.

Let me give you an example. The German automotive industry is already under pressure from electrification, tariff wars and Chinese competition. Now imagine a wave of well-paid IT professionals in Munich, Stuttgart, and Berlin deciding they won’t buy or lease a new Audi or BMW this year. Or next year. Multiply that across Europe, the US, India. It’s not the majority of consumers, sure. But it’s a disproportionately high-spending segment. These are the people who bought the premium products, the organic groceries, the co-working memberships.

From a systems thinking perspective, this is a classic reinforcing feedback loop. AI reduces jobs. Fewer jobs mean less spending. Less spending puts pressure on other industries. Those industries cut costs. Which means more layoffs. Which means even less spending. And so on.
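That loop reads even more starkly as a toy simulation. Here is a minimal sketch; the workforce size, the initial layoff fraction, and the knock-on factor are all invented for illustration, not data:

```python
# Minimal sketch of the reinforcing loop described above.
# All numbers are invented for illustration, not data.

def simulate(rounds=5, workforce=100_000, initial_layoff=0.05, knock_on=0.6):
    """Each round, the spending cut by the newly laid-off triggers
    a smaller knock-on wave of layoffs in dependent industries."""
    laid_off = workforce * initial_layoff  # the initial AI-driven wave
    total = 0.0
    for r in range(1, rounds + 1):
        total += laid_off
        print(f"round {r}: {laid_off:,.0f} newly laid off, {total:,.0f} total")
        laid_off *= knock_on  # each wave feeds a smaller follow-on wave
    return total

simulate()
```

With a knock-on factor below 1 the loop is damped and total layoffs converge; at 1 or above it runs away. The uncomfortable question is which side of that line the real economy sits on.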

The tricky part? Nobody is looking at the whole picture. The tech press talks about AI layoffs. The automotive press talks about sluggish sales. The real estate press talks about cooling markets. But very few people connect the dots. Very few people see that these are not separate stories. They are the same story, viewed from different perspectives.

Of course, the multitude of dumpster fires going on in this world right now has its own huge influence on all of this. So large that the effect I describe here probably goes more or less unnoticed. But it happens. The current fossil energy crisis in the Middle East has a leverage effect on much of what I described.

Donella Meadows wrote about how the most dangerous dynamics in systems are the ones where delays hide the cause-and-effect connections. The layoffs happen today. The spending cuts happen over the next six months. The impact on other industries shows up a year from now. And by then, everyone is looking for explanations in their own silo. “It was the war.” “It must be the interest rates.” “It must be consumer confidence.” “It must be regulation.” Nobody points back to the reinforcing loop that started with a wave of AI-driven restructuring in tech.

I’m not saying the sky is falling. I’m saying that if you only look at the AI layoff story as an IT problem, you’re missing the bigger picture. This is an economic domino chain, and most of the dominoes haven’t fallen yet.

So pay attention. Not just to what’s happening in your industry. Look at the connections. Look at the relationships between the parts. Because the system doesn’t care whether you see it or not. It will keep doing what systems do.

The AI Efficiency Trap

There’s a story about Henry Ford. Ford supposedly wanted to pay his workers enough so they could afford to buy the cars they were building. Whether it’s true or not doesn’t matter. The point stands: your workers and your customers are often the same people.

Now look at what’s happening in tech right now. Atlassian just cut 1,600 jobs. Meta is planning to lay off around 15,000 people. Amazon has already cut 16,000 this year, with another 14,000 apparently on deck. Block slashed nearly 40% of its workforce. And every single one of them points at AI as the reason. “We’re restructuring around AI.” “AI enables smaller teams.” “This is about efficiency.”

And the stock prices go up. Investors love it.

But here’s where my systems thinking brain starts itching. Let me zoom out for a moment.

If you think about this through feedback loops, something uncomfortable emerges. Companies fire people because AI makes them more efficient. Fewer employees means fewer seats, fewer licenses, fewer subscriptions that the company needs to buy. Every tool vendor in the ecosystem suddenly has a smaller addressable market. And those vendors? They’re under the same pressure. So they fire people too. Which shrinks the market further.

This is a reinforcing feedback loop, and not the kind you want. More AI adoption leads to fewer jobs across the industry. Which leads to fewer seats sold. Which leads to less revenue for tool vendors. Which leads to those vendors cutting jobs, too. Which shrinks the market even further. Round and round.

And as a side effect, it makes the job market a horribly crowded place.

What fascinates me is the distinction problem. When a CEO looks at their workforce, they see cost. When they look at other companies’ workforces, they see customers. But those are the same system! Amazon fires 16,000 people. That’s 16,000 fewer Slack accounts, 16,000 fewer Windows licenses, 16,000 fewer seats somebody needs to pay for. Multiply that across every company doing the same thing right now, and you start to see the scale. The entire IT sector is shrinking its own customer base in real time.

And it doesn’t stop at the companies doing the firing. It’s the ripple effects across the whole ecosystem. Fewer IT workers means less demand for training providers, for certification platforms, for conference organizers, for recruiting firms, for the entire support industry that grew up around a large and growing tech workforce. This is exactly the kind of ripple effect that Donella Meadows and the system dynamics field spent careers mapping. I call it: you can’t drain the pool and expect to keep swimming.

But here’s the thing. There’s a second feedback loop running in parallel, and it might be even more dangerous for tool vendors.

AI is now fast enough and good enough to build custom solutions. Need a small internal dashboard? AI can generate it in an afternoon. Need a workflow automation that used to require a $50-per-seat SaaS tool? You can prompt your way to a tailored version that does exactly what you need, nothing more, nothing less. No license fees. No vendor lock-in. No annual renewal negotiations. Personal experience: a colleague needed a feature and vibe coded it on top of a solution we already have. The next features are already being discussed. And the need for a special tool or plugin just vanished.

This changes the relationship between companies and their tool providers fundamentally. The same AI that lets you fire half your workforce also lets your remaining employees replace the expensive software they used to depend on. Tool vendors don’t just lose customers because those customers got laid off. They also lose the ones who are still employed but no longer need the tool. Two reinforcing loops, both draining the same pool.
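As a back-of-the-envelope sketch, the two loops can be combined into one toy model of a vendor’s seat revenue. Every number here (seat count, price, churn rates) is invented for illustration, not a forecast:

```python
# Toy model of two reinforcing loops draining one vendor's seat revenue.
# Seat count, price, and churn rates are all invented for illustration.

def vendor_revenue(seats=10_000, price=50.0, years=4,
                   layoff_rate=0.10, diy_churn=0.05):
    """Each year the vendor loses seats to customer layoffs (loop 1)
    and to customers replacing the tool with AI-built ones (loop 2)."""
    history = []
    for _ in range(years):
        history.append(round(seats * price * 12))  # annual revenue at $price/seat/month
        seats *= (1 - layoff_rate) * (1 - diy_churn)  # both loops drain seats
    return history

print(vendor_revenue())
```

Even with these modest assumed rates, a 10% annual seat loss from layoffs plus 5% from AI-built replacements cuts revenue by more than a third within three years. And that’s before either loop accelerates.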

Think about it from Atlassian’s perspective for a moment. They cut 1,600 people to invest in AI. Their CEO wrote that the company has “momentum,” pointing to cloud revenue growth above 25% and over 600 customers paying more than a million dollars annually. And then, in the same breath, acknowledged that AI changes the headcount equation. But what if the AI they’re investing in is the same AI that makes their customers realize they don’t need Jira anymore? That a custom-built, AI-generated project tracker fits their small team better than a bloated enterprise tool? That’s not a hypothetical. That’s already happening.

According to my unreliable AI research, nearly 250,000 tech jobs were cut in 2025. In 2026, we’re already past 50,000, and it’s only March. And that’s only the big announcements, so the real numbers will be much higher. These aren’t small numbers. Every single one of those positions was a seat in somebody’s SaaS product, a line item in somebody’s license agreement. And the ones who still have jobs? They’re building their own tools now.

The perspective shift here is critical. From the boardroom, this looks like smart strategy. From the system level, it looks like an industry collectively sawing off the branch it’s sitting on. Different viewpoints, radically different conclusions. And that’s the thing about perspectives in systems thinking. Neither view is wrong on its own. But only one of them accounts for the whole picture.

So here’s my uncomfortable question. What happens when the efficiency gains from AI are offset by both the market contraction from mass layoffs and the customers who no longer need your product because AI lets them build their own? What’s the endgame if everyone “wins” the efficiency race but there’s nobody left to sell to?

I don’t have the answer. But I think we should be asking the question a lot more loudly. Because right now, the people making these decisions are optimizing for a part of the system while ignoring the whole.

Think about it. Please.

PS: The base idea for this post came on Tuesday morning, when I heard the news from Atlassian. I remember the exact spot on my morning walk. The full picture of what I describe here unfolded over the next 150 to 200 meters. This escalated quickly.

What the Archaeologist Didn’t Find, or Systems Thinking and the Art of Seeing What’s Missing

I recently learned something about archaeology that opened my eyes. Mostly because I somehow missed it, even though it was clearly sitting right in front of me. There’s a whole discipline around examining ancient waste dumps. Middens, they call them. Mounds of discarded shells, broken pottery, animal bones. And it hit me: most of what archaeologists dig up is stuff people threw away or left behind.

Think about that for a moment. We build entire narratives about past civilizations based on their garbage. Unless we get lucky and find a Pompeii, a place frozen in time by catastrophe, we’re mostly looking at what people didn’t care about enough to take with them. The things they valued? Those got repaired, reused, carried to the next settlement, passed on to the next generation. The things they cherished are gone.

And that’s a systems thinking problem hiding in plain sight.

When you dig into the lower layers of an ancient city, why would you expect to find a fully functioning household? You won’t. You’ll find broken things, lost things, discarded things. The system that produced those things, the daily routines, the decisions, the relationships between people, the trade networks, the knowledge passed from parent to child, all of that is invisible. You’re looking at a tiny fraction of the output and trying to reconstruct the whole.

Let me give you an example that brings this closer to home. Take a wooden bowl. A simple, beautiful, ancient wooden bowl sitting in a museum. What do you actually see? You see the wood that’s left. Unless it was preserved in a perfect environment, much has been lost that you can’t see anymore: the decoration, the finish, and often even the knowledge of what it was used for.
And if you zoom out: you don’t see the wood that was carved away. You don’t see the tree it came from, or who decided that this particular tree was right for bowls. You don’t see the techniques used to cut down the tree and prepare the bowl blanks. You don’t see the failed attempts, the ones that cracked or split or just didn’t feel right. You don’t see the tools that shaped it, the hands that held those tools, the years of practice behind those hands. You see one artifact. The system behind it is enormous and completely invisible.

This is what systems thinking tries to address. We need to see what we don’t see. Or at least, we need to be aware that what we see is never the full picture.

When we use DSRP thinking, we’re essentially training ourselves to look for the missing pieces. In practice you start with a simple model: an outcome A, or an action “when A, then B”. Very simple. Distinctions help us ask: what am I actually looking at, and what am I not looking at? Systems force us to zoom out: what’s the whole that this part belongs to? And to zoom in: what does it consist of? Relationships make us trace the connections: what led to this, and what does this lead to? How is it connected to the things around it? And Perspectives remind us that the archaeologist’s view is just one view. The woodworker who made that bowl had a completely different understanding of it. So did the people who used it.

The danger is that we forget we’re looking at outcomes of a much greater system. We treat the artifact as if it is the system. We see the bug and think we understand the quality problem. We see the test report and think we understand the product. We see the bowl and think we understand ancient life.

Next time you look at any artifact, whether it’s a piece of code, a test result, a process document, or yes, an old broken wooden bowl in a museum. Remind yourself: you’re seeing what’s left, not what was there. The interesting stuff is everything you can’t see. Train yourself to ask: what’s missing from this picture?

That’s where the real understanding begins.

Distinction – What Is? And Also, What Is It Not? The Six Moves – Part 1

This is Part 1 of a series where I apply six systems thinking moves to the AI landscape. Part 0 introduced the foundation: M = I × O and the Love Reality Loop. Now we get practical.

The first move is the Is/Is Not List. It’s a Distinction move from the DSRP framework. The idea is disarmingly simple. You take a thing, and you describe what it is. Then you describe what it is not. That’s it. You draw the boundary. You sharpen the edges. And in doing so, you often discover that the thing you thought you understood is much less clear than you assumed. Boundaries can be very fuzzy.

Let’s try it with “AI.”

The Mess We’re In

The term “AI” is used for everything. Your spam filter is AI. The recommendation engine on Netflix is AI. ChatGPT is AI. The anomaly detection running on a factory floor is AI. The coding agent that writes your React components is AI. A rules engine with three if-else statements and a marketing department is also, apparently, AI.

This is a problem. Because these things are fundamentally different. Different in how they work, how they fail, how you evaluate them, and what risks they carry. Lumping them together under one label makes it nearly impossible to have a useful conversation. When a vendor tells you their product is “AI-powered”, what does that actually mean? When your CEO says “we need an AI strategy”, which AI are they talking about? When a headline says “AI will replace developers”, what kind of AI do they mean?

Without an Is/Is Not list, you’re having a conversation where everyone uses the same word and means something different. And that’s not a conversation. That’s noise.

A Coding Agent: Is and Is Not

Let me apply this to the running example of this series. A coding agent.

A coding agent is a tool that generates code based on prompts and context, powered by a large language model. It is impressive at producing plausible-looking code fast. It is useful for boilerplate, for exploring approaches, for rubber-ducking a problem.

A coding agent is not a developer. It does not understand your architecture. It does not know why your team made certain decisions three years ago. It does not carry context from yesterday. It does not understand your regulatory constraints. It does not take responsibility for what it produces. It does not learn from your feedback in a lasting way. I wrote about this in my “Groundhog Day” post. You onboard the same junior every morning, and they’ve forgotten everything from yesterday.

Writing this down, in two columns, sounds almost too simple. But the effect is powerful. Because suddenly you see the gap between what the tool is and what people treat it as. And that gap is where the risk lives.

The Bigger Distinction

There is a broader Is/Is Not that I think the industry needs to make. Generative AI, meaning large language models, image generators, and the like, is not the same thing as classical machine learning. A predictive maintenance model running on sensor data in a factory has been trained on specific data for a specific task. It’s narrow, measurable, and deterministic. An LLM is broad, probabilistic, and non-deterministic. Same label. Completely different animal.

And then there’s the distinction most companies fail to make entirely. What is actually an AI problem in your organization? What is a data problem? What is a process problem that someone labeled as an AI opportunity because it sounds better in a board presentation? When your data is a mess, throwing an LLM at it doesn’t solve the data problem. It hides it. When your processes are broken, an AI layer on top creates the illusion of improvement while the root cause stays untouched.

The Is/Is Not list forces you to confront these differences. Not with opinions or gut feelings, but by writing it down. What is this thing? What is it not? Where is the boundary? For me, the beauty of systems thinking lies in this objectivity. We’ll come to the subjective side of systems thinking in the last post of this series, about perspectives.

Try It Yourself

Pick any concept from your current work that feels fuzzy. “Quality.” “DevOps.” “Agile.” “Technical debt.” “AI strategy.” Write two columns. Is. Is Not. You will be surprised how much clarity a few minutes of deliberate distinction-making can create.

Next up: Part 2, Zoom In. What’s actually inside the AI black box? What happens when you break it into its parts?

Semantics Are Underrated

My friend Damian Synadinos talks and writes about communication a lot. In the most wonderful ways. With excellent humor. And what I learned from him more than 10 years ago is that semantics matter. When he gave a talk, he would start by defining the topic, going into the details of the definition. He clearly defined the “What Is”. Why? Common understanding. The audience or the reader would have a clear expectation about the topic and its boundaries. Is/Is Not. Not many people do this at the start of their talks and articles. But I find that when they do, like Damian, it improves the overall experience.
