Over the last few weeks I wrote a lot. About vibe coding. About outsourcing testing to AI. About the 10x productivity myth. About ownership, lock-in, and the bathtub that nobody is watching. I didn’t plan it as a series. But looking back, all of these posts are connected. They are all attempts to explain the current state of my mental model of AI, and especially to raise awareness of certain risks.
And now I want to take a small series to explain the basics of my current favorite systems thinking approach. Not just what I see when I look at the AI landscape, but the thinking process that leads me there.
Information Is Not Understanding
We are drowning in information about AI. Every day. LinkedIn is full of it. YouTube is full of it. Everyone has an opinion, a framework, a prediction. The signal-to-noise ratio is terrible, and the noise is winning. But here’s the thing. Even if you filter perfectly and only consume the best, most relevant information, that alone will not give you understanding.
There is a formula that has stuck with me since I heard it on the Cabrera Lab podcast by Drs. Derek and Laura Cabrera.
M = I × O
Your mental model (M) equals the information you have (I) multiplied by how you organize it (O). That’s it. And the interesting part is not the information. It’s the organization.
Let me give you an example. Think of two people reading the same article about coding agents. Same information. One person reads it, nods, files it somewhere in the back of their mind under “AI is cool” or “AI is dangerous”, depending on their existing bias. The other person reads it and starts asking questions. What exactly is a coding agent? What is it not? What are its parts? What system does it live in? Who benefits? Who loses? What relationships are at play? The same information, organized differently, leads to a completely different mental model.
When you master O, the I becomes almost secondary. You can take even a small amount of information and build something useful with it. And you can take a mountain of information and build nothing, if you lack the skill to organize it. O is the skill. O is what you practice. And most people never practice it on purpose. They just consume more I.
The Love Reality Loop
There is another concept from the Cabreras that I want to introduce here. It’s the engine behind everything I wrote recently. They call it the Love Reality Loop.
All models are wrong, but some are useful.
– George Box, 1976
The idea is simple but uncomfortable. Your mental model is never a perfect reflection of reality. It can’t be. Reality is too complex, and our models are always simplified. That’s fine. The question is: what are you pointing your model at? Are you trying to match reality? Or are you trying to match what you want reality to be?
The Love Reality Loop says: love reality enough to let it correct you. Accept it as the target. You don’t have to like what reality shows you. But you have to accept it. Two mechanisms make this work. There are feedback loops, where reality sends signals back to your model. Your code breaks. Your deployment fails. The customer complains. These are signals. And there are fit-back loops, where you actively adjust your model to better match what you observe. You change your assumptions. You redraw the boundary. You update your understanding.
When both loops work, your mental model gets better over time. It fits reality more closely. You make better predictions. You see risks earlier. You ask better questions.
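The two loops can be caricatured in a few lines of code. This is my own toy sketch, not anything from the Cabreras: reality sends an error signal (the feedback loop), and a fit-back rate decides whether the model actually uses that signal to adjust itself.

```python
# Toy illustration (my own sketch, purely illustrative): a mental model that
# receives feedback but never fits back stays exactly as wrong as it started,
# while one that adjusts converges toward reality.

REALITY = 100.0  # the state of the world the model is trying to match

def run(fit_back_rate: float, steps: int = 50) -> float:
    model = 10.0  # an initial, badly wrong mental model
    for _ in range(steps):
        feedback = REALITY - model         # feedback loop: reality signals the gap
        model += fit_back_rate * feedback  # fit-back loop: adjust the model (or not)
    return model

stuck = run(fit_back_rate=0.0)    # ignores every signal
learner = run(fit_back_rate=0.2)  # adjusts a little each time
print(round(stuck, 2), round(learner, 2))  # → 10.0 100.0
```

With a fit-back rate of zero, no amount of feedback helps; the signals arrive and nothing changes. Any nonzero rate, applied consistently, closes the gap. That is the whole point of the loop: feedback alone is worthless without the willingness to fit back.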
And here is where the AI hype becomes a problem. Many people in this space have stopped fitting back. They fell in love with their model of AI, not with reality. The model says “10x productivity.” Reality says the bugs just arrive later and more expensively. The model says “we need fewer engineers.” Reality says the remaining engineers are drowning.
When you stop fitting back, your mental model drifts. And the further it drifts from reality, the worse your decisions become. The hype machine is essentially a broken Love Reality Loop at scale. Millions of people pointing their mental models at what investors, vendors, and influencers want reality to be, rather than reality itself.
So How Do You Practice O?
This is the practical part. Systems thinking, specifically the DSRP approach from the Cabreras, gives you the O. Four patterns of organizing information: Distinctions, Systems, Relationships, and Perspectives. I have written about these before, and if you want the basics, my earlier posts on the kitchen door example and the systems thinking introduction are a good start.
But there is a layer on top of the four patterns that I haven’t written about yet. The Cabreras and their research lab have identified what they call the “6 foundational mental moves.” Think of them as exercises. Like push-ups or sit-ups, but for your thinking. Research from Cabrera Lab shows that practicing these six moves can increase cognitive complexity. The idea is a Pareto principle for thinking: 20% of the effort gets you 80% of the results.
The six moves are:
Is/Is Not List (a Distinction move): Write down what something is and what it is not. Draw the boundary. Sharpen the edges.
Zoom In (a System move): Take something and break it into its parts. What is it made of?
Zoom Out (a System move): What is this thing a part of? What larger system does it sit in?
Part Party (a System + Relationship move): You’ve listed the parts. Now draw the relationships between them. How do they interact? Where are the feedback loops?
RDS Barbell (a Relationship + Distinction + System move): Take a relationship, turn it into a thing in its own right, and then examine its parts. Don’t just note that A connects to B. Crack open the connection itself.
P-Circle (a Perspective move): Lay out all the perspectives. Who is looking? From where? What do they see? And which perspectives are missing?
These are the six moves. And in the next six posts, I will take you through them one by one. I will name each move explicitly, explain it, and then apply it to the AI landscape as I see it, with 25 years of experience watching hype cycles come and go.
Why This Series
This is me going meta. I have become quite fond of systems thinking and of getting better at it, and I want to share that journey. For two reasons: writing about systems thinking sharpens my own thinking about it, and someone else might take a thing or two away from it.
If you’ve read my recent posts, you’ve seen the output of my thinking. The ownership problem. The bathtub that overflows. The fence that’s back. The lock-in catastrophe. This series will show you more behind the scenes. How I arrived at the mental model I had at that point in time.
Next up: Part 1, the Is/Is Not List. What is AI? And more importantly, what is it not?
