Two Frameworks Walk Into the Same Room
On AI coordination, the Toyota production system, and the argument Sangeet Paul Choudary almost completes
Image Credit: Google Gemini, prompted by Srini Koushik
I want to start with a confession. I’ve been following Sangeet Paul Choudary’s work for a while. His platform economics thinking is the kind that holds up when you press on it, which is rarer than it should be. So when he published Reshuffle in July 2025, I read it closely.
My first reaction wasn’t “interesting new argument.” It was recognition. Someone had built a rigorous economic framework around something I’d been watching from the inside for years, and arrived at a strikingly similar place.
That’s worth unpacking. Not to claim credit, but because when two frameworks built from entirely different starting points converge, it usually means something real is underneath them.
Here’s where I’ve been coming from.
For several years, the core of my work has been a simple observation: Silicon is evolving faster than Carbon. The machines are getting smarter faster than the humans working alongside them are developing the capacity to think with them. Not use them. Think with them. That distinction matters enormously, and I’ll come back to it.
That observation led to a second one. Most organizations aren’t making an AI adoption mistake. They’re making a system design mistake. They see a new capability and ask: what tasks can this handle? Which roles can we augment? How do we slot this into what already exists? It’s the same question General Motors asked in the 1980s when it poured forty billion dollars into industrial robots and preserved exactly the wrong things.
I’ve spent a lot of time with the Toyota Production System. I was introduced to it during my MBA in 1993, experienced it with automotive clients at IBM in the ’90s, and got to operationalize it in 2006 at the Columbus, Ohio-based Nationwide Development Center (NDC), which Nationwide built on Lean Management principles. An interesting side note: by 2008, the NDC was the first center in North America to use 100% Lean Software Development and be CMMI Level 3 certified.
The core principle at the NDC was that TPS isn’t primarily about robots or automation. It’s about flow, about identifying where value actually moves, about eliminating the waste that accumulates when systems are designed around the wrong questions. Toyota’s insight wasn’t “these machines can replace workers.” It was “these machines change the logic of the entire production system. What does the system become if we design around the new capability rather than the old workflow?”
That’s a systems thinking question. It was the right question at the Nationwide Development Center, and it’s also the right question for AI.
So when Choudary published Reshuffle in July 2025, then the Kyndryl piece in January 2026, then the HBR article in February, I read all three carefully. He was working the same territory from a different direction.
What Choudary Built
The central argument in Reshuffle: AI should be understood primarily as a coordination mechanism, not merely a tool for automation. Automation affects tasks. Coordination reshapes entire workflows, organizations, and economies. The real story isn’t compounding scale. It’s cascading coordination, where each solved coordination problem unlocks the next layer of opportunity.
He traces this through containerization. Standardized shipping containers didn’t just make ports more efficient. They cascaded into intermodal transport, global supply chains, just-in-time inventory, component specialization, and ultimately the semiconductor industry. The container solved one coordination problem. Everything else followed.
His AI argument runs the same logic. AI makes translation cheap and general, extracting structure from unstructured information, enabling coordination without requiring consensus on shared standards, tools, or workflows. Teams, systems, and data that previously couldn’t work together because their vocabularies didn’t match can now be combined without forcing agreement.
Then in the Kyndryl piece, he lands on Toyota directly. Which, given where I’d been coming from, felt like the frameworks shaking hands.
GM treated robots as compliant labor substitutes. Toyota asked what the entire system could become once the capability entered it. They reconfigured layouts, redesigned work cells, and shifted human workers from task execution to managing production lines, detecting variation, correcting flow, and governing quality. Same robots. Completely different outcome.
His point: most organizations are making GM’s mistake with AI. Treating agents as digital co-workers. Slotting them into existing roles. Asking augmentation questions when they should be asking architecture questions.
This is the automation trap I’ve watched organizations fall into repeatedly. The instinct to reach for AI as a faster version of what already exists, rather than asking what becomes possible when the constraint changes. Choudary names the same trap from the economics side.
And then he goes further than most people writing about AI and humans dare to go. He explicitly rejects the augmentation frame: “Augmentation assumes the worker will always be at the center of the workflow.” As agentic capabilities improve, workers don’t get augmented. They get reallocated to the frontier, the boundary region where AI execution breaks down and human capabilities like judgment, interpretation, and governance create distinct value.
That’s an honest argument. It’s also where the two frameworks diverge.
The Frontier Problem
Choudary identifies the frontier clearly. The customer support rep handling the distressed customer who can’t articulate why they’re upset. The urban planner mediating between incompatible visions of what a city should be. The structural engineer evaluating what the AI coordination layer just flagged. He even uses the pilot example: with autopilot handling the majority of flight miles, evaluating pilots on flight hours no longer makes sense. What matters is their ability to manage disruptions and the edge cases the automation wasn’t designed for.
He’s right about all of this. And here’s the question he doesn’t follow through on.
If pilots aren’t building judgment through the repetition of active engagement, because autopilot is doing the flying, what are they doing to stay sharp for the moments that matter?
The Toyota Production System answers this directly, even if TPS doesn’t use this language. Toyota didn’t just redesign the factory architecture. They redesigned how humans developed mastery inside it. Workers shifted from executing tasks to monitoring systems, reading signals, intervening with judgment rather than procedure. That wasn’t a role reassignment. It required deliberate investment in a different kind of capability, and a system designed to develop and sustain it.
At Right Brain Labs, we’ve been working on this problem from the human side. The Think-Do-Learn-Adapt loop is the core of how mastery gets built. In any domain, you develop capability by cycling through thinking, doing, learning from the result, and adapting. That loop is how a pilot develops instinct. How a surgeon develops procedural judgment. How a Toyota line worker learns to read a production system rather than just execute within it.
The challenge AI creates is specific: it makes it easy to get out of that loop. The analyst whose AI synthesizes signals before they see the raw data. The adjuster whose model resolves 85% of claims before escalation. The engineer whose coordination layer flags design conflicts automatically. These people are nominally still in the loop. The question is whether they’re still in the Think-Do-Learn-Adapt loop.
That’s a different question. The answer determines whether the human value at Choudary’s frontier is real or assumed.
Cognitive Rust
There’s a name for what happens when the loop breaks down: cognitive atrophy. It’s documented in aviation, radiology, financial analysis, and surgical training. The pattern is consistent. As AI handles increasing percentages of routine execution, humans stop practicing the underlying thinking that makes their judgment valuable at the frontier. The capability doesn’t disappear overnight. It erodes quietly.
Until the system fails at scale and the human who was supposed to be the backstop reaches for judgment they haven’t exercised in two years.
We call this the Cognitive Rust Belt. Not job loss. Capability loss. And it’s the risk that lives inside Choudary’s coordination argument, not because his argument is wrong, but because coordination without AI Fluency is a liability dressed as efficiency. The system gets faster. The humans inside it get weaker. Nobody notices until something goes wrong where it matters most, at the frontier, where human judgment was supposed to be the final layer.
The real danger with AI isn’t what it does. It’s what it lets you stop doing.
Choudary’s answer to the frontier problem is capability sensing and talent reallocation. Leaders developing better mechanisms to detect which human capabilities are rising or falling in value and repositioning people accordingly. That’s a structural answer. Correct as far as it goes.
Our answer is different. You cannot reallocate people to the frontier and assume capability follows. The humans being repositioned have to be capable of operating there, and staying capable as the frontier moves. That requires deliberate investment in AI Fluency: the practiced capacity to think with AI, not just use it.
Not tool training. Not a prompt engineering workshop. A sustained practice of staying in the Think-Do-Learn-Adapt loop even as AI makes it easy to exit. The same way pilots stay sharp through simulation when autopilot does the flying. The same way surgeons maintain procedural judgment through deliberate exposure even as robotic systems handle more of the technical execution.
Toyota didn’t just redesign the factory. They redesigned how humans developed mastery inside it. That’s the piece that made the whole system durable. And it’s the piece Choudary’s framework stops short of.
The Completion
Choudary outlines three strategies for incumbents facing AI-driven coordination shifts: become the translation layer, double down on accountability, or fragment and tax. All three are structurally sound. All three require something his framework doesn’t fully address: humans who can operate at the frontier with genuine, sustained capability.
The organization that builds the translation layer without building the human capacity to govern it has optimized itself into fragility. The one that doubles down on accountability without people who can exercise real judgment under pressure is writing checks its people can’t cash.
This isn’t a counter to Choudary’s argument. It’s what the argument needs to be complete. The architectural redesign and the human capability development have to happen together. Neither works without the other. That’s the Toyota lesson he invokes, and the one the AI coordination argument needs to fully absorb.
Silicon is evolving faster than Carbon. That’s not an argument against building the coordination layer. It’s the strongest possible reason to be deliberate about what happens to the people inside it while you do.
Two frameworks. Different starting points. Same room. The question isn’t which one is right. It’s what you build when you take both seriously.