Invisible Blockers for AI Coding - Our Workflow

February 10, 2026 | 7 min read

Everyone talks about context engineering and writing the perfect prompt. But AI coding often gets messy right after we run that first prompt and the LLM generates a supposed solution.

The generated solution looks good at a high level, but then come the patches, the fixes, the regenerations. Each one adds context pollution; each one moves further from the original intent.

This happens even when your context engineering is solid. The problem starts after the first run, in the iterations that follow. That’s where bad habits sneak in. That’s where it stops feeling like engineering and starts feeling like damage control.

We explore these patterns in our latest video:

The Pattern Nobody Talks About

AI coding workflows have an awkward shape. We start with careful context engineering — clean prompts, clear requirements, well-structured plans. Then the agent generates something. And then… we wing it.

We patch. We iterate. We try slight variations. We mix in side-concerns. We keep sessions open for days because we’re afraid to lose what’s in there.

Let us show you the five rabbit holes we see in almost every team — and what actually connects them.

Rabbit Hole #1: The Same Fix Every Time

We’ve all been there. AI generates code with the wrong style. We fix it manually. Next generation? Same mistake. We fix it again. And again.

Wrong imports. Misunderstood architecture patterns. Style inconsistencies. Every time, we’re playing code whack-a-mole.

The instinct is to keep fixing the output. But the real problem isn’t the generation — it’s our project context files. Our .claude file. Our Cursor rules. Our Copilot instructions.

The exit isn’t patching. It’s meta work: updating the context that guides every future generation.

The block: Meta work doesn’t feel like real work. You’re focused on shipping a feature, not updating setup files. It’s like stopping mid-run to tie your shoes — technically the right move, but it breaks momentum.

What helps: Share the maintenance load with your team. Make project context files a shared artifact, not something each developer maintains alone. We covered this in depth in our project context video.
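As a concrete illustration, a project context file can turn each recurring manual fix into a standing rule. The file name and the rules below are hypothetical examples, not a prescribed format — each tool has its own convention:

```markdown
# Project context (e.g. CLAUDE.md or .cursorrules — tool-specific)

## Code style
- Use absolute imports from `src/`; never relative `../` imports.
- Format dates with the shared `formatDate` helper, not inline logic.

## Architecture
- UI components never call the API directly; always go through the service layer.

## Process
- Run the linter before declaring a task done.
```

The habit that makes this work: whenever you fix the same generation mistake a second time, the fix goes into this file, not just into the code.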

Rabbit Hole #2: The Bug Loop

AI couldn’t fix a bug on the first try. We tell it that didn’t work. It tries again. Still broken. We keep going. “That didn’t work either.” “Try this instead.”

It feels close, but that feeling is often a lie.

These endless refinement attempts stop us from thinking. We’re just chatting. The LLM gets distracted by its own previous attempts. Context pollution accumulates. Our local codebase fills with desperate patches.

If we’re not adding new information — actual debug outputs, our own investigation, a way for the LLM to verify the fix — we’re just watching it guess randomly.

What helps: Stop. Redo from scratch with fresh context and your own investigation. An easy hack to detect when the LLM is lost: ask it how confident it is that it can find the bug, on a scale from 1 to 10. It will justify its answer. If it’s below 9, it’s time to restart.
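In practice, the confidence check is a single follow-up message. The exact wording below is just one way we might phrase it, not a fixed recipe:

```
On a scale from 1 to 10, how confident are you that you can find
and fix this bug with the information currently available?
Justify your rating before doing anything else.
If information is missing, tell me what I should provide.
```

The justification matters more than the number: it usually reveals whether the model is reasoning from the code or just guessing.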

Rabbit Hole #3: The Massive Sloppy One-Shot

We ask an agent to generate a big feature. Not huge, but… big. The agent starts. And then starts forgetting things. Hallucinates details. Makes architectural mistakes.

Now we’re patching. “Remove this.” “Add that.” “Just change this one more thing.”

We end up with a messy result. We patch code, remove leftovers and pieces that don’t quite fit. And we always underestimate the time spent testing, reviewing, and refining these franken-features.

What helps: Task engineering. Accept that the task was too big. Break it into smaller pieces instead of force-patching it through. This feels like redoing work. But it’s almost always faster — and definitely cleaner — than trying to salvage a generation that was doomed from the start.

Rabbit Hole #4: The Context Mix

We’re working on a feature. We notice a small side issue. We fix it in the same session. Feels efficient.

But now our context is polluted. The agent gets distracted. It makes weird decisions on our actual main task.

There’s an even more dangerous version: generating test cases in the same session that generated the code under test. The problem? That session has the same blind spots it had while writing the code. Only fresh context can catch the corner cases the original session missed.

What helps: Split out. Start new sessions. Get a fresh brain for new concerns. Most tools make this trivially easy. But we resist it because we’re afraid to lose what’s in the current context. That fear keeps us in polluted sessions — which defeats the whole point.

Rabbit Hole #5: The Freestyle Agent Loop

We start an agentic run. But the agent doesn’t choose the right execution steps. It skips test cases. Forgets to run the linter. Misses verification steps.

So we manually correct it in the next message. “Run the tests.” “Don’t forget the linter.”

Now we’re in a manual flow. Every agentic run becomes roulette. The agent might do something — or it might not.

We once had an agent decide to push every little change directly to GitHub just because we’d asked it to do that once. While that particular example was dangerous, the usual problem is just lower predictability. If the agent misses verification steps, you end up manually checking things the agent should have handled.

What helps: Define the execution loop upfront. Make verification steps explicit. Create a harness the agent can use to validate its own work. Put it in the plan before you start prompting — don’t let the agent freestyle.
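One way to make the loop explicit is a short section in the plan itself that the agent must follow after every change. The commands below are placeholders for whatever your project actually uses:

```markdown
## Execution loop (run after every change)
1. Build: `npm run build` — must succeed before moving on.
2. Test: `npm test` — all tests green; add a test for new behavior first.
3. Lint: `npm run lint` — fix warnings, do not suppress them.
4. Verify: re-read the changed files against the requirements in this plan.

Never commit or push; leave version control to the developer.
```

With the loop written down, a skipped step is a visible deviation from the plan rather than something you have to catch by watching the chat.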

Why We Keep Falling In

So why do these five patterns keep happening? What connects them?

Here’s what we’ve observed: we fall back to treating AI coding like chatting with a human — like a natural conversation that flows with a partner who has good memory.

This is a pattern we know from real life. It’s comfortable. It’s intuitive.

But it’s not true for LLMs. They don’t have good memory yet. They have context windows. And without active management, context rots fast.

That’s the shift. Not just context engineering — context management.

What Actually Works

Once you see this pattern, the solutions become obvious. But there’s a difference between understanding them intellectually and actually applying them when you’re deep in a coding session.

Here’s what turns these insights into practice:

Make context management visible and systematic.

That means three things working together:

First, externalize the stable parts of your workflow. Not the tools — those change weekly. But the patterns underneath: when to redo a prompt, when to split out a session, when to stop and update project context. The developers who join our trainings often say the same thing: “I knew most of this stuff, but I wasn’t applying it reliably.” Knowing and doing are different problems.

Second, use plans as living documents, not just initial setup. Most tools now have plan modes. But we treat them like initial blueprints — create once, then forget. The key is updating plans mid-session: for redos with changed requirements, for split-outs to handle side concerns, for refining execution loops. Plans aren’t just for the start. They’re your externalized memory throughout the session.

Third, treat project context files as shared team artifacts. This is where the meta work becomes sustainable. When one developer fixes the same style issue three times, they update the context file once — and everyone benefits. We’ve watched developers keep chat sessions open for weeks because they were afraid to lose what was in there. When tools start auto-compacting your sessions because there’s too much context, that’s your signal: externalize and start fresh.

These aren’t tips. They’re a practice. And like any practice, the value is in making it repeatable.

From Patching to Control

AI coding isn’t about chatting. It’s about context management — beyond the initial prompt.

The shift isn’t from AI assistance to perfect generations. It’s from patching to control. From winging it after the first run to having a systematic approach that scales.

If you or your team want to build systematic AI coding workflows — not just learn techniques, but make them stick in practice — we can help.

👉 Explore our AI Coding Training & Guided Adoption

💼 Follow us: EclipseSource on LinkedIn

🎥 Subscribe to our YouTube channel: EclipseSource on YouTube

Stay Updated with Our Latest Articles

Want to ensure you get notifications for all our new blog posts? Follow us on LinkedIn and turn on notifications:

  1. Go to the EclipseSource LinkedIn page and click "Follow"
  2. Click the bell icon in the top right corner of our page
  3. Select "All posts" instead of the default setting

Jonas, Maximilian & Philip

Jonas Helming, Maximilian Koegel and Philip Langer co-lead EclipseSource, specializing in consulting and engineering innovative, customized tools and IDEs, with a strong …