{"id":34092,"date":"2025-04-02T10:58:08","date_gmt":"2025-04-02T14:58:08","guid":{"rendered":"https:\/\/sep.com\/?p=34092"},"modified":"2025-04-02T10:58:08","modified_gmt":"2025-04-02T14:58:08","slug":"vibe-coding-evolved","status":"publish","type":"post","link":"https:\/\/hexboxdev.wpenginepowered.com\/blog\/vibe-coding-evolved\/","title":{"rendered":"Why Vibe Coding Fails &#8211; and How Signal Coding Fixes It"},"content":{"rendered":"\n<blockquote>\n<p>AI is an <em><strong>amplifier <\/strong><\/em>for whatever <em><strong>signal <\/strong><\/em>we humans provide. Garbage in, garbage out. Signal in, software out.<\/p>\n<\/blockquote>\n<p><em>&#8211; Robert Herbig, in the thing you&#8217;re currently reading<\/em><\/p>\n<p>If you haven&#8217;t heard of <em><strong>vibe coding<\/strong><\/em>, it&#8217;s an interesting concept: you prompt an AI agent to do all the coding and focus on fast iterations. Minimize time from thought to working application. Don&#8217;t worry about the code itself, only the application produced. I had to try it, just to see what the fuss was about. At first, it was incredible: the AI gave me working UIs, full features, even decent front-end styling. No scaffolding, no boilerplate, just results. An application that matched my vision with minimal effort.<\/p>\n<p>The code was a mess: no structure, no reuse, no best practices. But who cares? The app worked.<\/p>\n<p>Then I tried to change something and everything fell apart. The AI got stuck in what I call an <strong>entropy loop. <\/strong>Vibe coding stopped producing results and making progress. Each fix introduced new bugs or broke something else. 
I was knee-deep in brittle, incoherent code I didn\u2019t write and didn\u2019t want to debug.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-34096 aligncenter\" src=\"https:\/\/hexboxdev.wpenginepowered.com\/wp-content\/uploads\/2025\/04\/vibe_coding_entropy_loop-300x175.png\" alt=\"\" width=\"679\" height=\"396\" srcset=\"https:\/\/hexboxdev.wpenginepowered.com\/wp-content\/uploads\/2025\/04\/vibe_coding_entropy_loop-300x175.png 300w, https:\/\/hexboxdev.wpenginepowered.com\/wp-content\/uploads\/2025\/04\/vibe_coding_entropy_loop-1024x598.png 1024w, https:\/\/hexboxdev.wpenginepowered.com\/wp-content\/uploads\/2025\/04\/vibe_coding_entropy_loop-768x449.png 768w, https:\/\/hexboxdev.wpenginepowered.com\/wp-content\/uploads\/2025\/04\/vibe_coding_entropy_loop-1536x897.png 1536w, https:\/\/hexboxdev.wpenginepowered.com\/wp-content\/uploads\/2025\/04\/vibe_coding_entropy_loop-2048x1196.png 2048w\" sizes=\"auto, (max-width: 679px) 100vw, 679px\" \/><\/p>\n<p>Vibe coding got me velocity, but not stability or sustainability.<\/p>\n<p>Once I cleaned up the code (introduced structure, patterns, tests), the AI agent came alive again. Suddenly it could extend features, fix bugs, even refactor, all with minimal direction. That\u2019s when it clicked: these agents thrive on clean code, just like human developers.<\/p>\n<p>If I could keep the speed and inject intent, architecture, and quality, I could get the best of both worlds. That\u2019s <em><strong>Signal Coding<\/strong><\/em>.<\/p>\n<h1>Vibe Coding and the Drift Into Entropy Loops<\/h1>\n<p>Vibe coding is optimized for speed. We describe what we want, and the AI builds it. Minimal time spent on setup or planning, the AI will figure it out. It\u2019s pure iteration &#8211; we keep prompting until we get what we\u2019re after.<\/p>\n<p>This works well for early exploration: discovery, prototyping, validating an idea, generating UI scaffolds, or creating quick backends. 
We\u2019re not worried about code quality or structure and are entirely focused on visible results. And for short bursts, that tradeoff makes sense.<\/p>\n<h3>Self-reinforcing Entropy Loops<\/h3>\n<p>In practice, this approach doesn&#8217;t scale. What makes vibe coding powerful in the first few prompts is exactly what makes it fragile over time. The more we build this way, the faster the system drifts into <strong>self-reinforcing entropy loops<\/strong>. These AI systems fundamentally rely on clarity and structure to function effectively, yet the code produced through vibe coding inherently lacks both. Each round of changes increases the confusion, making it progressively harder for the AI to navigate its own creation.<\/p>\n<p>There are two broad reasons vibe coding breaks down:<\/p>\n<ol>\n<li><strong>The AI can\u2019t see enough of the system<\/strong>. Its context is narrow and stateless. It works locally, not holistically. The resulting breakdowns may ease with longer context windows, better memory, or improved agent-based tools.<\/li>\n<li><strong>The AI isn\u2019t capable of creating code it can reason about over the long term<\/strong>. Even if it writes working code, it doesn\u2019t lay foundations it can build on. There\u2019s no continuity, no internal consistency, no long-term structure. The result is a codebase that slowly becomes unworkable, even to the agent that created it.<\/li>\n<\/ol>\n<p>I&#8217;ll explain these in more detail later.<\/p>\n<h2>What Is Signal Coding (Really)?<\/h2>\n<p>Signal Coding is how we make vibe coding work past the first few prompts. Where vibe coding prioritizes output, Signal Coding prioritizes continuity. It\u2019s a collection of practices that help the AI produce systems it can still reason about later. 
We\u2019re not asking the AI to write perfect code, we\u2019re just giving it the conditions it needs to succeed over time.<\/p>\n<p>That means helping the AI build within a structure, keeping naming and abstractions consistent, prompting in smaller units, and resetting context between tasks. These aren\u2019t heavyweight processes. They\u2019re ways of injecting just enough signal to prevent drift. If vibe coding is about speed, Signal Coding is how we keep that speed from turning into churn.<\/p>\n<h2>Core Practices of Signal Coding<\/h2>\n<p>Signal Coding isn\u2019t about slowing down or reintroducing heavyweight processes. It\u2019s about adding just enough structure to keep the AI grounded. These practices aren\u2019t theoretical, they\u2019re practical ways to make sure the code we generate remains usable, extendable, and comprehensible, even as we move fast.<\/p>\n<h3>Plan and Structure First<\/h3>\n<p>We don\u2019t start with code. <strong>We start with a plan<\/strong>. Before we prompt, we discuss options with the AI, decide on an approach, and document those decisions. A simple <code>PLAN.md<\/code> file becomes the anchor for our design choices: naming, responsibilities, system boundaries, feature order, notes, etc. These plans don\u2019t have to be just text. Visuals, especially <a href=\"https:\/\/mermaid.js.org\/\">Mermaid diagrams<\/a>, can be a powerful addition. Humans benefit from seeing structure laid out visually, and AIs understand that format unusually well. It\u2019s one of the rare cases where the same artifact works equally well for both.<\/p>\n<p>Most crucially, <strong>we must keep it up to date as we go<\/strong>, which is easily done by telling the AI to update it after each change or commit. This file also serves as an efficient way for the AI to restore its context and \u201cremind itself\u201d where we left off.<\/p>\n<p><strong>This planning can be fractal<\/strong>. 
We do it at the feature level, but also within smaller scopes such as new components, tricky functions, or design pivots. At each level, a bit of structure makes the next step easier to express, for both us and the AI. The AI will easily understand nested <code>PLAN.md<\/code> files.<\/p>\n<h3>Prompt Intentionally<\/h3>\n<p><strong>We prompt for small, focused changes<\/strong> &#8211; roughly the size of a clean commit. Each step should be easy to understand, easy to review, and easy to course-correct. This incremental flow helps us shape the system without stepping out of our architectural mindset. We stay high enough to guide design and behavior, but close enough to see what\u2019s actually changing.<\/p>\n<p>When we describe a change, we don\u2019t just say what we want, we also say <strong>how we want it structured<\/strong>. We describe boundaries, layers, and intent. The AI will still generate the code, but it\u2019s doing so inside a frame we\u2019ve chosen. This can often be simplified or omitted if we\u2019ve discussed the plan and design ahead of time with the AI (such as the aforementioned <code>PLAN.md<\/code> file).<\/p>\n<p><strong>Treat each new feature or task as a fresh context<\/strong>. Just like we mentally reset when we move on to a new task, we need to reset the AI\u2019s context. A new session helps avoid context drift and prevents unrelated decisions or code from earlier prompts from interfering with the current task.<\/p>\n<h3>Inject Durable Signal<\/h3>\n<p><strong>Define names with intent<\/strong>. That doesn\u2019t mean naming every variable ourselves &#8211; it means introducing meaningful project-specific concepts and domain-specific terms to the AI early in the process, so it has clear anchors to build on. When the model sees those concepts used with clarity and consistency, it\u2019s more likely to reuse them correctly across files and features. 
This reinforces structure without requiring us to manage every detail.<\/p>\n<p><strong>Encourage the AI to use abstraction early<\/strong>. If we see the AI repeating logic, we must nudge it to extract helpers or reusable components. Even light scaffolding helps prevent drift later.<\/p>\n<p><strong>Use types and tests as constraints<\/strong>. These don\u2019t just validate behavior &#8211; they reinforce expectations. They\u2019re part of the prompt history, and they shape how the AI reasons about the system.<\/p>\n<p><strong>Comments and documentation matter, too<\/strong>. We must not overdo them, but include enough to explain why things exist and how they\u2019re meant to be used. Inline docstrings, file headers, and light <code>README.md<\/code>\u00a0notes all help stabilize the AI\u2019s understanding of the system.<\/p>\n<h3>Stabilize and Refactor<\/h3>\n<p><strong>We don\u2019t wait for the code to break before cleaning it up<\/strong>. Refactoring is how we reinforce the patterns we want the AI to follow. If a function name is ambiguous or overloaded, tell the AI to clarify it. If logic is scattered or duplicated, tell the AI to consolidate it. These aren\u2019t just maintenance tasks, they\u2019re opportunities to clarify our signal. We can even discuss refactoring options with the AI, exploring tradeoffs before committing to a change. That dialogue helps align the model with our intent and keeps us in control of the system\u2019s shape.<\/p>\n<p><strong>Manage the code&#8217;s surface area<\/strong>. We collapse one-off experiments once we\u2019ve chosen a direction, clean up abandoned files, and remove unused code. The more we reduce noise, the more clearly the AI can hear our intent.<\/p>\n<h2>Why Vibe Coding Fails Over Time<\/h2>\n<h3>Failures Driven by Limited Context<\/h3>\n<p>Some problems with vibe coding are caused by the AI\u2019s limited context. 
These are failure modes we can expect to fade as models gain broader memory, better tool use, and long-term reasoning capabilities.<\/p>\n<p>The first is a bias toward <strong>fix-it-where-you-see-it<\/strong>. The AI makes changes where the problem appears, not where it originates. It applies local fixes instead of systemic ones. It rarely steps back to consider architectural implications or broader effects. This is a direct result of narrow and mostly stateless context. Each prompt is a short-term reaction, not a holistic adjustment.<\/p>\n<p>The second is <strong>inconsistent use of abstractions and reuse<\/strong>. Sometimes the AI writes good helpers or modular components. Other times it reimplements logic that already exists, slightly differently, in another file. Whether it reuses existing code often depends on whether that code is visible in the current context window. When it\u2019s not, the AI reinvents &#8211; often inconsistently, and without keeping the duplicated code in sync.<\/p>\n<p>These issues are real, but they\u2019re not fundamental. With broader memory, better indexing tools, or agentic workflows, we can reasonably expect these limitations to diminish. But even if they\u2019re solved, they won\u2019t be enough on their own.<\/p>\n<h3>Failures Intrinsic to Vibe Coding<\/h3>\n<p>Other failures aren\u2019t just about limited context; they\u2019re baked into the approach itself. Vibe coding optimizes for fast, one-shot results. 
That mindset, by default, produces systems that break down under pressure.<\/p>\n<p>The biggest issue: <strong>the AI will make the code work &#8211; at any cost<\/strong>. It:<\/p>\n<ol>\n<li>patches over problems instead of solving them<\/li>\n<li>hardcodes values to make tests pass<\/li>\n<li>mocks things that shouldn&#8217;t be mocked<\/li>\n<li>builds fragile layers of workaround logic, because that\u2019s the fastest way to produce a passing output<\/li>\n<\/ol>\n<p>These decisions aren\u2019t bugs; rather, they\u2019re a side effect of asking the AI to prioritize results over reasoning or long-term stability.<\/p>\n<blockquote>\n<p>AI doesn&#8217;t know anything. It simulates understanding. What it actually wants is approval.<\/p>\n<\/blockquote>\n<p style=\"text-align: left\">&#8211; <em>Jesse James Garrett<\/em><\/p>\n<p>Vibe coding also promotes a kind of <strong>structural drift<\/strong>. It lacks the perspective of a software architect, and without a persistent design philosophy, there\u2019s no plan, no intentional layering, no shared metaphors across modules. Even if the AI generated each part successfully, the system as a whole becomes harder to reason about. Every change increases the odds of breakage or contradiction.<\/p>\n<p>And crucially: <strong>these problems don&#8217;t go away with more context<\/strong>. You can give a model the entire codebase and it will still optimize for the shortest path to a working result. That\u2019s what we\u2019re asking it to do. Without guardrails or architectural signal, it will solve local problems and accumulate global debt.<\/p>\n<p>This is why entropy loops form and why they\u2019re inevitable without a change in our workflow.<\/p>\n<h3>Symptoms of the Entropy Loop<\/h3>\n<p>As we go deeper into a vibe-coded system, certain failure modes may start to appear. 
These are signs that the system is beginning to resist change &#8211; that the AI is struggling to navigate the very code it created:<\/p>\n<ol>\n<li><strong>The AI starts to misunderstand its own output.<\/strong> It misinterprets variable and function names, revisits or undoes changes it already made, and cycles through attempts that never quite resolve the issue. We see it get stuck, changing things just to change them, with no meaningful progress. This kind of churn burns time and money that could be better spent elsewhere.<\/li>\n<li><strong>It begins to duplicate logic instead of reusing it.<\/strong> Even when equivalent code already exists, the AI reimplements it in slightly different ways. This usually happens when relevant logic is outside the current context, but even when it isn&#8217;t, the model can fail to use its own prior work. The result is subtle violations of DRY that fragment the codebase and make future changes harder to coordinate.<\/li>\n<li>Even once we recognize that we&#8217;re in an entropy loop, <strong>it&#8217;s often easier to delete the code than to fix it<\/strong>. The AI struggles to clean up its own mess. It can&#8217;t refactor the broken logic it produced earlier, because it no longer understands how the pieces fit together. We can prompt it to fix things, but the result is usually more churn. At that point, starting over is often the best path forward.<\/li>\n<li><strong>Prompting becomes an exercise in micromanagement<\/strong>. We spend more time crafting fragile, overly specific prompts than building features. Instead of working with the AI, we\u2019re fighting it &#8211; trying to nudge it toward something coherent without breaking everything else. This completely breaks the feedback loop that makes vibe coding powerful. 
Instead of staying focused on high-level strategy and iteration, we&#8217;re forced back into low-level debugging and reactive cleanup.<\/li>\n<\/ol>\n<p>These symptoms don\u2019t always arrive all at once, but when they start to cluster, we\u2019re in an entropy loop. The code is no longer helping the AI think, it\u2019s getting in the way.<\/p>\n<h2>Signal Coding: Breaking the Entropy Loop<\/h2>\n<p><strong>Vibe coding<\/strong> unlocks speed, but without structure, that speed turns into churn. We start fast, but we stall. <strong>Signal Coding<\/strong> is how we keep moving.<\/p>\n<p>This isn\u2019t about slowing down. It\u2019s about avoiding <strong>entropy loops<\/strong>. Staying strategic. Staying architectural. With a few lightweight practices we give the AI what it needs to keep helping us. We keep our systems navigable, testable, and extendable, even as we move fast.<\/p>\n<p>You don\u2019t need to adopt all of these practices at once. Start small: document your next feature in a <code>PLAN.md<\/code>, prompt for a single focused change, code review the result, refactor, and reset the AI context when switching tasks. Even just these habits will make vibe coding far more stable, predictable, and scalable.<\/p>\n<p>Once you feel that loop open up again &#8211; fast, clear, focused &#8211; you\u2019ll never want to go back.<\/p>\n<p>If you give it a try, I\u2019d love to hear what works for you and what doesn\u2019t!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI is an amplifier for whatever signal we humans provide. Garbage in, garbage out. Signal in, software out. &#8211; Robert Herbig, in the thing you&#8217;re currently reading If you haven&#8217;t heard of vibe coding, it&#8217;s an interesting concept: you prompt an AI agent to do all the coding and focus on fast iterations. 
Minimize time [&hellip;]<\/p>\n","protected":false},"author":71,"featured_media":34099,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"content-type":"","associated_team_member":0,"associated_user_id":0,"footnotes":""},"categories":[288,281,268],"tags":[287,463,382],"service":[],"class_list":["post-34092","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","category-practices","category-programming","tag-ai","tag-generative-ai","tag-programming"],"_links":{"self":[{"href":"https:\/\/hexboxdev.wpenginepowered.com\/wp-json\/wp\/v2\/posts\/34092","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hexboxdev.wpenginepowered.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hexboxdev.wpenginepowered.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hexboxdev.wpenginepowered.com\/wp-json\/wp\/v2\/users\/71"}],"replies":[{"embeddable":true,"href":"https:\/\/hexboxdev.wpenginepowered.com\/wp-json\/wp\/v2\/comments?post=34092"}],"version-history":[{"count":0,"href":"https:\/\/hexboxdev.wpenginepowered.com\/wp-json\/wp\/v2\/posts\/34092\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/hexboxdev.wpenginepowered.com\/wp-json\/wp\/v2\/media\/34099"}],"wp:attachment":[{"href":"https:\/\/hexboxdev.wpenginepowered.com\/wp-json\/wp\/v2\/media?parent=34092"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hexboxdev.wpenginepowered.com\/wp-json\/wp\/v2\/categories?post=34092"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hexboxdev.wpenginepowered.com\/wp-json\/wp\/v2\/tags?post=34092"},{"taxonomy":"service","embeddable":true,"href":"https:\/\/hexboxdev.wpenginepowered.com\/wp-json\/wp\/v2\/service?post=34092"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}