<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://jsr6720.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://jsr6720.github.io/" rel="alternate" type="text/html" /><updated>2026-03-01T19:58:40+00:00</updated><id>https://jsr6720.github.io/feed.xml</id><title type="html">James’ Thoughts</title><subtitle>The digital journal of James Rowe. Something between WordPress, LiveJournal and gists.</subtitle><author><name>James Rowe</name></author><entry><title type="html">The Ownership Cost of AI Clones</title><link href="https://jsr6720.github.io/ownership-costs-of-ai-clones/" rel="alternate" type="text/html" title="The Ownership Cost of AI Clones" /><published>2026-03-01T19:25:33+00:00</published><updated>2026-03-01T19:25:33+00:00</updated><id>https://jsr6720.github.io/ownership-costs-of-ai-clones</id><content type="html" xml:base="https://jsr6720.github.io/ownership-costs-of-ai-clones/"><![CDATA[<p>Twenty-five years ago, Joel Spolsky wrote that the single worst strategic mistake any software company can make is <a href="https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/">rewriting the product from scratch</a>. Now, with AI, the prevailing opinion is that companies will clone SaaS products internally, displacing the market.</p>

<p>But cloning functionality is the easy part. Ownership is not. Even if the cost of generating entire products goes to zero, the ownership cost doesn’t. Cloning with AI is only a good deal if you’ve budgeted for what comes next.</p>

<p>Regardless of <em>how</em> functionality is acquired, whether we build or buy, the question is the same: “Do we have time to own this?”</p>

<h2 id="the-grand-rewrite-now-with-ai">The Grand Rewrite, Now with AI</h2>

<p>I tested the hypothesis that AI-generated clones of existing software have a lower total cost of ownership by cloning the functionality of <code class="language-plaintext highlighter-rouge">rtCamp/action-slack-notify</code> and adding features to it.</p>

<p><em>Total time spent: ~2 weeks</em><sup id="fnref:models"><a href="#fn:models" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>

<p>I started with the prompt “I want to build a GitHub action that can be used to send messages to Slack replicating the functionality of <code class="language-plaintext highlighter-rouge">rtCamp/action-slack-notify</code>.” I then spent approximately 20-30 hours iterating and “copying” the <em>already built solution</em> with small value-added customizations.</p>

<p>The plan:</p>

<ol>
  <li>Replace multiple in-use patterns with one tool.</li>
  <li>Tackle a small-ish utility. Larger than, say, <code class="language-plaintext highlighter-rouge">left-pad</code>, but smaller than a ticket system.</li>
  <li>Add some small enhancements.
    <ul>
      <li>Default to a Slack app instead of the deprecated legacy webhooks.</li>
      <li><code class="language-plaintext highlighter-rouge">dry-run</code> and <code class="language-plaintext highlighter-rouge">debug</code> modes that would be specific to our Slack instance.</li>
      <li>Custom template for our release notices.</li>
      <li>New feature: a single-step definition with dual channels and success/failure messages.</li>
    </ul>
  </li>
</ol>
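<p>The dual-channel feature is simple to describe: one step declares both destinations, and the job’s outcome picks the channel and message at runtime. Here is a minimal sketch of that routing logic in Python—the names and config fields are illustrative, not the clone’s actual API:</p>

```python
# Sketch of single-step, dual-channel routing: the job's outcome selects
# which Slack channel and message to use. Names are illustrative only.

def route_notification(job_status: str, config: dict) -> dict:
    """Return the Slack payload target for a finished job."""
    ok = job_status == "success"
    return {
        "channel": config["success_channel"] if ok else config["failure_channel"],
        "text": config["success_message"] if ok else config["failure_message"],
    }

config = {
    "success_channel": "#releases",
    "failure_channel": "#alerts",
    "success_message": "Deploy succeeded",
    "failure_message": "Deploy failed",
}

print(route_notification("success", config))
# {'channel': '#releases', 'text': 'Deploy succeeded'}
```

<p>In the real action, a payload like this would be posted to Slack; the <code class="language-plaintext highlighter-rouge">dry-run</code> mode amounts to printing it instead of sending it.</p>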

<p>Not only did AI take almost a week to successfully replicate the functionality, but in typical AI fashion, it created huge mega-files that were totally indecipherable to humans. Once I got the basic functionality in place and added the new features, I then had to test it and iron out all the bugs.</p>

<p>Now I have a “product” that is completely divorced from any OSS improvements or visibility and which transfers the burden of ownership internally. The question was never whether AI could generate it—it was whether I could afford to own it. Cloning was faster than before AI, but it came nowhere close to just forking and configuring.</p>

<h2 id="the-costs-of-just-clone-it-with-ai">The Costs of “Just Clone It with AI”</h2>

<blockquote>
  <p>The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. –Joel Spolsky</p>
</blockquote>

<p>The irony in this specific example is that AI <em>still</em> recommends <code class="language-plaintext highlighter-rouge">rtCamp/action-slack-notify</code> when discussing implementing Slack notifications via GitHub Actions. So, to use AI to create an internal solution is to fully embrace the worst form of “not invented here” syndrome.</p>

<h3 id="the-opportunity-cost">The Opportunity Cost</h3>

<p>The biggest downside of this entire experiment was the nagging feeling of “What else could I be building?” Frankly, cloning means always being one step behind the competition. The time spent building and maintaining a clone is time not spent on differentiating features or expanding distribution. Plus, with a clone, I now have to convince teams to slow down their actual work to adopt this for their respective domain areas.</p>

<h3 id="the-review-and-maintenance-tax">The Review and Maintenance Tax</h3>

<p>The resulting code is not trivial. It is a few thousand lines of code, with unit tests, split across 4-5 core files. This is a repo that had to be set up, configured, and evangelized. While the concept is sound and people could appreciate the additional flexibility and tight integration with our Slack instance, it was again time spent reviewing code that could’ve been better spent elsewhere.</p>

<p>Even if, in the future, the time to generate clones of libraries approaches zero, you’re transferring the maintenance and ownership internally without benefiting from the classic advantages of an OSS ecosystem. Case in point: I missed one edge case on text parsing. Guess who got asked to fix it. That’s right: me. With OSS or vendor solutions, you either accept the software’s limitations or open a ticket upstream.</p>

<p>We’ve seen this before: the sprawl of Access databases across the enterprise. Teams couldn’t get allocated engineering hours, so some clever analyst built a mission-critical process on an Access database sitting on a shared drive. AI products are the latest version of this pattern.</p>

<p>AI-generated clones are genuinely useful for validating a hypothesis. The mistake is letting the prototype become the product. The moment you’re the one pulling the levers every week, you’ve become that Access database—everyone knows it’s “not great,” but nobody wants to touch it.</p>

<h3 id="security-theater">Security Theater</h3>

<p>One of the objectives of this experiment was to stop “passing” our Slack bot token to a third-party solution. I do think there are serious discussions to be had about a <a href="https://www.jsrowe.com/linkedin-article-whats-in-your-software/">software bill of materials</a>, but any compromise of <code class="language-plaintext highlighter-rouge">rtCamp/action-slack-notify</code> is already mitigated by our use of pinned versions and the eyeballs of everyone else who uses the product. Also, our configuration of the Slack app/webhooks is another layer of defense—that is, rewriting the connection utility is not the weakest link of this software. The security argument for bringing software internal is real, but often overstated.</p>
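<p>For context, pinning means referencing the action by a full, immutable commit SHA rather than a movable tag, so a compromised release can’t silently change what runs in CI. A sketch of the pattern—the SHA below is a placeholder, and the env var follows the upstream action’s documented convention as I recall it:</p>

```yaml
# Pin third-party actions to a full commit SHA, not a movable tag.
# The SHA is a placeholder; resolve the real one for the release you audited.
- name: Slack Notification
  uses: rtCamp/action-slack-notify@0000000000000000000000000000000000000000
  env:
    SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
```

<p>A tag like <code class="language-plaintext highlighter-rouge">@v2</code> can be force-moved after a compromise; a SHA cannot, so the eyeballs reviewing that exact commit still count.</p>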

<h2 id="when-cloning-might-make-sense">When Cloning Might Make Sense</h2>

<p>To be clear, the problem is using AI to build the wrong thing. While doing this work, I did think of a couple of use cases where cloning with AI might make sense:</p>

<ol>
  <li>Language ports where the utility doesn’t exist in the target language—porting a library from Scala to Kotlin, for example.</li>
  <li>Throwaway proofs of concept and prototype work, where the value is in validating ideas, not generating code.</li>
</ol>

<p>Generating code is the easy part. What comes with it is the ownership.</p>

<h2 id="differentiation-is-still-the-game">Differentiation Is Still the Game</h2>

<p>Nobody opens a burger joint to out-McDonald’s the Big Mac. They open it because they have something different to say about burgers. Execution and implementation hours are still scarce resources. Every sprint spent maintaining an untested, unproven AI clone is a sprint not spent on what grows your business.</p>

<p>Chances are cloning software doesn’t address your core KPIs/OKRs. Growing companies don’t ask, “What can we copy?” They ask, “What do we need to build to grow our top line?” Use AI to accelerate the projects that move your business forward, not to catch up to what your competitors buy off the shelf.</p>

<p>Nobody raises a series B on a cost-cutting strategy. AI doesn’t change the build vs. buy calculus; it just makes it easier to fool yourself into thinking the constraint was writing code. It wasn’t. It was always about picking the right thing to build and having the discipline to say no to everything else.</p>

<hr />

<p><strong>Significant Revisions</strong></p>

<ul>
  <li>Mar 1st, 2026 Originally published on <a href="https://jsr6720.github.io">https://jsr6720.github.io</a> with uid BB50DCC1-F53F-4CB4-AC1F-1C5CBFD2F2E8</li>
  <li>Dec 22nd, 2025 Initial draft created.</li>
</ul>

<p><strong>Footnotes</strong></p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:models">
      <p>This clone was built using Cursor, OpenAI GPT5.1, and Claude Sonnet 4.5. <a href="#fnref:models" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>James Rowe</name></author><category term="engineering" /><category term="ai," /><category term="programming" /><summary type="html"><![CDATA[Twenty-five years ago, Joel Spolsky wrote that the single worst strategic mistake any software company can make is rewriting the product from scratch. Now, with AI, the prevailing opinion is that companies will clone SaaS products internally, displacing the market.]]></summary></entry><entry><title type="html">Why AI Demands New Engineering Ratios</title><link href="https://jsr6720.github.io/ai-team-ratios/" rel="alternate" type="text/html" title="Why AI Demands New Engineering Ratios" /><published>2026-02-04T02:46:48+00:00</published><updated>2026-02-04T02:46:48+00:00</updated><id>https://jsr6720.github.io/ai-team-ratios</id><content type="html" xml:base="https://jsr6720.github.io/ai-team-ratios/"><![CDATA[<p>The two-pizza team isn’t dead—it just needs different toppings. AI breaks previous staffing assumptions by collapsing the time it takes to write code. Engineers equipped with AI can transition coding tasks from <em>ready for dev</em> to <em>ready for test</em> in hours, not days. So how best to leverage this new capacity?</p>

<p>In chemistry, when you increase one reagent without rebalancing others, you don’t get more product: You get waste.</p>

<p>Without new team ratios to capture the efficiencies in writing code with AI, teams waste capacity on low-complexity work—solutions that AI is quite efficient with—when the real opportunity is in rebalancing toward high-impact projects instead of chasing the long tail of backlog items.</p>

<h2 id="the-discipline-problem-parkinsons-law-meets-the-8020-rule">The Discipline Problem: Parkinson’s Law Meets the 80/20 Rule</h2>

<p>Engineering organizations have evolved each time the <em>way</em> software is built has changed. It’s happening again with AI. Here’s the lesson: When you can build faster, product discipline and business domain expertise become the critical differentiators of organizational execution.</p>

<p>Parkinson’s Law dictates that work expands to fill the time available. The 80/20 rule teaches us that 80 percent of results come from 20 percent of inputs. The added capacity from faster code generation must be intentionally reallocated to the 20 percent that delivers results for the organization.</p>

<p>The limiting reagent problem isn’t just about chemistry—it’s a strategic choice. Will you rebalance your teams to reflect the new tools of building software, or will inertia redirect freed capacity toward work that’s on the backlog but not good enough to have been staffed?</p>

<h2 id="the-waste-coding-faster-delivering-less">The Waste: Coding Faster, Delivering Less</h2>

<p><img src="/assets/posts-images/limiting-constraint.png" alt="Capacity constraint" style="border-radius: 4px; display: block; margin: 0 auto;" /></p>

<p>Engineers are coding faster with AI, but team capacity constraints are shifting to the other parts of the SDLC: discovery, design, testing, and release activities.</p>

<p>Who cares if AI can autonomously clear all your to-do items? They were sitting on your backlog and weren’t important enough to prioritize before AI. The organizations that win with AI won’t be the ones writing code the fastest—they’ll be the ones finding the right code to write.</p>

<h2 id="eras-of-engineering-org-charts">Eras of Engineering Org Charts</h2>

<p>Understanding why AI demands new team ratios requires understanding how engineering org structures have always been a function of how code gets written. Each approach below was optimized for its era’s respective limiting reagent, from teams of technical experts in the Waterfall era, to cross-functional teams in the Agile era, and now to business domain expertise in the AI era.</p>

<h3 id="waterfall-era-pre-agile-manifesto-1990s-2010s">Waterfall Era: Pre-Agile Manifesto (1990s-2010s)</h3>

<p>Staffing in this era was optimized for technical experts in syntax and technologies. Systems were often monoliths a single engineer could fully comprehend. Long, detailed requirements docs were thrown over the wall to teams of programmers, who then threw solutions over the wall to testing in waterfall fashion. Teams of this era were organized as clusters of titles (developers, testers, analysts, et cetera).</p>

<h3 id="agile-era-the-cross-functional-team-2010s-2020s">Agile Era: The Cross-Functional Team (2010s-2020s)</h3>

<p>Rigid years-long project planning gave way to iterative sprints, siloed specialists gave way to cross-functional teams of “generalizing specialists,”<sup id="fnref:generalizing"><a href="#fn:generalizing" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> and a magic “two-pizza team” ratio emerged: 5-7 engineers, 1-2 QEs, 1 PM.</p>

<p><em>Ready for dev</em> to <em>ready for test</em> could take 5-8 business days of a two-week sprint. Much of that time was consumed by the mechanical work of implementing solutions in code—a process that AI is dramatically accelerating.</p>

<p>Was writing code <em>the</em> bottleneck? Not for every team, but it was a bottleneck significant enough to justify staffing 5-7 engineers per product manager. If writing code wasn’t a bottleneck, why was it staffed so heavily here?</p>

<h3 id="ai-era-the-post-agile-rebalancing-2026">AI Era: The Post-Agile Rebalancing (2026+)</h3>

<p>AI is increasing the availability of “code” the way a catalyst changes reaction speed, shifting the limiting reagent of software teams to defining and shipping the <em>right</em> work. Software teams need systems experts who understand the business objectives as deeply as they do products and operations.</p>

<p>Ideal product-engineering-testing ratios are actively being reconfigured in our industry. What’s clear to me is that the Agile-era assumptions about team composition no longer hold when writing code collapses from days to hours.</p>

<hr />

<p><strong>Significant Revisions</strong></p>

<ul>
  <li>Feb 4th, 2026 Originally published on <a href="https://jsr6720.github.io">https://jsr6720.github.io</a> with uid 2E131853-3267-481B-A377-FA08B9C9FA08</li>
  <li>Jan 7th, 2026 Initial draft created.</li>
</ul>

<p><strong>Footnotes</strong></p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:generalizing">
      <p>In practice, deep expertise in one area consistently outperforms rapid context switching between domains and tech stacks. As <em>validation</em> of AI outputs becomes critical, so too does deep domain knowledge. <a href="#fnref:generalizing" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>James Rowe</name></author><category term="engineering" /><category term="ai," /><category term="programming" /><summary type="html"><![CDATA[The two-pizza team isn’t dead—it just needs different toppings. AI breaks previous staffing assumptions by collapsing the time it takes to write code. Engineers equipped with AI can transition coding tasks from ready for dev to ready for test in hours, not days. So how best to leverage this new capacity?]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jsr6720.github.io/assets/posts-images/limiting-constraint.png" /><media:content medium="image" url="https://jsr6720.github.io/assets/posts-images/limiting-constraint.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">2025 AI Wrapped: A Year in Review</title><link href="https://jsr6720.github.io/ai-wrapped-series/" rel="alternate" type="text/html" title="2025 AI Wrapped: A Year in Review" /><published>2026-01-21T12:00:00+00:00</published><updated>2026-01-21T12:00:00+00:00</updated><id>https://jsr6720.github.io/ai-wrapped-series</id><content type="html" xml:base="https://jsr6720.github.io/ai-wrapped-series/"><![CDATA[<p>Frontier conversational large language models (LLMs) are the closest implementation of the fictional Star Trek <a href="https://en.wikipedia.org/wiki/LCARS">“Computer”</a> I’ve yet to experience. I compare my journey with AI to the evolution of CPUs to GPUs.</p>

<p>Before AI, like CPUs, my work was linear and sequential. With AI—like GPUs—I can build and research concurrently while iterating on one idea, processing and triangulating multiple sub-tasks across multiple vectors with very little context-switching cost.</p>

<p><strong>AI can make you faster, but it doesn’t guarantee you’ll be more effective.</strong></p>

<p>But AI does not add hours to the day, eliminate opportunity costs, or change how humans work together. Business principles set out by Drucker, Grove, and <a href="/bookshelf">other authors</a> focus on finding wedges that grow future results—not “how much code you write.”</p>

<h2 id="the-series">The Series</h2>

<p>This series is a field report from the front lines: what it looks like to go all-in on AI as an engineering leader, what I’ve shipped, what didn’t work, and what matters for the future.</p>

<ol>
  <li><a href="/ai-wrapped-evolution-of-using-ai-every-day/">Evolution of Using AI Every Day</a></li>
  <li><a href="/ai-wrapped-my-setup-for-programming/">My Setup for Programming</a></li>
  <li><a href="/ai-wrapped-what-ive-shipped-with-ai/">What I’ve Shipped with AI</a></li>
  <li><a href="/ai-wrapped-what-hasnt-worked/">What Hasn’t Worked</a></li>
  <li><a href="/ai-wrapped-thinking-with-ai/">Thinking with AI</a></li>
</ol>

<p>Bonus Content: <a href="/ai-predictions-for-2026/">AI Predictions for 2026</a> and <a href="/does-anyone-know-a-good-software-engineer/">Does Anyone Know a Good Software Engineer?</a></p>

<hr />

<p><strong>Significant Revisions</strong></p>

<ul>
  <li>Jan 21st, 2026 Originally published on <a href="https://jsr6720.github.io">https://jsr6720.github.io</a> with uid A81EA5B4-578F-4A0A-A79D-4773DEBC0A51</li>
  <li>Dec 16th, 2025 Initial rough draft created.</li>
</ul>]]></content><author><name>James Rowe</name></author><category term="engineering" /><category term="ai," /><category term="programming" /><summary type="html"><![CDATA[Frontier conversational large language models (LLMs) are the closest implementation of the fictional Star Trek “Computer” I’ve yet to experience. I compare my journey with AI to the evolution of CPUs to GPUs.]]></summary></entry><entry><title type="html">2025 AI Wrapped: Thinking with AI</title><link href="https://jsr6720.github.io/ai-wrapped-thinking-with-ai/" rel="alternate" type="text/html" title="2025 AI Wrapped: Thinking with AI" /><published>2026-01-21T11:05:00+00:00</published><updated>2026-01-21T11:05:00+00:00</updated><id>https://jsr6720.github.io/ai-wrapped-thinking-with-ai</id><content type="html" xml:base="https://jsr6720.github.io/ai-wrapped-thinking-with-ai/"><![CDATA[<p><em>This is Part 5 of 5 of my <a href="/ai-wrapped-series/">2025 AI Wrapped</a> series. This post covers key lessons for myself and how I think with AI.</em></p>

<p>AI isn’t just a code-writer. It’s an intern analyst—one efficient at finding connections across massive datasets, surfacing relevant research, and compiling baseline information. But just like any intern, it requires verification. When AI suggests something, that’s the start of an investigation, not the conclusion.</p>

<p>“But using AI makes you dumber,” they say. Counterpoint: Widespread adoption of GPS means a whole generation of people can get to their destination faster without paper maps. Thinking <em>with</em> AI has surfaced real arXiv research papers and McKinsey publications that were invisible to me with <a href="link to Information Discovery with AI">traditional search</a> and exposed me to entirely new concepts.</p>

<p>AI doesn’t make me smarter; it makes <a href="https://en.wikipedia.org/wiki/Socratic_method">conversational exploration</a> of topics and ideas possible. This conversational approach allows me to mind-map my existing knowledge onto new topics. So, no, I don’t feel that AI has changed my approach to curiosity or learning. What it has helped me do is connect my curiosity to primary sources.<sup id="fnref:people"><a href="#fn:people" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>

<p>Whereas in the past I would use Wikipedia or traverse an ever-worsening search results page, now I can converse with AI and instantly get a reasonable baseline of information. Understanding <a href="https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/">how generative LLMs work</a> is critical to understanding that AI doesn’t <em>know</em> anything. It generates the statistically most likely response based on the source material in its training data; even so, I’ve found this generated baseline information reliable and more often correct than not this past year.</p>

<p>I acknowledge that every AI company has plundered humanity’s copyrighted knowledge, and that’s worth reviewing. This series is an observation that generative LLMs are here and act as prisms through which we view our world. They <a href="https://www.nature.com/articles/s41598-021-89743-x">reveal patterns</a> that weren’t previously observed, <a href="https://www.technologyreview.com/2025/12/04/1128763/ai-geothermal-zanskar/">make connections</a> not previously made.</p>

<p>AI doesn’t replace your thinking; it provides another lens through which to view the world. And just like any lens, sometimes you gain more clarity by inspecting what’s in front of you without it.</p>

<p>When I look back on 2025, I see that AI made programming viable for engineering managers again. But more than that, AI has evolved past the limitations that made skepticism reasonable—context windows have expanded, responses have become reliable, sources have become verifiable.</p>

<p>If you’re still citing studies that AI doesn’t boost productivity,<sup id="fnref:gdp"><a href="#fn:gdp" class="footnote" rel="footnote" role="doc-noteref">2</a></sup> consider this: computers replaced NASA’s human calculators, CAD replaced human drafters, Excel replaced human ledger scribes. But scientists, architects, and accountants are still highly trained, skilled professionals. The tools didn’t eliminate judgment; they shifted where professionals spend their cognitive energy.</p>

<p>AI doesn’t change what engineering is—it changes where engineers spend their time.</p>

<p>We’re still in the early days of this transformation. The knowledge workers who integrate AI into their work will compound their effectiveness the same way previous generations did by mastering new technologies. AI is the next tool in that progression.</p>

<hr />

<p><strong>Significant Revisions</strong></p>

<ul>
  <li>Jan 21st, 2026 Originally published on <a href="https://jsr6720.github.io">https://jsr6720.github.io</a> with uid 61BBBCC3-9C95-4378-A540-86C18FE9BC78</li>
  <li>Dec 16th, 2025 Initial rough draft created.</li>
</ul>

<p><strong>Footnotes</strong></p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:people">
      <p>You don’t need domain experts to explain basic concepts. Starting with AI means that conversations with people have more depth when they do start. And besides, not everyone is available all the time. <a href="#fnref:people" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:gdp">
      <p>GDP and productivity gains from previous tool transitions didn’t show up until companies fundamentally changed how they accomplished work. <a href="#fnref:gdp" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>James Rowe</name></author><category term="engineering" /><category term="ai," /><category term="programming" /><summary type="html"><![CDATA[This is Part 5 of 5 of my 2025 AI Wrapped series. This post covers key lessons for myself and how I think with AI.]]></summary></entry><entry><title type="html">2025 AI Wrapped: What Hasn’t Worked</title><link href="https://jsr6720.github.io/ai-wrapped-what-hasnt-worked/" rel="alternate" type="text/html" title="2025 AI Wrapped: What Hasn’t Worked" /><published>2026-01-21T11:04:00+00:00</published><updated>2026-01-21T11:04:00+00:00</updated><id>https://jsr6720.github.io/ai-wrapped-what-hasnt-worked</id><content type="html" xml:base="https://jsr6720.github.io/ai-wrapped-what-hasnt-worked/"><![CDATA[<p><em>This is Part 4 of 5 of my <a href="/ai-wrapped-series/">2025 AI Wrapped</a> series. This post covers all the ways that AI fails me daily—and why despite the shortcomings I’m still all-in on AI.</em></p>

<p>For all of the successes I’ve experienced working with AI, there are <em>countless</em> daily failures that reassure me <a href="/does-anyone-know-a-good-software-engineer/">professional software engineers</a> aren’t going anywhere soon. Not a day goes by when working with AI doesn’t nearly send me over the edge. Here is a collection of #fails from the trenches.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/syndrome-dense-mf.png" alt="You Dense Motherfucker -Syndrome" style="border-radius: 4px; display: block; margin: 0 auto;" /></p>

<h2 id="cursor-deletes-my-home-directory">Cursor Deletes My Home Directory</h2>

<p>First, I am <a href="https://www.reddit.com/r/ClaudeAI/comments/1pgxckk/claude_cli_deleted_my_entire_home_directory_wiped/">a victim of force directory deletes</a>. While working in a project, I instructed Cursor to “clean up the files in X directory.”</p>

<p>Cursor decided that “cleaning up” meant running <code class="language-plaintext highlighter-rouge">rm -rf ~/</code>. I watched in horror as my directories started disappearing one by one in Finder. Only by force rebooting my machine did I terminate this process and save my home directory.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/cursor-rm-command.png" alt="Cursor runs rm -rf ~/" style="border-radius: 4px;" /></p>

<p>But hey, at least it apologized…</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/cursor-rm-command-apology.png" alt="Cursor apologies for deleting my home directory" style="border-radius: 4px;" /></p>

<p>I cannot stress enough: I had <code class="language-plaintext highlighter-rouge">rm</code> in my prohibited commands. Cursor ran it anyway. Now I am much more specific when I ask AI to “clean up files,” and I perform file deletes myself.<sup id="fnref:date"><a href="#fn:date" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>
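<p>The habit I took away generalizes beyond AI: any cleanup routine—human- or agent-written—should be scoped to an explicit directory, print its targets, and delete nothing without an explicit opt-in. A minimal sketch of that guardrail pattern in Python (the glob pattern and flag names are illustrative):</p>

```python
# Dry-run-by-default cleanup: list what would be deleted, and only
# delete when explicitly asked. Pattern and flag names are illustrative.
from pathlib import Path

def clean_up(directory: str, pattern: str = "*.tmp", apply: bool = False) -> list[str]:
    """Return matched files; delete them only when apply=True."""
    root = Path(directory).resolve()
    # glob never escapes root, so nothing outside the directory is touched
    matches = sorted(p for p in root.glob(pattern) if p.is_file())
    for p in matches:
        print("deleting" if apply else "would delete", p)
        if apply:
            p.unlink()
    return [str(p) for p in matches]
```

<p>Calling <code class="language-plaintext highlighter-rouge">clean_up("build")</code> only prints candidates; files disappear only on an explicit <code class="language-plaintext highlighter-rouge">apply=True</code>. That inversion—destructive actions opt-in, never default—is exactly what the agent above ignored.</p>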

<h3 id="fighting-for-cursor-control-in-ide">Fighting for Cursor Control in IDE</h3>

<p>When I’m using hotkeys—especially tabbing in my files—with too many agents enabled, pressing tab in Cursor can spool out a crap-ton of irrelevant code very quickly.</p>

<p>I call this “fighting for control” because it can feel like I’ve lost control of my IDE—especially if there is a CLI spooling in the background making changes to the file.</p>

<p>Now, before manually editing files, I make sure all agents are done running.</p>

<h2 id="ai-still-writes-broken-code">AI Still Writes Broken Code</h2>

<p>AI still generates broken code at least 5 percent of the time. This isn’t just missing curly brackets; it could be import statements, module definitions, or runtime errors. At the start of 2025, I felt there was a fifty-fifty chance of AI generating “working” code. Now, I am about 95 percent confident I’ll get “working” code—but here are some examples of how AI still generates bad code daily.<sup id="fnref:build"><a href="#fn:build" class="footnote" rel="footnote" role="doc-noteref">2</a></sup></p>

<h3 id="package-and-version-hallucinations">Package and Version Hallucinations</h3>

<p>AI still hallucinates software packages and functions that just don’t exist. Sometimes it invents entirely fictional package names; other times it references real packages but hallucinates functions or version numbers—especially if a point version of a package is unpublished or a major update was released after the model’s training-data cutoff.</p>

<h3 id="syntax-errors-and-duplicate-functions">Syntax Errors and Duplicate Functions</h3>

<p>The most common syntax error was the <a href="/assets/posts-images/2025-ai-wrapped-series/claude-code-misses-curly-bracket.png">missing curly bracket</a>. But just like humans, this is often a symptom of sloppy function writing and almost never just a mismatched <code class="language-plaintext highlighter-rouge">{}</code>.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/syntax-error.png" alt="Syntax Errors" style="border-radius: 4px;" /></p>

<p>AI seems to have no qualms adding <a href="/assets/posts-images/2025-ai-wrapped-series/claude-code-duplicates-function.png">duplicate named functions</a>. You can see in this example that a second <code class="language-plaintext highlighter-rouge">layoutDifficultyLable()</code> function has been added, resulting in a compiler error.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/codex-duplicates-functions.png" alt="Duplicate functions" style="border-radius: 4px;" /></p>

<h3 id="file-size-and-directory-mismanagement">File Size and Directory Mismanagement</h3>

<p>First, AI has no problem working with and writing out files of 1,000 lines or more—files that are incomprehensible to a human. Second, even if directed to work in a specific directory, AI will sometimes create similar directories or a “second” file of the same content. I tell myself this is from years of training data including filenames like <code class="language-plaintext highlighter-rouge">list.v2.bak.final.pdf</code>.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/ai-creates-second-file.png" alt="AI creates a second file" style="border-radius: 4px;" /></p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/ai-creates-second-file-2.png" alt="AI creates a second file, another example" style="border-radius: 4px;" /></p>

<p>I wish I had saved a screenshot of the directory issues, but the one I remember is a “Utilities” folder being created twice, one with an incorrect spelling.</p>

<h2 id="the-myth-of-the-perfect-spec">The Myth of the Perfect Spec</h2>

<p>Software development is still iterative. One-shot solutions fly in the face of everything I understand about building software. I am not sold on “spec-driven” solutions, especially ones that rely on “perfect” requirements, as people are famously bad at articulating <a href="https://i.kym-cdn.com/photos/images/original/000/475/749/fd8.jpg">what they want</a>.</p>

<p>I experimented briefly with blitzy.com, Lovable, Figma, and Claude Artifacts,<sup id="fnref:artifacts"><a href="#fn:artifacts" class="footnote" rel="footnote" role="doc-noteref">3</a></sup> and I concede they can be used for communicating vision through prototypes, but I was unable to deliver any solutions with them. All of my <a href="/ai-wrapped-what-ive-shipped-with-ai/">shipped solutions</a> were built with Cursor, Claude Code, and Codex.</p>

<h3 id="cloud-agents-kill-momentum">Cloud Agents Kill Momentum</h3>

<p>Cloud agents promise to offload work, but they do poorly what you can already do locally—<a href="/ai-wrapped-my-setup-for-programming/">run concurrent AI sessions</a>. Dispatched cloud-based “agents” suffer from the same limitation of trying to one-shot a solution. They provide no visibility into their execution, I can’t interrupt them if they’re off task, and, worse, no matter what they generate, I <em>still</em> have to pull the code down locally and verify functionality.</p>

<p>Here are some of the ones I tried:</p>
<ul>
  <li><a href="https://jules.google">Google Jules</a></li>
  <li>MSFT <a href="https://github.blog/changelog/2025-09-30-start-your-new-repository-with-copilot-coding-agent/">GitHub Copilot</a> new repository setup</li>
  <li><a href="https://cursor.com/blog/cloud-agents">Cursor Agents</a></li>
  <li><a href="https://sentry.io/product/seer/">Sentry Seer</a></li>
</ul>

<p>These AI agents require you to identify which repo to make the change in, which doesn’t scale for complex multi-repo projects. The GitHub “start your new repository with a Copilot prompt” is particularly egregious as it takes at least 10 or 15 minutes to complete and has never produced anything of value to me. Every time I’ve created a repo, I already had a working local solution I wanted to import.</p>

<h3 id="simple-prompts-dont-solve-complex-problems">Simple Prompts Don’t Solve Complex Problems</h3>

<p>In complex systems, there are a multitude of ways that problems can manifest and a myriad of solutions to address them. Vague prompting results in generic solutions, such as adding error handling without regard to the broader system context.</p>

<p>The lesson: AI is only as good as the context you give it. If you want sophisticated solutions, you need a solid understanding of system architecture and software engineering best practices.</p>

<h2 id="why-im-still-all-in-on-ai">Why I’m Still All-In on AI</h2>

<p>Despite the deleted directories, hallucinated packages, and daily syntax/runtime errors, I’m still all-in on AI.</p>

<p>Why? Because the math works.</p>

<p>Yes, AI fails every day. Yes, I spend time fixing syntax and runtime errors, validating generated code, cleaning up bloated functions. Yes, every tool promising “autonomous solutions” has disappointed me.</p>

<p>But programming is <em>fun</em> again. I can scale up and ship meaningful changes quickly. I’m more productive with AI than without. AI lets me make leveraged bets with my time that otherwise wouldn’t make sense. Projects that were too tedious to start are now worth pursuing.</p>

<p>The key lesson I’ve learned throughout 2025 is that AI is like a <a href="/does-anyone-know-a-good-software-engineer/">power tool</a>. Wielded responsibly, it’s a force multiplier of epic magnitude—for <em>writing</em> code. Just like power tools let carpenters focus on building houses instead of hand-ripping boards, AI lets engineers focus on building software instead of writing boilerplate.</p>

<p>The failures documented here aren’t bugs—they’re the cost of working with power tools.</p>

<hr />

<p><strong>Significant Revisions</strong></p>

<ul>
  <li>Jan 21st, 2026 Originally published on <a href="https://jsr6720.github.io">https://jsr6720.github.io</a> with uid 6313209A-66CB-4710-A14B-96C3FEEA6D62</li>
  <li>Dec 16th, 2025 Initial rough draft created.</li>
</ul>

<p><strong>Footnotes</strong></p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:date">
      <p>Cursor deleting my home directory occurred in October 2025. <a href="#fnref:date" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:build">
      <p>I tend to run build commands myself because feeding build outputs back to AI consumes my limited tokens and context windows. But, when I hit a build/runtime error, I’ll copy/paste it directly into the prompt with no other instructions, and AI tends to fix it within a minute or so. <a href="#fnref:build" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:artifacts">
      <p>With regard to <a href="https://www.claude.com/blog/artifacts">Claude Artifacts</a>, I spent so much time waiting for Claude to regenerate the entire solution that it became unusable for any kind of iteration. <a href="#fnref:artifacts" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>James Rowe</name></author><category term="engineering" /><category term="ai," /><category term="programming" /><summary type="html"><![CDATA[This is Part 4 of 5 of my 2025 AI Wrapped series. This post covers all the ways that AI fails me daily—and why despite the shortcomings I’m still all-in on AI.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jsr6720.github.io/assets/posts-images/2025-ai-wrapped-series/syndrome-dense-mf.png" /><media:content medium="image" url="https://jsr6720.github.io/assets/posts-images/2025-ai-wrapped-series/syndrome-dense-mf.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">2025 AI Wrapped: What I’ve Shipped with AI</title><link href="https://jsr6720.github.io/ai-wrapped-what-ive-shipped-with-ai/" rel="alternate" type="text/html" title="2025 AI Wrapped: What I’ve Shipped with AI" /><published>2026-01-21T11:03:00+00:00</published><updated>2026-01-21T11:03:00+00:00</updated><id>https://jsr6720.github.io/ai-wrapped-what-ive-shipped-with-ai</id><content type="html" xml:base="https://jsr6720.github.io/ai-wrapped-what-ive-shipped-with-ai/"><![CDATA[<p><em>This is Part 3 of 5 of my <a href="/ai-wrapped-series/">2025 AI Wrapped</a> series. This post covers what I’ve shipped with 100% AI-generated code. When I first started reflecting on building with AI, this was the first draft. Writing it became the genesis for all the other posts in this series.</em></p>

<p>I shipped 20+ features and tools in 2025; before AI, this would have been 1-3 max. Not only did AI generate 100% of the code, it helped me distill documentation and ideas into action, and fit iterations into 10-20-minute blocks of time instead of hours.</p>

<p>The examples below include simple changes to existing systems, greenfield tools, product prototypes, and bug fixes. Each example took iteration, testing, validation, and refactoring to ship, mirroring how software gets built today, just with less time spent writing every line of code.</p>

<p>A pattern emerged with each example: AI feels lightning-fast for the first day or two, then slowly degrades into the traditional trappings of building software: edge cases, integration complexity, and acceptance testing. AI compresses development time, not delivery time.</p>

<h2 id="agent-slash-commands-and-reusable-prompts">Agent Slash Commands and Reusable Prompts</h2>

<p>I was first exposed to this idea by the agents themselves. On startup Codex/Claude Code will suggest common slash commands available within the current working directory.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/claude-code-slash-commands.png" alt="Claude Code slash commands in terminal" style="border-radius: 4px;" /></p>

<p>I’ve found it most effective to use a centralized “prompt library” to share common prompts across all AI tools. This collection of markdown files is in my path at <code class="language-plaintext highlighter-rouge">~/code/prompt-library/&lt;command&gt;.md</code> and is available to any project I work with.</p>
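<p>Wiring a shared library like this into a given tool can be as simple as symlinking each prompt into whatever directory that tool scans for custom slash commands. A sketch, where the helper name and the idea of a per-tool commands directory are illustrative assumptions rather than any vendor’s documented setup:</p>

```python
# Sketch under assumptions: the shared prompt library is real, but the
# per-tool commands directory you link into depends on the tool and is
# illustrative here.
import pathlib

def link_prompt_library(lib: pathlib.Path, commands_dir: pathlib.Path) -> list[str]:
    """Symlink every markdown prompt in the library into a tool's commands dir."""
    commands_dir.mkdir(parents=True, exist_ok=True)
    linked = []
    for prompt in sorted(lib.glob("*.md")):
        target = commands_dir / prompt.name
        if target.is_symlink() or target.exists():
            target.unlink()  # refresh stale links so the library stays canonical
        target.symlink_to(prompt.resolve())
        linked.append(prompt.name)
    return linked
```

<p>Run it once per tool and every agent sees the same prompts; edits to the library propagate automatically through the symlinks.</p>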

<h3 id="command-refactor-refactoring-ai-bloat-and-making-human-readable-files">Command: <code class="language-plaintext highlighter-rouge">/refactor</code> Refactoring AI Bloat and Making Human-Readable Files</h3>

<p>I use this reusable prompt the most because AI frequently creates thousand-line files that are completely indecipherable to the human eye. Or it will duplicate entire functions across files and violate many software best practices like DRY.</p>

<p>Slash command <code class="language-plaintext highlighter-rouge">/refactor</code> fixes these anti-patterns. I have been impressed that AI can refactor large blocks of code without breaking anything, but I wish it didn’t write crap in the first place. Since the cost of refactoring approaches zero (just the tokens/time), I don’t worry so much about AI-generated code until either it’s time to submit it for review or the effectiveness of working with the files goes down.</p>

<p>Here are some examples:</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/claude-code-refactor.png" alt="Claude Code /refactor prompt" style="border-radius: 4px;" /></p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/codex-refactor-targets.png" alt="Codex /refactor prompt" style="border-radius: 4px;" /></p>
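<p>For a sense of scale, these prompt files are short plain markdown. A hypothetical <code class="language-plaintext highlighter-rouge">refactor.md</code> (contents invented for illustration; my actual prompt differs):</p>

```
Review the files changed in this session. Split any file over ~300 lines
into cohesive modules, extract duplicated functions into shared helpers,
and apply DRY. Do not change behavior, and run the existing tests after
each step.
```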

<h3 id="command-document-creates-a-markdown-file-of-the-days-activities">Command: <code class="language-plaintext highlighter-rouge">/document</code> Creates a Markdown File of the Day’s Activities</h3>

<p>To capture my work for the day, I run <code class="language-plaintext highlighter-rouge">/document</code>, which builds a natural-language change log summary, documents any architectural decisions I’ve made, and will include relevant prompts as quotes. I can use this output as reference—or, more frequently, when I come back to the project, I can effectively pick up where I left off.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/slash-command-document.png" alt="/document prompt" style="border-radius: 4px;" /></p>

<h3 id="command-feature-summarizes-discussion-into-a-md">Command: <code class="language-plaintext highlighter-rouge">/feature</code> Summarizes Discussion into a <code class="language-plaintext highlighter-rouge">&lt;feature&gt;.md</code></h3>

<p>One of the most seductive (and, frankly, distracting) things about working with AI is the ability to immediately gratify the impulse to <a href="https://simonwillison.net/2023/Mar/27/ai-enhanced-development/">add scope</a> to a solution. I’ve frequently thought, “One more prompt and I’ll have something even better!”</p>

<p>The <code class="language-plaintext highlighter-rouge">/feature</code> command allows me to bring discipline and focus to my work. It will take the chain of thought from the discussion and persist it to a <code class="language-plaintext highlighter-rouge">&lt;feature&gt;.md</code> file for future reference. In my mind, this is an alias for the favorite expression of software teams: “We’ll add it to the backlog.”</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/slash-command-feature.png" alt="/feature prompt" style="border-radius: 4px;" /></p>

<h3 id="command-onboard-ai-guided-onboarding-and-computer-setup">Command: <code class="language-plaintext highlighter-rouge">/onboard</code> AI-Guided Onboarding and Computer Setup</h3>

<p><code class="language-plaintext highlighter-rouge">/onboard</code> was the first slash command I made that was used by other engineers. The command onboards any workstation with the binaries needed to start feature development. There are no scripts, only a top-level README.md with prompts to install each tool in the <code class="language-plaintext highlighter-rouge">/tools</code> directory. So <code class="language-plaintext highlighter-rouge">git</code>, <code class="language-plaintext highlighter-rouge">aws</code>, <code class="language-plaintext highlighter-rouge">docker</code>, et cetera are in dedicated prompt-based markdown files with guidance on what the success criteria are.</p>

<p>In the past, it might take 1-2 days to get a personal machine set up for work, with a combination of a rigid IT script, stale wiki pages, equipment drift, and at least a few hours of reading docs everywhere. Now, engineers can clone the <code class="language-plaintext highlighter-rouge">onboarding</code> repo and run <code class="language-plaintext highlighter-rouge">/onboard</code> with <em>any</em> AI. This cuts machine setup from 1-2 days to less than an hour. Some engineers have used Claude Code while others used Cursor! The feedback on this has been very positive.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/ai-onboarding-command.png" alt="/onboard command" style="border-radius: 4px;" /></p>

<h2 id="professional-projects">Professional Projects</h2>

<p>With respect to my employer, I’m only highlighting some generic concepts that I’ve completed. I took on these tasks both to evaluate AI effectiveness and to <a href="https://www.amazon.com/gp/product/1662966377/">remove friction</a> from my teams.</p>

<p>The biggest result of using AI is that I can navigate projects in a way that would’ve required engineering-led deep dives in the past. I still talk with engineers to deepen my understanding, but I’m no longer dependent on them to orient myself in the code.</p>

<p>One thing important to me as an engineering manager is leading with credibility by understanding how my team executes. Even with AI, I am never picking up a critical path of work or pretending I can do something better than the engineers on the project. That would be absurd.</p>

<h3 id="migrating-builds-to-new-standardized-build-platform">Migrating Builds to New Standardized Build Platform</h3>

<p>Like a lot of migration projects, we had a long tail of builds that needed to be migrated to GitHub Actions. With AI, I was able to migrate 2-3 smaller projects; this not only built momentum, it exposed me to new domain areas and was an opportunity to pitch in without slowing down the team.</p>

<p>In the past, I wouldn’t have picked up this work because I would be a brain drain on engineering. But with AI, I was able to get the projects set up and migrated, learning along the way by using AI to understand the code base. My work still required engineering review and testing before merging.</p>
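<p>For context, each migration mostly meant expressing an existing build as a workflow file. A hypothetical minimal example (the job name and build script are invented; the real builds were more involved):</p>

```yaml
# Hypothetical workflow; names and the build entry point are assumptions.
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/build.sh   # assumed project build script
```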

<h3 id="bringing-internal-a-github-action">Bringing Internal a GitHub Action</h3>

<p>I had read in a few Reddit posts that “AI can clone anything.” So I decided to test this hypothesis by “cloning” <a href="https://github.com/rtCamp/action-slack-notify">rtCamp/action-slack-notify</a> internally. We were using a variety of mechanisms to connect GitHub Actions to our Slack instance, and I saw an opportunity to test AI’s ability to clone a library.</p>

<p>The takeaway here is that it’s just not worth it, nor was “cloning” easy. While I was able to engineer a solution that has moved us away from webhooks and toward using Slack Apps, there was a lot more to this work than “Hey, clone this project.” I can attest that engineers should still reach for things “off the shelf” rather than rebuild them, even if AI gets to the point of “instant” clones. I have a deeper dive planned for a future post: “The AI Clone Fallacy”.</p>

<h3 id="temporary-project-status-dashboard">Temporary Project Status Dashboard</h3>

<p>I think one of the best uses of AI-generated code is creating throwaway solutions. For one critical project, we had dozens of engineers and PRs rapidly converging toward one due date.</p>

<p>Rather than try and track all of this through Jira and GitHub, I spun up a disposable dashboard that tracked open issues and surfaced the ones that needed review to make sure none were missed prior to launch.</p>

<h3 id="feature-flag-override-detection">Feature Flag Override Detection</h3>

<p>I built a simple scheduled GitHub action that scans specific configurations and looks for ones that are past-dated, indicating they can be removed. Before AI, this type of work would never take priority over feature development. I knocked this script out between meetings during a regular workweek.</p>
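<p>The core of such a scan is small. A sketch in Python, where the flag-file format and the <code class="language-plaintext highlighter-rouge">expires</code> field name are invented for illustration (the real action scans an internal configuration format):</p>

```python
# Hypothetical sketch: the JSON flag-file format and "expires" field
# are assumptions, not the actual internal configuration.
import json
import pathlib
from datetime import date

def expired_flags(config_dir: str, today: date) -> list[str]:
    """Return names of feature-flag overrides whose expiry date has passed."""
    stale = []
    for path in sorted(pathlib.Path(config_dir).glob("*.json")):
        flags = json.loads(path.read_text())
        for name, flag in flags.items():
            if date.fromisoformat(flag["expires"]) < today:
                stale.append(name)
    return stale
```

<p>A scheduled workflow can run this and open an issue listing anything stale.</p>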

<h3 id="inspecting-systems-and-debugging-stack-traces">Inspecting Systems and Debugging Stack Traces</h3>

<p>I use AI nearly every day to <a href="https://www.thoughtworks.com/radar/techniques/using-genai-to-understand-legacy-codebases">navigate code bases</a> and understand how they fit into the larger picture. Sometimes it’s as simple as “what does this project do” or “where do these queues come from.”</p>

<p>For debugging, same approach: One particularly memorable example was combining a thousand-line XML file, a thirty-page vendor doc, the code repository, and a bug alert. AI quickly identified where the problem was occurring, down to the specific XML data node, but was completely <em>wrong</em> about how to fix it. While AI made the right connections, I still needed engineers to make the decision of <em>what</em> to do about it.</p>

<h2 id="personal-projects">Personal Projects</h2>

<p>Software is my profession. But a shipped project is more than just good software. It’s design, product market fit, and all of the other heuristics that go into successful delivery. For solo projects, AI really adds polish I never had capacity for before and maximizes my very limited evening hours.</p>

<h3 id="duly-noted-a-microblog-of-encountered-ideas">Duly Noted: A Microblog of Encountered Ideas</h3>

<p>Where I’ve seen AI help me the most is in cutting the setup cost on my personal projects from days to hours. Something like <a href="https://noted.jsrowe.com/about">https://noted.jsrowe.com/about</a> is a fun project that combines AWS Lambda functions and Python scripts.</p>

<p>The research and development needed to set this up had eluded me in the past. But with AI stringing together AWS docs, a few Python scripts, and an idea, I can bootstrap a solution that works for me. I estimate that what would’ve been an all-weekend project (i.e., not doable) was instead fit into a few hours before bedtime over the course of a week.</p>
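<p>The moving parts are individually simple. For illustration, a hypothetical Lambda handler of the kind that could back a microblog posting endpoint; the event shape and field names are assumptions, not the actual project code:</p>

```python
# Hypothetical handler; the "text" field and response shape are assumptions.
import json

def lambda_handler(event, context):
    """Accept a posted note and echo back what would be stored."""
    note = json.loads(event.get("body") or "{}")
    if not note.get("text"):
        return {"statusCode": 400, "body": json.dumps({"error": "text required"})}
    return {"statusCode": 201, "body": json.dumps({"saved": note["text"]})}
```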

<h3 id="just-cat-mazes-an-ios-app">Just Cat Mazes: An iOS App</h3>

<p>My kids know I build software, so they always ask, “Why can’t you build us an iPad game?!” In the past, researching Swift iOS development and maze-building algorithms and navigating App Store listings were obstacles. With AI, I was able to build <a href="https://apps.apple.com/us/app/just-cat-mazes/id6755058163">Just Cat Mazes</a> in a few weekends. Even the icon was drawn by my daughter and upscaled by AI.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/app-store-just-cat-mazes.png" alt="Just Cat Mazes App Store listing" style="border-radius: 4px;" /></p>

<h3 id="rowe-innovations-llc-hugo-theme">Rowe Innovations, LLC, Hugo Theme</h3>

<p>Another area where AI helps an engineer like me is in basic design principles. Most of my history with personal websites has been downloading a theme and making some light CSS tweaks to it. With AI, I’m able to theme a simple <a href="https://roweinnovations.com">business-card site</a> with Tailwind.<sup id="fnref:tailwind"><a href="#fn:tailwind" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>

<p>I’m not replacing agency work; quotes I’ve solicited for this work came in at $3-5k. I’d never pay that for a static site; I would’ve used a free/paid template. AI is letting me reach further as a solo builder, just as WordPress and Bootstrap made building sites about the content, not the setup.</p>

<h2 id="faster-iterations-same-total-effort">Faster Iterations, Same Total Effort</h2>

<p>I’ve shipped nearly 10× what I could have before AI. But each project still took the same total effort to ship once I account for testing/debugging, integration, and organizational adoption. Writing code felt frictionless, but getting it ready to release took the same amount of calendar time as before AI.</p>

<p>AI didn’t tell me which 20 things to build. Finding the right place to apply your efforts and create leverage is still a human judgment.</p>

<hr />

<p><strong>Significant Revisions</strong></p>

<ul>
  <li>Jan 21st, 2026 Originally published on <a href="https://jsr6720.github.io">https://jsr6720.github.io</a> with uid D8E91BD2-07E2-451B-8E09-939F822604E7</li>
  <li>Dec 16th, 2025 Initial rough draft created.</li>
</ul>

<p><strong>Footnotes</strong></p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:tailwind">
      <p>I wrote this before the news that LLMs are <a href="https://github.com/tailwindlabs/tailwindcss.com/pull/2388#issuecomment-3717222957">breaking Tailwind’s business model</a>. I didn’t even know they had a paid product. I guess I was part of the problem. <a href="#fnref:tailwind" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>James Rowe</name></author><category term="engineering" /><category term="ai," /><category term="programming" /><summary type="html"><![CDATA[This is Part 3 of 5 of my 2025 AI Wrapped series. This post covers what I’ve shipped with 100% AI-generated code. When I first started reflecting on building with AI, this was the first draft. Writing it became the genesis for all the other posts in this series.]]></summary></entry><entry><title type="html">2025 AI Wrapped: My Setup for Programming</title><link href="https://jsr6720.github.io/ai-wrapped-my-setup-for-programming/" rel="alternate" type="text/html" title="2025 AI Wrapped: My Setup for Programming" /><published>2026-01-21T11:02:00+00:00</published><updated>2026-01-21T11:02:00+00:00</updated><id>https://jsr6720.github.io/ai-wrapped-my-setup-for-programming</id><content type="html" xml:base="https://jsr6720.github.io/ai-wrapped-my-setup-for-programming/"><![CDATA[<p><em>This is Part 2 of 5 of my <a href="/ai-wrapped-series/">2025 AI Wrapped</a> series. This post covers my actual development environment setup—the specific tools and workflows I use daily to ship production software with nearly 100% AI-generated code.</em></p>

<p>When I “sit down to get work done,” I open Cursor<sup id="fnref:cursor"><a href="#fn:cursor" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> and two or three CLI sessions (Claude Code and Codex) for concurrent prompting, plus the app running locally. The rule of 7±2 applies here. Any more than this melts my brain; there’s a cognitive limit to how many threads I can keep straight in my mind.</p>

<p>Claude Code Sonnet 4.5 and OpenAI Codex 5.1 are the first AI programming tools that have made me think, <strong>“Even if this is as good as it gets, it’s good enough.”</strong><sup id="fnref:models"><a href="#fn:models" class="footnote" rel="footnote" role="doc-noteref">2</a></sup></p>

<p>After 3-4 hours of intense <a href="https://simonwillison.net/2025/Oct/7/vibe-engineering/">vibe-engineering</a>, I’ll trigger the dreaded warnings: <code class="language-plaintext highlighter-rouge">Heads up, you’ve used over 90% of your 5hr limit</code> followed almost immediately with <code class="language-plaintext highlighter-rouge">Credit Balance too low—Add Funds</code>.<sup id="fnref:plans"><a href="#fn:plans" class="footnote" rel="footnote" role="doc-noteref">3</a></sup></p>

<p>With this multiple-agent setup, I can have one agent focused on adding an endpoint while another builds the corresponding front-end change and a third updates documentation—all while I chat with Claude/GPT about next steps in another tab. Multitasking multiple facets of the same work with AI does not seem to trigger the same “cognitive reset” as context-switching to another body of work.</p>

<p>When I was a frontline software engineer, I never wanted to leave my IntelliJ IDE. In my mind, code was the law; leaving the IDE broke my flow state. Now, I only review files in the IDE when I need to look at a specific file or configure something for the project.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/multiple-terminal-windows-multiple-agents.png" alt="Multiple terminal windows with multiple CLI agents" style="border-radius: 4px;" /></p>

<h2 id="planning-before-generating-code">Planning Before Generating Code</h2>

<p>Prior planning prevents piss-poor performance. This can be accomplished with the explicit <code class="language-plaintext highlighter-rouge">Plan mode</code> in Cursor, or by starting my prompt with <code class="language-plaintext highlighter-rouge">NO CODE, Brainstorm only</code>. Only <em>after</em> I’ve explored the problem via back-and-forth conversations with AI and asked it to explain what it is going to implement will I prompt it to start generating code.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/cursor-agent-dropdown.png" alt="Agent Dropdown in Cursor" style="border-radius: 4px;" /></p>

<p>Another way I use AI to plan before programming is to prompt AI to generate wireframes before implementing them. Even with wireframes, AI isn’t perfect, but I’ve found that when employing these planning techniques, AI seems to get more things right.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/claude-code-wireframe.png" alt="Claude Code wireframe" style="border-radius: 4px;" /></p>

<p>If there’s one universal complaint I have during planning: <em>AI can’t shut up</em>. I don’t have empirical evidence on this, but I feel like no matter which model I use or how much I prompt it to “be concise,” I’m constantly skimming and scrolling hundreds of lines of response output.<sup id="fnref:brevity"><a href="#fn:brevity" class="footnote" rel="footnote" role="doc-noteref">4</a></sup></p>

<h2 id="choosing-a-model">Choosing a Model</h2>

<p>As a paying subscriber, I adopt the latest model as soon as it’s available to me. I have yet to experience wanting to use an older model. With Cursor, I can experiment by explicitly choosing specific models in the Agent chooser, but I dislike “auto” mode because it doesn’t reveal which model is being used to generate output. This makes it impossible to “learn” which model performs the best on specific tasks.</p>

<p>Because my basic plans have weekly/hourly limits, I start with whichever model I have the most credits with. The past three months, I’ve almost exclusively defaulted to <a href="https://www.reddit.com/r/ClaudeAI/comments/1pd14xj/thank_you_for_opus_45/">Opus 4.5</a>; it seems to perform best on complex feature development and documentation. I find myself choosing Codex for deep refactoring and code cleanup tasks.</p>

<p>One benefit to running multiple models—in addition to managing costs—is that all models get stuck in ruts, unable to “code” their way out of recursive errors or repeated implementation mistakes. Switching to another model will often fix what the first cannot.</p>

<h2 id="what-im-still-assessing">What I’m Still Assessing</h2>

<p>I experimented with AGENTS.md/CLAUDE.md files and found them underwhelming.<sup id="fnref:thoughtworks"><a href="#fn:thoughtworks" class="footnote" rel="footnote" role="doc-noteref">5</a></sup> I have written <a href="/ai-wrapped-what-ive-shipped-with-ai/#agent-slash-commands-and-reusable-prompts">slash commands</a>, but I’ve found adding <code class="language-plaintext highlighter-rouge">~/code/prompts/**.md</code> to my path works best to share prompts across vendors.</p>

<p>When MCP first dropped, I thought it would revolutionize how I work, but so far I’ve found it far more effective to bring the relevant context into the CLI/Cursor world view. My tools must be setting up MCP servers in the background; how else is Slack or Confluence aware of my Google Drive documents? But I have not found a compelling reason to integrate MCP into Cursor/CLI for the work I’ve done.</p>

<p>I’m not using git worktrees or any kind of file collision detection. The agents seem to detect when a file under them has changed and will report “re-reading” a file. I try to logically structure my work across different files, but I’ve yet to see a corrupted file even when two processes update the same one. In the rare case I don’t like what was generated, I’m one <a href="/assets/posts-images/2025-ai-wrapped-series/git-revert.png">git revert</a> from having only wasted a few minutes.</p>

<p>KISS principle applies: So far, a vanilla setup is getting the work done. I’ll reassess specialized tooling as it matures in 2026.</p>

<h2 id="one-thing-worth-trying">One Thing Worth Trying</h2>

<p>If there’s one thing worth learning in 2026, it’s how to orchestrate multiple AI agents to push forward multiple vectors on any given problem. AI is perfectly happy to concurrently build React components and Ruby functions and Python libraries in the background.<sup id="fnref:stack"><a href="#fn:stack" class="footnote" rel="footnote" role="doc-noteref">6</a></sup></p>

<p>Before AI, the idea of having 2-3 engineers working at one keyboard would have sounded ridiculous. But that’s essentially what concurrent AI sessions do—multiple sessions writing to many files while you’re free to move to the next problem. AI is a force multiplier because you are no longer bound to one input cursor in one application.</p>

<hr />

<p><strong>Significant Revisions</strong></p>

<ul>
  <li>Jan 21st, 2026 Originally published on <a href="https://jsr6720.github.io">https://jsr6720.github.io</a> with uid 1E26406A-86E8-47D4-AA19-29F6C31592DD</li>
  <li>Dec 16th, 2025 Initial rough draft created.</li>
</ul>

<p><strong>Footnotes</strong></p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:cursor">
      <p>Cursor’s ability to work with screenshots is a KILLER feature. <a href="#fnref:cursor" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:models">
      <p>In the time it has taken me to write this, Opus 4.5 and Codex 5.2 have come out. <a href="#fnref:models" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:plans">
      <p>All personal examples using Anthropic Claude and OpenAI GPT with $20/month plans. <a href="#fnref:plans" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:brevity">
      <p>Sometimes I use the CLI to brainstorm not only because it has access to the code but because CLI responses tend to be under 200 words no matter what. <a href="#fnref:brevity" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:thoughtworks">
      <p>It seems <a href="https://www.thoughtworks.com/radar/techniques/agents-md">Thoughtworks</a> has made the same assessment about AGENTS.md. <a href="#fnref:thoughtworks" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:stack">
      <p>Most everything I’ve written with AI is in TypeScript (React/GitHub Actions), Python, Ruby, and shell scripts. <a href="#fnref:stack" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>James Rowe</name></author><category term="engineering" /><category term="ai," /><category term="programming" /><summary type="html"><![CDATA[This is Part 2 of 5 of my 2025 AI Wrapped series. This post covers my actual development environment setup—the specific tools and workflows I use daily to ship production software with nearly 100% AI-generated code.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jsr6720.github.io/assets/posts-images/2025-ai-wrapped-series/multiple-terminal-windows-multiple-agents.png" /><media:content medium="image" url="https://jsr6720.github.io/assets/posts-images/2025-ai-wrapped-series/multiple-terminal-windows-multiple-agents.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">2025 AI Wrapped: Evolution of Using AI Every Day</title><link href="https://jsr6720.github.io/ai-wrapped-evolution-of-using-ai-every-day/" rel="alternate" type="text/html" title="2025 AI Wrapped: Evolution of Using AI Every Day" /><published>2026-01-21T11:01:00+00:00</published><updated>2026-01-21T11:01:00+00:00</updated><id>https://jsr6720.github.io/ai-wrapped-evolution-of-using-ai-every-day</id><content type="html" xml:base="https://jsr6720.github.io/ai-wrapped-evolution-of-using-ai-every-day/"><![CDATA[<p><em>This is Part 1 of 5 of my <a href="/ai-wrapped-series/">2025 AI Wrapped</a> series. This post covers my evolution from “CHOP” programming to integrated CLI/Cursor workflows, why I now build prototypes instead of PowerPoints, and how AI makes it possible for engineering managers to stay technical without impeding their teams.</em></p>

<p>AI makes programming viable for engineering managers again. I’ve been <a href="https://lethain.com/writers-who-operate/">building</a> with AI this past year, and it has evolved from a frustratingly limited chatbot (2023), to a useful but context-limited research assistant (2024), to something that actually completes complex programming tasks (2025).</p>

<p>The biggest evolution in my work: I can now validate ideas at the speed I used to pitch them. So instead of theorizing about how something might work, I’m validating that it <strong>can</strong> work. By building with AI, I’m “showing” proof of work instead of “telling.”</p>

<p>Before AI, if I had an idea I wanted to explore, I would have to either interrupt my fellow engineers, spend hours muddling through a “hello world” setup, or worse, <a href="/conversation-experience-chatgpt-vs-stack-overflow/">be shut down on StackOverflow</a>. Many ideas died before they could even be evaluated for feasibility.</p>

<p>Now, with AI, I build in the same timeframe I used to spend on PowerPoints and one-pagers. I’m fitting build iterations into gaps between meetings, validating ideas while protecting scarce engineering time. Organizational buy-in still matters, but now I show up to the meeting with working prototypes instead of just a PowerPoint.</p>

<p>Given the choice between building PowerPoints or prototypes, <strong>I choose the prototype, every time.</strong></p>

<h2 id="evolution-of-ai-tools-and-my-github-contributions">Evolution of AI Tools and My GitHub Contributions</h2>

<p>This shift didn’t happen overnight. <a href="/how-i-use-llm-ai-tools-everyday/">A year ago</a>, I was using chat-based AI alongside my work, primarily brainstorming ideas and writing code using <a href="https://sourcegraph.com/blog/chat-oriented-programming-in-action">“CHOP”</a>-style programming, i.e., copy/pasting code to/from a web chat agent. Results were so unreliable and inefficient that I would review every proposed changeset as if inspecting each widget coming off an assembly line.</p>

<p>During 2025, AI has gone from “nice to have” to critical for collapsing the learning curve and navigating project domains. <a href="/ai-wrapped-my-setup-for-programming/">My setup</a> is Anthropic Claude Code, OpenAI Codex, and Cursor as an integrated workflow. In 2024, I used one chat window; now I run multiple AI agents concurrently—generating code, brainstorming next steps, and running terminal commands.</p>

<p>With the newest frontier models (GPT 5.1, Claude 4.5), 95 percent<sup id="fnref:imperfect"><a href="#fn:imperfect" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> of AI-generated code compiles and is useful enough to iterate upon. Now, instead of inspecting each unit of generated code, I’ll use GitHub Desktop to review the changeset, use more AI cycles to improve code quality, and finally push it for peer review.</p>

<p>My GitHub activity shows this evolution in three distinct phases.</p>

<p>2023: This was the “engineering managers don’t have time to code” phase. I did very little hands-on work, and my trying to code would only have burdened my team. The contribution chart below reflects the occasional PR review and some prototype/hackathon work in October. Even with advancing releases of ChatGPT, it just wasn’t efficient to build with. Note: Zero weekend activity on personal projects.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/github-profile-2023.png" alt="GitHub Contributions 2023" style="border-radius: 4px;" /></p>

<p>2024: AI would frequently reach context limits and lose the plot when working with long conversations. AI was also unable to iterate on its own changes, often clobbering its previous work; most of these commits are in the CHOP style of programming. See also: Weekend work increasing because AI models are getting good enough to accelerate my personal projects.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/github-profile-2024.png" alt="GitHub Contributions 2024" style="border-radius: 4px;" /></p>

<p>2025: By early 2025, I had fully embraced a Cursor/Claude Code/Codex setup. I’m building real solutions and, frankly, <em>loving</em> programming again. AI helps me orient on a project, accomplish tasks, and have meaningful impact with tiny slices of time.</p>

<p><img src="/assets/posts-images/2025-ai-wrapped-series/github-profile-2025.png" alt="GitHub Contributions 2025" style="border-radius: 4px;" /></p>

<p>The best engineering managers operate where they can multiply their team’s effectiveness. For years, this meant staying out of the code entirely and slowly losing their relatability with newer technologies. With AI, engineering leaders don’t have to choose between staying technical and staying effective. AI has collapsed the cost of personal experimentation—and in doing so, it’s evolving what the engineering manager role can be.</p>

<p>During 2025, with the help of AI, I’ve <a href="/ai-wrapped-what-ive-shipped-with-ai/">shipped</a> 20+ software solutions that span personal projects, prototypes/PoCs, back-office tools, small feature work, and bug fixes. None of this was possible before AI.</p>

<hr />

<p><strong>Significant Revisions</strong></p>

<ul>
  <li>Jan 21st, 2026 Originally published on <a href="https://jsr6720.github.io">https://jsr6720.github.io</a> with uid A4DC88EC-35BC-420A-8834-0EB2F1F1AD66</li>
  <li>Dec 16th, 2025 Initial rough draft created.</li>
</ul>

<p><strong>Footnotes</strong></p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:imperfect">
      <p>This hasn’t been perfect. See <a href="/ai-wrapped-what-hasnt-worked/">“What Hasn’t Worked”</a> for the other 5%. <a href="#fnref:imperfect" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>James Rowe</name></author><category term="engineering" /><category term="ai," /><category term="programming" /><summary type="html"><![CDATA[This is Part 1 of 5 of my 2025 AI Wrapped series. This post covers my evolution from “CHOP” programming to integrated CLI/Cursor workflows, why I now build prototypes instead of PowerPoints, and how AI makes it possible for engineering managers to stay technical without impeding their teams.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jsr6720.github.io/assets/posts-images/2025-ai-wrapped-series/github-profile-2025.png" /><media:content medium="image" url="https://jsr6720.github.io/assets/posts-images/2025-ai-wrapped-series/github-profile-2025.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Does Anyone Know a Good Software Engineer?</title><link href="https://jsr6720.github.io/does-anyone-know-a-good-software-engineer/" rel="alternate" type="text/html" title="Does Anyone Know a Good Software Engineer?" /><published>2026-01-18T02:42:02+00:00</published><updated>2026-01-18T02:42:02+00:00</updated><id>https://jsr6720.github.io/does-anyone-know-a-good-software-engineer</id><content type="html" xml:base="https://jsr6720.github.io/does-anyone-know-a-good-software-engineer/"><![CDATA[<p>You’ve either asked this question or someone has asked you: “Do you know a good <em>__</em>?” A project needs to be done, and you’re terrified of hiring the <em>wrong</em> person. The contractor who leaves your house half-finished for months. Or, worse, someone who builds a façade of completion only for it to fail inspection or fall apart under real-life use.</p>

<p>Software engineering is having that same moment with vibe-coded apps that <a href="https://www.businessinsider.com/tea-app-data-breach-cybersecurity-ai-vibe-coding-safety-experts-2025-8">fail in production</a>. Now, instead of Ryobi packout kits and HGTV convincing homeowners they’re contractors, it’s AI-generated code convincing non-engineers they’re programming experts.</p>

<h2 id="big-box-stores-made-everyone-a-contractor">Big Box Stores Made Everyone a Contractor</h2>

<p>Big-box home improvement stores have been a boon to those of us who enjoy working on our homes.<sup id="fnref:risk"><a href="#fn:risk" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> My DIY projects—running an extra outlet, replacing fixtures, and patching drywall—have earned polite nods from the electricians, carpenters, and plumbers I’ve had at my house to do serious work. The same polite nods software engineers give “vibe-coded” PRs.</p>

<p>Today, AI can generate code almost instantly.<sup id="fnref:syntax"><a href="#fn:syntax" class="footnote" rel="footnote" role="doc-noteref">2</a></sup> Professional craftsmanship distinguishes a software engineer from a technical product manager with a vision. Just as contractors build to code, professional engineers convert prototypes into reliable, secure, observable systems that survive real users.</p>

<p><strong>AI can start any idea; experts finish it</strong></p>

<p>The explosion of AI-generated projects is a mirror of the HGTV-inspired DIY Homeowner Special. AI makes starting ideas cheap. It can be used as a <a href="https://www.fastcompany.com/91452231/ai-is-turning-product-managers-into-builders">rapid-prototype builder</a> and accelerate writing code from days to hours.</p>

<p>But what AI cannot do is compensate for the limitations of someone’s expertise. AI assumes everything will go according to plan. Professionals know it never does. When you open the wall and find knob-and-tube wiring, or the product team pivots requirements mid-sprint, you need someone who knows how to adapt—not just follow instructions. Recursively prompting AI to “fix the last error, no mistakes” is like painting over mold: The surface is clean, but the underlying rot remains.</p>

<p>Every project reaches a point where specialized<sup id="fnref:labor"><a href="#fn:labor" class="footnote" rel="footnote" role="doc-noteref">3</a></sup> experts are needed, and I for one am ecstatic to see so many new projects being started with AI; it means more projects will need experts to ship them. So, when someone asks, “Does anyone know a good software engineer,” I know that what they’re really asking is, “Who can we trust to finish what we started?”</p>

<hr />

<h3 id="significant-revisions">Significant Revisions</h3>

<ul>
  <li>Jan 18th, 2026 Originally published on <a href="https://jsr6720.github.io">https://jsr6720.github.io</a> with uid 6956020C-60C0-4AB1-AB55-948CA8DDA6E1</li>
  <li>Dec 27th, 2025 Draft Created - extracted from my work on <a href="/ai-wrapped-series/">2025 AI Wrapped series</a></li>
</ul>

<h3 id="footnotes">Footnotes</h3>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:risk">
      <p>I do the small jobs, the long tail of home repair, because the risk/reward trade-off makes it cost effective to do myself. I also never start a job without having a professional in mind to come bail me out. <a href="#fnref:risk" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:syntax">
      <p>Every year, SWE is less about writing good syntax and more about building solutions. <a href="#fnref:syntax" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:labor">
      <p>As organizations grow, specialization occurs through division of labor. This is why every profession needs a broad base of junior apprentices/journeymen. Today’s juniors are tomorrow’s seniors. <a href="#fnref:labor" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>James Rowe</name></author><category term="engineering" /><category term="ai," /><category term="programming" /><summary type="html"><![CDATA[You’ve either asked this question or someone has asked you: “Do you know a good __?” A project needs to be done, and you’re terrified of hiring the wrong person. The contractor who leaves your house half-finished for months. Or, worse, someone who builds a façade of completion only for it to fail inspection or fall apart under real-life use.]]></summary></entry><entry><title type="html">AI Predictions for 2026</title><link href="https://jsr6720.github.io/ai-predictions-for-2026/" rel="alternate" type="text/html" title="AI Predictions for 2026" /><published>2026-01-11T19:10:20+00:00</published><updated>2026-01-11T19:10:20+00:00</updated><id>https://jsr6720.github.io/ai-predictions-for-2026</id><content type="html" xml:base="https://jsr6720.github.io/ai-predictions-for-2026/"><![CDATA[<p>While writing my <a href="/ai-wrapped-series/">2025 AI Wrapped</a> series and reading <a href="https://www.wired.com/story/backchannel-2026-predictions-tech-robots-ai/">others’</a> <a href="https://simonwillison.net/2026/Jan/8/llm-predictions-for-2026/">AI</a> <a href="https://garymarcus.substack.com/p/six-or-seven-predictions-for-ai-2026">predictions</a>, I’m publishing some of my own.</p>

<h3 id="the-human-element">The Human Element</h3>

<ul>
  <li>The human touch is and will always be a paid premium service. Trust and complexity still require people, not prompts.</li>
  <li>Building software will remain a human-led team endeavor. <a href="https://ordep.dev/posts/writing-code-was-never-the-bottleneck">Writing code was never the limiting factor to shipping</a>.</li>
  <li><a href="https://www.businessinsider.com/meta-vibe-coding-build-prototype-apps-mark-zuckerberg-2025-11">Product managers</a> and <a href="https://jackcaldwell.dev/articles/the-product-minded-engineer">product-oriented engineers</a> who build with AI as an accelerator will shine. Pitching prototypes will be more important than pitching decks—the world of “show me.”</li>
</ul>

<h3 id="governance--accountability">Governance &amp; Accountability</h3>

<ul>
  <li>Accountability will remain with someone who knows how to verify and fix outputs. That won’t be AI; that will be a person. When the pipes leak, nobody asks the wrench; they call the plumber.</li>
  <li>Human domain expertise will become the validation layer to AI-generated outputs. AI is very good at generating plausible-sounding nonsense.</li>
  <li>Good corporate governance will demand data provenance and introspection capabilities for AI systems. Think Sarbanes-Oxley but for AI models. Companies need to know “how the sausage is made” and that they won’t violate any laws with AI solutions they’re implementing.<sup id="fnref:lawsuits"><a href="#fn:lawsuits" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></li>
  <li>Token consumption will become an established line item in engineering department budgets, just as SaaS subscriptions and AWS spend are broken down by resource types.</li>
</ul>

<h3 id="market-forces">Market Forces</h3>

<ul>
  <li>AI agents’ “memories” and personal contexts will become the moat. Just as it is the apps, not the hardware specs, that keep you from switching from iOS to Android, it is the historical context that companies have about you that will keep you from switching AI providers.<sup id="fnref:privacy"><a href="#fn:privacy" class="footnote" rel="footnote" role="doc-noteref">2</a></sup></li>
  <li>VC subsidies will collapse and product enshittification will ensue. Companies have been focused on how to get LLMs working, not how to make them profitable. Ads are coming.<sup id="fnref:ads"><a href="#fn:ads" class="footnote" rel="footnote" role="doc-noteref">3</a></sup></li>
  <li>Frontier models (Claude Sonnet 4.5/OpenAI GPT 5.1) are good enough for production use. Getting investment dollars to train new models will become more difficult.<sup id="fnref:investments"><a href="#fn:investments" class="footnote" rel="footnote" role="doc-noteref">4</a></sup></li>
  <li>The rush to ink deals and outlay billions to build out data centers will fail to materialize in 2026. Not only do physical construction and permitting take years, but tech companies’ hubris has also blinded them to the mounting resistance to data center expansion that has been <a href="https://www.economist.com/united-states/2025/10/30/the-data-centre-backlash-is-brewing-in-america">steamrolling local communities</a>.</li>
</ul>

<hr />

<h3 id="significant-revisions">Significant Revisions</h3>

<ul>
  <li>Jan 11th, 2026 Originally published on <a href="https://jsr6720.github.io">https://jsr6720.github.io</a> with uid 1E3F5F54-892B-448F-B80C-00043F851288</li>
  <li>Dec 31st, 2025 Draft Created. A little late on starting predictions.</li>
</ul>

<h3 id="footnotes">Footnotes</h3>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:lawsuits">
      <p>I do think lawsuits will continue to be brought against AI companies for their scraping of data and fair use claims. But I don’t know enough about this topic to stake a claim on it. <a href="#fnref:lawsuits" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:privacy">
      <p>And probably another reason to delete your account from time to time. <a href="#fnref:privacy" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:ads">
      <p>That didn’t take long. <a href="https://x.com/OpenAI/status/2012223373489614951">OpenAI testing ads</a> <a href="#fnref:ads" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:investments">
      <p>The billions spent so far to build existing models will not be recovered. <a href="#fnref:investments" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>James Rowe</name></author><category term="engineering" /><category term="ai," /><category term="predictions" /><summary type="html"><![CDATA[While writing my 2025 AI Wrapped series and reading others’ AI predictions, I’m publishing some of my own.]]></summary></entry></feed>