<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Aaron Russell]]></title><description><![CDATA[Aaron Russell's personal blog about design, development, AI, tech, and life.]]></description><link>https://aaronrussell.dev/</link><image><url>https://aaronrussell.dev/icon-512.png</url><title>Aaron Russell</title><link>https://aaronrussell.dev/</link></image><generator>RSS for Node</generator><lastBuildDate>Mon, 16 Feb 2026 17:06:28 GMT</lastBuildDate><atom:link href="https://aaronrussell.dev/feed.xml" rel="self" type="application/rss+xml"/><item><title><![CDATA[Agentic engineering]]></title><description><![CDATA[<div><img src="https://aaronrussell.dev/_astro/letting-go.BSDhixSf_u2mlC.webp" srcset="/_astro/letting-go.BSDhixSf_u2mlC.webp 1x, /_astro/letting-go.BSDhixSf_ZEpSXl.webp 2x" /></div><p>A year has passed since Andrej Karpathy <a href="https://x.com/karpathy/status/1886192184808149383">coined the phrase “vibe coding”</a>. He described a new way of coding with AI - sitting back, loosely throwing prompts into Cursor, clicking “Accept All”, barely reading the code. It was provocative, but it captured something real about how AI was changing what it feels like to write software - and even what it means to be a software engineer.</p>
<p><a href="https://aaronrussell.dev/posts/vibe-coding">In my own take</a>, I argued we were at the start of a transition from craft coding to vibe coding. The direction of travel seemed inevitable, but the journey would be longer and bumpier than some were promising. AI was great fun for throwaway projects, but not for serious, complex software. Not yet, at least.</p>
<p>This wasn’t a contrarian take. At the time, models struggled with non-trivial problems, and tools like Claude Code and Codex hadn’t even been released. What I didn’t appreciate is that in AI, a year is a very long time.</p>
<h2 id="inflection-point">Inflection point</h2>
<p>Something shifted in late 2025. Yes, the models got better. Claude Opus 4.5 and OpenAI’s GPT-5.1-Codex both landed in the final months of 2025, and the jump in capability was significant. But what really caught my attention was watching how developers’ behaviour was changing.</p>
<p>Karpathy himself described going from <a href="https://x.com/karpathy/status/2015883857489522876">80% manual coding to 80% agent coding</a> in the space of a few weeks, calling it “the biggest change in ~2 decades of programming.” And it wasn’t just him. Developers I know or have followed for years - people who’d been cautious about AI, even sceptical - suddenly started talking (a lot) about how they were using Claude Code.</p>
<p>Then along came <a href="https://openclaw.ai/">OpenClaw</a> (previously ClawdBot, then MoltBot). Unless you’ve been living under a rock, you’ve heard of it by now. The open source AI agent platform, built by one developer, Peter Steinberger, went from <a href="https://www.cnbc.com/2026/02/02/openclaw-open-source-ai-agent-rise-controversy-clawdbot-moltbot-moltbook.html">zero to 100,000 GitHub stars in days</a>, spawned mainstream media coverage, a trademark dispute with Anthropic, and suddenly everyone was talking about it, installing it, breaking things with it. What’s most remarkable is how quickly it happened. Looking at the timeline and the scope of the codebase, you just know this wouldn’t have been possible for one person without AI.</p>
<p>If the early 2025 position was that AI coding worked for toys but not for real engineering, that position is getting harder and harder to defend.</p>
<h2 id="developer-ego">Developer ego</h2>
<p>It’s taken me a while to recognise this, but adopting AI isn’t just about new tools and workflows. A big (and understated) part of it is letting go of our ego.</p>
<p>AI code - even today - rarely looks like code that I would write. It’s more verbose, it duplicates things, it has weird patterns that just <em>smell</em> unmistakably of AI. And sometimes it’s worse than that - it’s simply wrong. An agent will get itself into a tangle, “fixing” problems with the wrong solution and digging itself deeper into a hole of its own making.</p>
<p>These limitations are weirdly comforting. They let developers argue we’re still needed. Not just needed - <em>superior</em>. We can look at AI code and think: “I wouldn’t do it like that. My code is better.” And honestly, I think we like being the craftsman and the problem solver. We get fulfilment from it. It’s natural to resist change that threatens to take that away.</p>
<p>I’m not alone in this. Karpathy admitted the shift to agentic coding “hurts the ego a bit.” Eric S. Raymond recently observed that programmers have <a href="https://x.com/esrtweet/status/2019271201311322570">“a tremendous amount of ego and identity invested in the craft of coding”</a>. A developer on the <a href="https://stackoverflow.blog/2026/01/23/ai-can-10x-developers-in-creating-tech-debt/">Stack Overflow podcast</a> compared it to going from “a craftsman whittling a perfect chair” to “a factory manager at Ikea.”</p>
<p>Whether we publicly acknowledge it or not, I think most developers - especially those who’ve been around a while - are going through something that feels like a professional identity crisis. I know I’ve had moments wondering if my career has had its best days. Am I done?</p>
<p>I realised I needed a deliberate shift in how I think about my role. I’ve become far more “vibe coder” than I would have been comfortable admitting a year ago. A more maximalist view where the machines are responsible for writing the code, writing tests, reviewing code, fixing bugs. I don’t read every line of code AI produces - it’s impossible to. But I maintain a high-level understanding of the architecture, just enough to steer the AI. Occasionally, if it gets stuck, I’ll dive in far enough to understand why. I rarely fix a bug directly - I just point the AI in the right direction.</p>
<p>All of this feels uncomfortable. It grates not just against my ego, but against decades of collective wisdom about what good software engineering looks like. But that’s what makes this moment revolutionary - the rules are being rewritten, whether we like it or not.</p>
<h2 id="making-peace-with-slop">Making peace with slop</h2>
<p>One telltale sign of a developer who’s embraced AI development is their GitHub contributions heatmap suddenly turning to a shade of green that dazzles compared to previous weeks. But all that code - which, let’s be honest, is at best getting a cursory glance from its human committer - raises legitimate questions about quality, maintainability and security.</p>
<p>The slopacolypse is real and it’s happening. The way I’ve come to see it isn’t to worry whether the AI code is sloppy (it is), but to ask what kind of slop actually matters.</p>
<p>There’s one bucket I’d call <em>stylistic slop</em>: verbose code, odd patterns, unnecessary abstractions. Things that offend your sensibilities as a craftsman, but don’t actually break anything. The code works, and it would have taken you weeks to write by hand. Learning to live with this is probably in your interests now.</p>
<p>Then there’s the more dangerous stuff - <em>toxic slop</em>: real security vulnerabilities, exposed APIs, architectural decisions that create genuine risk. This is what matters most, and spotting it in the firehose of code flying at you from all directions is a hard problem that AI engineers need to solve.</p>
<p>OpenClaw illustrates both sides perfectly. It’s proof that one developer with AI can build something that captivates millions of people in a matter of weeks. It’s also a case study in what happens when toxic slop ships at scale. Within weeks of going viral: a <a href="https://ethiack.com/news/blog/one-click-rce-openclaw">critical RCE vulnerability was discovered</a>; over <a href="https://wz-it.com/en/blog/openclaw-secure-deployment-security-hardening-guide/">42,000 control panels</a> were found exposed to the internet; some <a href="https://opensourcemalware.com/blog/clawdbot-skills-ganked-your-crypto">386 malicious skills</a> were identified in the ecosystem; and the spinoff site Moltbook - a social network exclusively for AI agents - <a href="https://www.the180i.com/1-5-million-api-keys-were-exposed-on-moltbook-anyone-could-have-impersonated-andrej-karpathy/">leaked 1.5 million API keys</a> within days of launching.</p>
<p>OpenClaw is simultaneously the best advert for AI engineering and the strongest argument against it. What I take from this is that the line between “this offends my craftsmanship” and “this is actually wrong” is fuzzy. We need to focus human judgement where it counts most - on the architecture, the security and the real-world quality of what we ship.</p>
<h2 id="rip-vibe-coding">RIP vibe coding</h2>
<p>I recognise there’s a bit of snobbery here, but I cringe every time I hear the words “vibe coding”. The term has leaked into wider discourse, and I’ve even heard my normie friends use it. To my mind, it’s become a pejorative - synonymous with low-effort hacks, security-flawed apps, MVPs that fall apart the moment real users turn up.</p>
<p>So I was glad to see Karpathy has effectively <a href="https://x.com/karpathy/status/2019137879310836075">retired his own term</a>, proposing “agentic engineering” as a replacement. His point is that the tools have matured past the stage where “vibing” describes what’s actually happening. Addy Osmani <a href="https://addyo.substack.com/p/vibe-coding-is-not-the-same-as-ai">makes the same distinction</a>.</p>
<p>Subagents, agent swarms, multi-agent orchestration, concepts like the <a href="https://devinterrupted.substack.com/p/inventing-the-ralph-wiggum-loop-creator">Ralph Wiggum loop</a> - terms that didn’t exist eighteen months ago - are now common parlance among developers working with AI. It’s still a wild west - pioneers figuring out the best way to do this stuff. But we’ve come a long way from “build me a Flappy Bird clone.”</p>
<p>I don’t have this figured out - none of us do. It’s disorientating realising that the thing you thought you were good at suddenly has a new rulebook. That inner struggle is real, and it won’t be unique to software engineering. It’s just that we got here first.</p>]]></description><link>https://aaronrussell.dev/posts/agentic-engineering</link><guid isPermaLink="false">27de36be-decb-5cc8-906d-1e98d5c621d0</guid><pubDate>Sat, 14 Feb 2026 00:00:00 GMT</pubDate></item><item><title><![CDATA[30 Days 1: Slopacolypse]]></title><description><![CDATA[<div><img src="https://aaronrussell.dev/_astro/30-days.B1ipNdLn_1pgu1g.webp" srcset="/_astro/30-days.B1ipNdLn_1pgu1g.webp 1x, /_astro/30-days.B1ipNdLn_Zuim0T.webp 2x" /></div><p>Inspired by my friend Jordan’s <a href="https://weeknotes.elver.me/">weeknotes blog</a>, I’m trying something new: a monthly collection of thoughts, links, and reflections. Weekly would never last, and honestly I have my doubts about monthly too, but it’s a new year so let’s give it a go.</p>
<h2 id="-its-almost-the-end-of-january">😱 It’s almost the end of January</h2>
<ul>
<li>I always get a little positivity boost every new year. There’s something cleansing about getting back into a routine after a week of Christmas, turning a new page, and starting new projects.</li>
<li>But January has a habit of kicking any positivity in the balls. It’s cold, wet, dark and miserable. Sometimes there are days when the only outside I see is a 15-minute dog walk that even the dog is unhappy about.</li>
<li>And oh my god, don’t get me started on the news. Seriously considering a news/social media detox for my sanity.</li>
</ul>
<h2 id="-shiny-new-websites">✨ Shiny new websites</h2>
<ul>
<li>This website you’re looking at now is a new one. <a href="https://aaronrussell.dev/posts/the-cobblers-children-finally-have-shoes">I wrote about it here</a>.</li>
<li>My consultancy website, <a href="https://pushcode.com/">Push Code</a>, also got a fresh lick of paint this month. The plan is to keep the business site ultra minimal, more like a business card site, and share more about the work I do over on this site.</li>
</ul>
<h2 id="-slopacolypse">💦 <a href="https://x.com/karpathy/status/2015883857489522876">Slopacolypse</a></h2>
<ul>
<li>Definitely my favourite word of the year so far. Karpathy really does have a knack for capturing the zeitgeist of this era of computing.</li>
<li>I’m noticing more and more people, who previously I would have described as AI cautious (if not outright sceptical), suddenly espousing AI coding and telling the world about it.</li>
<li>Definitely feels like we’ve hit some kind of inflection point. Opus 4.5 and GPT-5.2 Codex have made the benefits so compelling that the slop is a price worth paying.</li>
<li>Talking of slop, someone sent me a pull request for one of the OSS packages I maintain that was very clearly vibe coded and essentially rewrote every single line of code. Please don’t do that.</li>
</ul>
<h2 id="-hype-bot">🤩 <a href="https://clawd.bot/">Hype-bot</a></h2>
<ul>
<li>It was impossible last weekend to use X without being bombarded by a wall of posts proclaiming, <em>“OMG this is INSANE”</em>, or <em>“How I automated my entire life with this one free tool”</em>… they were of course talking about <del>Clawdbot</del>, erm, <a href="https://x.com/moltbot/status/2016058924403753024">Moltbot</a>.</li>
<li>Being as susceptible to clear and obvious overhype as the next man, I immediately downloaded and installed it. Within 30 minutes I had a personal bot, which I named Claudette, up and running on a free Oracle Cloud VPS. Definitely didn’t see a need to buy a Mac Mini for it. Genuinely, I loved it and had a fun few days playing around.</li>
<li>For now I would strictly recommend setting this up on a machine you don’t mind fucking up. It’s YOLO mode cranked to 11. I tried to link my Mac via the companion app so Claudette could access my Things via the <code>things-cli</code>. It turned out she didn’t have <code>/opt/homebrew/bin</code> in her <code>$PATH</code>, and I caught her messing around with LaunchAgent <code>.plist</code> files to update the <code>$PATH</code> environment variable. I find that impressive and terrifying in equal measure.</li>
<li>Over a few days of light experimental use, Claudette was burning about $10 of tokens every single day 😬.</li>
<li>Clawdbot was a daft name in an <a href="https://x.com/steipete/status/2016079236780449975">obvious brand-infringing</a> kind of way.</li>
</ul>
<h2 id="-tailwind-drama">🔥 <a href="https://github.com/tailwindlabs/tailwindcss.com/pull/2388#issuecomment-3717222957">Tailwind drama</a></h2>
<ul>
<li>Big web-dev drama of the month was Tailwind letting 75% of its dev team go, due to - according to its creator - <em>“the brutal impact AI has had on our business”</em>.</li>
<li>I love Tailwind and use it in almost every project I start. This is a real shame and I hope they can find a way forward.</li>
<li>Intuitively, I’ve never felt open source is a particularly solid foundation for a standalone business. I guess by now Tailwind has established itself as too important to not exist. Both <a href="https://x.com/rauchg/status/2009336725043335338">Vercel</a> and <a href="https://x.com/OfficialLoganK/status/2009339263251566902#m">Google</a> have stepped in to sponsor and if I was a betting man I’d put money on one of them acquiring Tailwind in the coming weeks.</li>
</ul>
<h2 id="️-are-you-yanking-my-pizzle">⚔️ Are you yanking my pizzle?</h2>
<ul>
<li>No more <a href="https://www.deepsilver.com/games/kingdom-come-deliverance-ii">Kingdom Come Deliverance 2</a> for me - I finally got to the end of the medieval Bohemia-themed RPG last week. Honestly, I think my family are pleased about this.</li>
<li>I’ve been in love with this game. Despite having its fair share of murder and pillaging, it’s a pretty wholesome and charming game, with a good <a href="https://www.youtube.com/watch?v=2l2ebeSBMfs">sense of humour</a>. I found myself fully invested in Henry’s character, and the dynamic between Henry and the other main characters - especially Hans - is superbly written.</li>
<li>When my other half asked me where I’d like to go on holiday this year, the only place I could think of was, <em>“erm, Bohemia”</em>. And so it is, that’s where we’re going. Jesus Christ be Praised!</li>
</ul>]]></description><link>https://aaronrussell.dev/posts/30days-slopacolypse</link><guid isPermaLink="false">b0caa315-8fb0-5104-adb6-9c16b999c8e6</guid><pubDate>Thu, 29 Jan 2026 00:00:00 GMT</pubDate></item><item><title><![CDATA[The cobbler's children finally have shoes]]></title><description><![CDATA[<div><img src="https://aaronrussell.dev/_astro/cobblers-shoes.00XGxqvk_2W6nh.webp" srcset="/_astro/cobblers-shoes.00XGxqvk_2W6nh.webp 1x, /_astro/cobblers-shoes.00XGxqvk_1NATFc.webp 2x" /></div><p>I’m not sure where the proverb <em>“the cobbler’s children have no shoes”</em> comes from, or why cobblers specifically get called out, but the irony is universal: people with the skills to make things for themselves often don’t, because they’re too busy making them for everyone else.</p>
<p>I propose a modern equivalent: <em>“The programmer’s website is under construction.”</em></p>
<p>The last time I launched a site with my name in the domain was, embarrassingly, over 17 years ago. So what you’re looking at right now - <a href="https://www.aaronrussell.dev">my own new website</a> - is the equivalent of a cobbler’s child in a pair of shiny new shoes!</p>
<h2 id="the-gap-years">The gap years</h2>
<p>Back in 2008 - the last time I launched a website for myself - I blogged regularly about design, development, and whatever nerdy stuff I was excited about that week. But over time, life got busy, client work picked up, and my own site - unloved and neglected - quietly died.</p>
<p>For over a decade, I didn’t have a proper online home. Which is wild for someone who makes a living on the internet. My consultancy, <a href="https://www.pushcode.com">Push Code</a>, has had a few licks of paint, but even that has had periods of neglect, and it’s never been <em>my</em> space. I was active on Twitter (back when it used to be good) but as social networks have slid into the bot-filled, rage-inducing hellholes they are today, I’ve felt more and more digitally homeless.</p>
<p>In 2024 I launched <a href="https://2point0.ai">2point0.ai</a> as a way to get back into writing online. I kept it up for just over a year, posting about AI in development, and it felt good to be writing again. But the focus was too narrow. It was a blog about AI, not a home for everything I do. I realised I needed to go back to basics - a personal website, where I can show off my work, and write about whatever I want.</p>
<h2 id="how-i-built-it">How I built it</h2>
<p>Unlike the proverbial cobbler - whose services must have been in constant demand, judging by the whole child/shoe situation - I’ve been through a period of what you might politely call <em>“in-between opportunities”</em>.</p>
<p>With no client requirements (or clients, for that matter) to worry about, I got started on designing and building this site. No <a href="https://www.aaronrussell.dev/posts/vibe-coding">vibe coding</a> - just plain, handcrafted HTML, CSS and JavaScript. Like the good old days.</p>
<p>Under the hood it’s Astro, Tailwind, and a minimal sprinkling of Svelte in a couple of components. I love Astro - it gives you framework conveniences without the framework baggage, while always keeping you close to the web fundamentals. It gets the balance right - you feel like you’re building a website, not an application.</p>
<p>I had some fun with the <a href="https://aaronrussell.dev/about">about</a> page, where I built an interactive chronology that tells my story from a 15-year-old kid onwards, featuring some AI-generated, comic book style images of me through the years (with progressively less hair). The <a href="https://aaronrussell.dev/work">work</a> page covers a mix of client projects, my open source Bitcoin and blockchain work, and recent dabblings in AI tooling.</p>
<p>And then there’s the <a href="https://aaronrussell.dev/blog">blog</a> - a place for me to just <em>be</em>. I expect I’ll write about development, the web, AI and other nerdy stuff - but also whatever else I feel like writing about. I’ve migrated a few posts from <a href="https://2point0.ai">2point0.ai</a>, but this is a fresh start: no constraints, no narrow focus - just writing about what interests me.</p>
<h2 id="whats-next">What’s next</h2>
<p>New year, new website. Feels like a good way to start 2026. So what else have I got cooking?</p>
<p>I’ve got a whiteboard with half a dozen rough project ideas - a mix of AI-based apps and products, plus some open source projects I want to take a proper look at. My goal is to ship at least two <em>things</em> in the first half of this year. I also want to get better at AI-assisted coding: finding that elusive balance between letting the agents run with the code <em>and</em> keeping tight control of direction and quality.</p>
<p>I’m also open to new opportunities - contract work or even the right full-time role. If you’re building something interesting and think we might be a good fit, <a href="https://aaronrussell.dev/about">get in touch</a>.</p>
<p>For now though, I’m just happy to have a proper home on the web again. And that those poor children finally have some shoes.</p>]]></description><link>https://aaronrussell.dev/posts/the-cobblers-children-finally-have-shoes</link><guid isPermaLink="false">d5c0d69d-f471-5ee8-9e1b-b9d0ad4032c2</guid><pubDate>Wed, 07 Jan 2026 00:00:00 GMT</pubDate></item><item><title><![CDATA[Vibe coding]]></title><description><![CDATA[<div><img src="https://aaronrussell.dev/_astro/vibe-coding.CJ6jqZwg_HwQih.webp" srcset="/_astro/vibe-coding.CJ6jqZwg_HwQih.webp 1x, /_astro/vibe-coding.CJ6jqZwg_fM3t7.webp 2x" /></div><p>import { Image } from ‘astro:assets’;
import crossFaderImg from ’./images/2025/cross-fader.jpg’;
import karpathyTweetImg from ’./images/2025/karpathy-vibe-coding.webp’;
import jonasTweetImg from ’./images/2025/jonas-probability.webp’;
import sahilTweetImg from ’./images/2025/sahil-junior-devs.webp’;</p>
<p>I like to think of myself as an “old-skool” dev - though let’s be honest, that’s just a hip way of saying I’m an old dev who’s been writing code since another millennium.</p>
<p>It wasn’t that long ago that professionals working on the web would describe their work using the language of a “craftsman”. We were digital artisans, building pixel-perfect designs, creating delightful digital experiences and shipping hand-crafted code that we were proud of.</p>
<p>These days developers have a new language that captures the zeitgeist of the modern AI engineer: <a href="https://x.com/karpathy/status/1886192184808149383">Vibe coding</a>.</p>
<figure class="not-prose max-w-2xl mx-auto my-8">
  <figcaption>
    <blockquote>
      <p>There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.</p>
      <footer>
        Posted by <a href="https://x.com/karpathy/status/1886192184808149383">@karpathy</a>
      </footer>
    </blockquote>
  </figcaption>
</figure>
<p>Karpathy goes on to describe vibe coding in practice as some kind of slider.</p>
<blockquote>
<p>All the way on the left you have programming as it existed ~3 years ago. All the way on the right you have vibe coding.</p>
<p><a href="https://x.com/karpathy/status/1886193527224517106">@karpathy</a></p>
</blockquote>
<p>I like the slider analogy and see this as a transition from craft to vibes. It’s the modern day equivalent of when textile work shifted from a highly skilled craft to an entirely automated process. The transition didn’t occur overnight, but one <a href="https://en.wikipedia.org/wiki/Spinning_jenny">Spinning Jenny</a> here and a <a href="https://en.wikipedia.org/wiki/Jacquard_machine">Jacquard Loom</a> there, and the next thing you know the craft of textile working is no more. The march of progress is inexorable, and we’ve seen that transition repeat many times through history.</p>
<p>But where exactly are we on this slider? And if I can stretch the analogy even further, if our slider is a DJ’s cross-fader, are we in for a quick chop, or is this going to be a long fade mix?</p>
<h2 id="current-state-of-play">Current state of play</h2>
<p>I recently spent a few days with a company that are cranking that vibe slider as far as they can. They’re building their own agent tooling that will take a boilerplate app and spec, and use AI to create a fully functioning app in 15 to 30 minutes.</p>
<p>It’s really impressive to watch an agent loop over the spec and iterate on the build step by step. What they’ve built so far is great, and one of their generated apps has been out in the wild for a while and is already generating revenue.</p>
<p>But… their generated apps are still what you might consider (and I’m quoting their CEO here), <em>“shitty little apps”</em>. We’re talking single feature apps - a small database, user auth, a handful of routes and a couple of endpoints. For a competent dev, it’s the kind of thing you could bash out in a day or two.</p>
<p>I can pick other bones too. When the agents struggle with a more complex feature, what they’re typically doing is hand-writing the code and wrapping it in a function, so the AI-generated code can implement the feature simply by calling it. That’s a totally pragmatic thing to do, but we’re already edging that slider back towards craft coding.</p>
<p>I think what this company is doing is really cool, and their ambition is far greater than shitty little apps. But I mention all this because I think it reflects the messy reality of vibe coding. Anyone who’s used Cursor for anything more complex than a one-shot Flappy Bird clone will know there’s a lot of effort involved in prompting, providing context, and testing and validating outputs. Without a skilled human pulling the strings, the vibes can get ugly, quickly.</p>
<h2 id="prompting-and-context-challenges">Prompting and context challenges</h2>
<p>Prompting isn’t just about describing a task or instruction. Agents need to know everything about your codebase in quite a lot of detail. They need to know the stack, the key dependencies, and they need to see lots of examples of how you actually want them to code.</p>
<p>And as a project and its architecture evolve, so too must the prompts and fragments of prompts that your agents depend on. Managing all of this is quite a lot of effort. Being able to prompt clearly and effectively is a skill that not every developer is going to be blessed with. But for vibe coding, it’s essential.</p>
<p>There are <a href="https://cursorlist.com/">sites popping up</a> for users to share <code>.cursorrules</code> files for different stacks, and some developer tools and libraries are starting to share <a href="https://supabase.com/docs/guides/getting-started/ai-prompts">prompt fragments in their documentation</a>. This is all great to see and does help the vibe coder find their way in all this.</p>
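For a flavour of what these look like: a <code>.cursorrules</code> file is just plain-text instructions committed to the repo root. This fragment is purely illustrative - the stack and conventions are made up for the example:

```text
# .cursorrules (illustrative example)
This is an Astro + TypeScript + Tailwind project.
- Use TypeScript strict mode; avoid `any`.
- Prefer Astro components; use Svelte only for interactive islands.
- Style with Tailwind utility classes; do not add new CSS files.
- After changes, run `npm run lint` and fix any warnings before finishing.
```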
<p>Context management poses another challenge. In principle, agents work better when they are shown the right context at the right time, and not necessarily all the context all the time. Ideally, IDEs would make this <em>just work™</em>, but in practice this is tricky stuff to get right and another messy aspect of vibe coding that developers need to grapple with.</p>
<h2 id="validation-challenges">Validation challenges</h2>
<p>When we’re writing code ourselves, we inherently validate our work as we go. But with vibe coding, validation becomes a critical challenge that can make or break the entire approach. We can either fly blind, or we can implement robust validation strategies and take advantage of <a href="https://x.com/jonas/status/1879573296913756389">mathematics and probability</a>.</p>
<figure class="not-prose max-w-2xl mx-auto my-8">
  <figcaption>
    <blockquote>
      <p>If an AI agent gets a task (say, building an app) right only 1/10 times, it means that with enough money it can get it right 99.99% of the time.</p>
      <p>For that to work, though, the agent needs good validators that tell it whether it did the right thing.</p>
      <footer>
        Posted by <a href="https://x.com/jonas/status/1879573296913756389">@jonas</a>
      </footer>
    </blockquote>
  </figcaption>
</figure>
<p>The idea here is that if an agent succeeds only 10% of the time, running enough independent attempts still gets you there: with 88 runs, the chance that every single one fails is 0.9<sup>88</sup> ≈ 0.0001, so there’s a better than 99.99% chance at least one of them succeeds.</p>
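The arithmetic can be sketched directly - a minimal illustration, not tied to any particular agent framework:

```typescript
// Probability that at least one of n independent runs succeeds,
// given a per-run success rate p.
function atLeastOneSuccess(p: number, n: number): number {
  return 1 - Math.pow(1 - p, n);
}

// Smallest n for which the all-runs-fail probability (1 - p)^n
// drops below a target epsilon.
function runsNeeded(p: number, epsilon: number): number {
  return Math.ceil(Math.log(epsilon) / Math.log(1 - p));
}

runsNeeded(0.1, 0.0001);    // 88 runs to pass 99.99%
atLeastOneSuccess(0.1, 88); // ≈ 0.9999
```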
<p>All that depends on having good tests and checks in place to determine when an agent gets it right. There’s some low-hanging fruit here: your agents should definitely be seeing your TypeScript compile errors and linter warnings for a start.</p>
<p>And then there’s testing. In a vibe coding maximalist world the AI will write the tests too, but for now this is a role for developers whilst they push the vibe slider as far as they’re comfortable with.</p>
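Put together, the sample-and-validate loop these last few paragraphs describe can be sketched like this. Note that <code>runAgent</code> and <code>validate</code> are stand-ins I’ve made up for illustration - in practice the agent would emit a patch and the validator would run your compiler, linter and test suite:

```typescript
type Attempt = { patch: string; run: number };

// Stand-in for invoking a coding agent; a real one would return a diff.
function runAgent(run: number): Attempt {
  return { patch: run % 10 === 9 ? "passing patch" : "broken patch", run };
}

// Stand-in for the validator: compile, lint and test the patch.
function validate(attempt: Attempt): boolean {
  return attempt.patch === "passing patch";
}

// Keep sampling until a run passes validation, up to a budget.
function firstValidated(maxRuns: number): Attempt | null {
  for (let run = 0; run !== maxRuns; run++) {
    const attempt = runAgent(run);
    if (validate(attempt)) return attempt; // accept the first validated result
  }
  return null; // nothing passed: hand back to the human
}
```

The design point is that the quality of the whole loop is bounded by the quality of <code>validate</code> - weak checks mean accepting slop.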
<h2 id="ui-and-ux-challenges">UI and UX challenges</h2>
<p>If moving the vibe coding slider to the right comes at a cost, by far the greatest, in my view, is the detrimental effect on the quality of design and user experience.</p>
<p>Whilst agents can do a reasonable job throwing components together using component libraries like shadcn, let’s be honest, AI-gen design is always very, very average. We’re talking functional but uninspired layouts, predictable typography and colour schemes that you know you’ve seen a thousand times over. It’s bland, generic, just “meh” kinda stuff.</p>
<p>Sometimes that’s fine. If it’s a private or internal project, we can make these kinds of compromises. But for a real, consumer facing, commercial app, I think the vibe coder falls well short of any craft coder with good design chops.</p>
<p>There’s a missing piece here: not just models that have visual understanding, but models that are specifically trained on decades of examples of great design and can review and improve their own code from a UX point of view.</p>
<p>I’m sure somewhere there are some smart people working on exactly that, but for now I think vibe coders just fundamentally don’t have the tools for doing good design and UX.</p>
<h2 id="the-times-they-are-a-changin">The times, they are a changin’</h2>
<p>The software engineer, as a role, is well and truly in the mix. You can hear the beat - DJ Vibes is moving that slider from left to right. But I don’t think we’re quite as far into the transition as <a href="https://x.com/shl/status/1887484068075274347">others seem to think</a>.</p>
<figure class="not-prose max-w-2xl mx-auto my-8">
  <img src="{sahilTweetImg}" alt="@shl on Junior Devs" width="672" densities="{[1,2]}" quality="{80}" class="image">
  <figcaption class="sr-only" aria-hidden="">
    <blockquote>
      <p>No longer hiring junior or even mid-level software engineers.</p>
      <p>Our tokens per codebase:</p>
      <p>
        Gumroad: 2M<br>
        Flexile: 800K<br>
        Helper: 500K<br>
        Iffy: 200K<br>
        Shortest: 100K
      </p>
      <p>Both Claude 3.5 Sonnet and o3-mini have context windows of 200K tokens, meaning they can now write 100% of our Iffy and Shortest code if prompted well.</p>
      <p>Our new process:</p>
      <p>
        1. Sit and chat about what we need to build, doing research with Deep Research as we go.<br>
        2. Have AI record everything and turn it into a spec.<br>
        3. Clean up the spec, adding any design requirements / other nuances.<br>
        4. Have Devin code it up.<br>
        5. QA, merge, (auto-)deploy to prod.
      </p>
      <footer>
        Posted by <a href="https://x.com/shl/status/1887484068075274347">@shl</a>
      </footer>
    </blockquote>
  </figcaption>
</figure>
<p>Sahil’s post doesn’t really pass the sniff test. Notwithstanding the obvious fact that if you don’t have junior devs, then it isn’t long until you don’t have senior devs either, I simply don’t think the tooling and models are ready for this. But as a vision for what a maxed-out vibe coding environment looks like… maybe?</p>
<p>I still <a href="https://2point0.ai/posts/reports-of-codings-death-are-greatly-exaggerated">don’t believe coding is dead</a>, and I don’t believe junior or mid-level devs are done for, but I do believe this slide to the right is in play. The role of a developer or software engineer, across all levels, is fundamentally changing. Whilst we can dream of sitting around on beanbags, chatting and brainstorming while the agents do the coding, the reality is we’re trading writing code for wrestling with prompts, managing context, and writing many, many tests and validators.</p>
<h2 id="conclusions-any">Conclusions… any?</h2>
<p>The slide from craft to vibes isn’t a clean transition - it’s messy, complex, and full of hard problems that need to be solved - and we’re still very much in the early days of this transition.</p>
<p>The reality is that effective vibe coding today requires a peculiar mix of skills: traditional coding expertise, prompt engineering finesse, and a willingness to test AI outputs far more rigorously than you’d test your own work. It’s less about replacing development skills and more about augmenting them with new tools and approaches.</p>
<p>Just as the industrial revolution was a more gradual transition than the age’s technological advancements suggested - a transition limited by the practical realities of human adaptation - the shift to vibe coding will likely be just as nuanced. The greatest opportunities will come to those who can navigate this gap between AI capability and human readiness.</p>
<p>For now, the slider sits closer to craft than vibes. The future may yet bring Sahil’s vision of developers as spec-writers and AI-wranglers into reality, but the path there isn’t as straight or as short as some might suggest. The beat goes on, the mix continues, and somewhere between craft and vibes, we’ll find our groove.</p>]]></description><link>https://aaronrussell.dev/posts/vibe-coding</link><guid isPermaLink="false">d3d3a82e-8492-5af9-84c2-40104fcbd104</guid><pubDate>Wed, 12 Feb 2025 00:00:00 GMT</pubDate></item><item><title><![CDATA[Announcing Agentflow]]></title><description><![CDATA[<div><img src="https://aaronrussell.dev/_astro/agentflow-banner.CQOrsD8e_1uUJii.webp" srcset="/_astro/agentflow-banner.CQOrsD8e_1uUJii.webp 1x, /_astro/agentflow-banner.CQOrsD8e_1wKJIE.webp 2x" /></div>
<p>I’m very pleased to announce the launch of <a href="https://agentflow.2point0.ai">Agentflow</a> - a powerfully simple AI agent framework.</p>
<p>Agentflow enables developers to create AI workflows that read like documentation but execute like code. By combining natural language with minimal markup, you can compose complex agent behaviours using plain English as your main programming language.</p>
<p>The framework provides all the structure and control you’d expect from traditional programming but is expressed in an intuitive Markdown-based format that puts readability first.</p>
<h2 id="not-another-one-why"><em>Not another one</em>. Why?</h2>
<img src="{notAnotherOneImg.src}" class="image" alt="Not another one!">
<p>The path to Agentflow began with a simple frustration. I am not a Python developer. Frameworks like CrewAI and LangChain dominate the AI agent space, but I found myself wanting similar tooling using languages I was more comfortable with and that better suited my development style.</p>
<p>I’ve also noticed a trend that many agent frameworks are quite complex and enterprise-focused. I didn’t want to set up servers and infrastructure; I just wanted something lightweight and easy that I could hack around with on my own computer.</p>
<p>Not too many moons ago I shared <a href="https://2point0.ai/posts/meet-the-news-shift">my work on Shifts</a>, an Elixir-based agent framework inspired by CrewAI. I definitely learnt some lessons building Shifts, but it didn’t feel like I’d nailed it. So, I put it down to think about the problem again.</p>
<p>There’s an old <a href="https://x.com/karpathy/status/1617979122625712128">tweet from Andrej Karpathy</a> that simply says, “The hottest new programming language is English”. I love that idea, and it got me thinking… What if we could create a framework that embraces that idea and puts the human intent front and centre? Rather than bringing prompts into code, could we somehow make the prompts the code? This thinking led directly to what would become Agentflow.</p>
<h2 id="what-makes-agentflow-different">What makes Agentflow different?</h2>
<p>From the start, Agentflow embraces natural language as its primary input. Instead of constructing agents with code that contains prompts, with Agentflow you write prompts that contain minimal code. This makes the composition of agents simpler, but more importantly, it puts human intent front and centre of the development process.</p>
<p>Workflows are written using a variation of Markdown similar to <a href="https://mdxjs.com/docs/what-is-mdx">MDX</a> (in fact, it uses the MDX parser internally). The result is a format that is as readable as plain text and with enough structure to implement loops, conditional branching and call actions inside the workflow.</p>
<figure class="not-prose max-w-4xl mx-auto my-8">
  <img src="{agentflowExampleImg}" alt="Example workflow in Agentflow" width="1024" densities="{[1,2]}" quality="{80}" class="image">
  <figcaption class="py-2 text-xs text-zinc-400 text-center">Example workflow in Agentflow</figcaption>
</figure>
<p>Staying true to this text-first philosophy, Agentflow operates entirely through a command-line interface - no GUI required. It’s lightweight and easy to install, with no infrastructure or cloud requirements. You can <a href="https://agentflow.2point0.ai/guide/getting-started.html">get everything up and running on your machine</a> in under a minute.</p>
<p>Under the hood, Agentflow is powered by Vercel’s excellent <a href="https://sdk.vercel.ai">AI SDK</a>. This means baked-in support for any AI provider supported by the SDK (that’s pretty much all of them), and agents can use any third-party tools compatible with AI SDK. The result is a framework that is as capable as it is approachable, whether building simple automations or sophisticated AI-powered workflows.</p>
<h2 id="looking-ahead">Looking ahead</h2>
<p>Agentflow is still in its early stages and there’s a lot of work ahead. Like any new software, there will be bugs to fix, rough edges to smooth out, and for now the design and APIs are all subject to change as I learn how people use Agentflow out in the wild.</p>
<p>And that’s exactly why I’m releasing Agentflow now. I want to see how people use it, learn from their experiences, and understand what they need. All feedback is valuable at this stage, whether it’s bug reports or feature requests. While there are clear areas for improvement, such as expanding the collection of first-party tools, or thinking carefully about security when users start sharing workflows and actions, the exact direction will be influenced by users’ needs and experiences.</p>
<p>My hope for Agentflow is simple: I want to make it easier for anyone to start experimenting with and building AI agents. Whether you’re an experienced developer or a weekend-only hacker and tinkerer, Agentflow’s JavaScript foundations and natural language programming should feel intuitive and accessible. I’m excited to see what people build with it, how they extend it, and where they take it.</p>
<h2 id="get-building">Get building</h2>
<p>Ready to try Agentflow? Head over to the <a href="https://agentflow.2point0.ai">Agentflow docs</a> - the <a href="https://agentflow.2point0.ai/guide/getting-started.html">getting started guide</a> will walk you through installation and creating and running your first workflow.</p>
<p>For those interested in reporting bugs, getting involved, or just reading the source code, head over to the <a href="https://github.com/lebrunel/agentflow">Agentflow repository</a> on GitHub.</p>
<p>I’m looking forward to seeing what you build with it!</p>]]></description><link>https://aaronrussell.dev/posts/announcing-agentflow</link><guid isPermaLink="false">31ed4fae-6ce9-5812-9a9a-0451150207ed</guid><pubDate>Tue, 17 Dec 2024 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zed AI review: Getting the balance right in AI code editing]]></title><description><![CDATA[<div><img src="https://aaronrussell.dev/_astro/zed-ai-review.BJNignXu_23WTT.webp" srcset="/_astro/zed-ai-review.BJNignXu_23WTT.webp 1x, /_astro/zed-ai-review.BJNignXu_PD9rT.webp 2x" /></div>
<p>Developers are unwaveringly loyal when it comes to their choice of code editor. Just ask anyone who’s spent the time and energy needed to master vim… that’s a path they will never turn back from.</p>
<p>For developers, switching code editor is a rare - almost epochal - event. My own journey from Dreamweaver (yep!) to Sublime Text to VS Code is one that played out over 20 years. But recently, with AI becoming an essential part of many developers’ tool kits, a new generation of code editors that place AI front and centre of the developer experience are emerging to shake up the status quo.</p>
<p>Whilst Cursor has been grabbing <a href="https://www.cursor.com/blog/series-a">all the headlines</a>, it is far from the only player in town. Last month, <a href="https://zed.dev/blog/zed-ai">Zed announced Zed AI</a>, its own AI-assisted coding tools packaged within an already impressive code editor.</p>
<p>For the past few weeks I have been using Zed as my daily driver. This article won’t be a full review of Zed, but I will share my experiences and impressions of Zed’s AI features after a month of pretty heavy use. Is this one of those epochal moments? Will I be switching for good? Read on to find out.</p>
<h2 id="what-is-zed">What is Zed?</h2>
<p>I first came across Zed earlier this year when I noticed a few developers in the Elixir community speaking highly of it. Launched in 2023, Zed is founded by <a href="https://zed.dev/team">the team</a> who built the GitHub Atom editor. In Zed, they’ve built a fast, modern code editor. It uses its own highly performant rendering engine built with Rust, and Zed has been steadily winning fans by focussing on the developer experience and getting the basics right.</p>
<p>The introduction of AI features builds on Zed’s earlier focus on collaborative chat and pair-programming. I’m unsure if this represents a slight pivot in plan, but either way, Zed AI turns Zed from an interesting-looking editor that didn’t quite have enough to peel me away from VS Code a few months ago into an editor that I definitely want to take for a spin now.</p>
<h2 id="zed-ai-features">Zed AI features</h2>
<p>Zed’s AI offering can broadly be split into three main features:</p>
<ul>
<li>The <strong>assistant panel</strong> for chatting with AI about your code.</li>
<li>The <strong>inline assistant</strong> for using AI to directly transform your code.</li>
<li>Slash style <strong>assistant commands</strong> for injecting additional context into your chats and interactions.</li>
</ul>
<p>Let’s take a look at each:</p>
<h3 id="assistant-panel">Assistant panel</h3>
<p>At a glance, the assistant panel looks and feels like every other AI chat interface you’ve ever used. But it’s definitely not quite like <em>any</em> other chat interface. The entire chat - what Zed calls a “context” - is one continuous text file that feels the same as working with any other text file in Zed. You can move the cursor anywhere, work with multiple cursors, use key bindings, and so the process of adding bits of code to your context feels completely natural and seamless.</p>
<figure class="not-prose max-w-2xl mx-auto my-8">
  <img src="{assistantPanelImg}" alt="Zed Assistant Panel" width="672" densities="{[1,2]}" quality="{80}" class="image">
  <figcaption class="py-2 text-xs text-zinc-400 text-center">Zed Assistant Panel</figcaption>
</figure>
<p>There is a keyboard shortcut (<code>cmd-></code>) for inserting selected text from the main editor into the assistant panel in a fenced code block. There is also a range of slash commands (which we’ll explore a bit further below) for injecting code into your context - for example, an entire file or directory of files. These injected elements appear in the assistant panel as a toggle-able element that can be expanded in full or minimised, so your focus can remain on your prompt.</p>
<p>When you’re ready, hitting <code>cmd-enter</code> submits your context and the AI assistant streams a response to the bottom of the context.</p>
<p>One useful side effect of the entire context being an editable buffer of text is that it’s trivial to “rewrite history” by removing or editing previous parts of the conversation. You can fork an entire context by simply copying and pasting the entire thing into a new context and taking the same base conversation in two different directions.</p>
<p>I’m not aware of any other chat UI working quite like this, and so the Zed team deserve some applause here. They’ve nailed the UX, and I find myself wanting to do other, non-coding tasks in Zed (for example, writing drafts for this blog) purely because I love the experience of flipping to and from the assistant panel so much.</p>
<h3 id="inline-assistant">Inline assistant</h3>
<p>Wherever you can select text - the main editor, the terminal, even the assistant panel - you can invoke the inline assistant and prompt it to directly transform your selected code.</p>
<p>When you invoke the inline assistant (<code>ctrl-enter</code>), if you also have the assistant panel open, that is provided as context in addition to your inline prompt. That’s not totally intuitive, but once you understand that’s the way it works, it’s a powerful way to combine a detailed prompt as context with a simple inline instruction.</p>
<figure class="not-prose max-w-3xl mx-auto my-8">
  <img src="{inlineAssistImg}" alt="Zed Inline Assist" width="768" densities="{[1,2]}" quality="{80}" class="image">
  <figcaption class="py-2 text-xs text-zinc-400 text-center">Zed Inline Assist</figcaption>
</figure>
<p>The transformed code is then presented to you as a diff which you can either accept or reject. You can also use multiple cursors and highlight multiple selections to invoke inline assistant in several places at the same time. I use this all the time for adding doc blocks to functions.</p>
<p>One recipe in the Zed docs for fixing errors is to create a new context in the assistant panel, use the <code>/diagnostics</code> and <code>/terminal</code> commands to inject diagnostics and error logs into the context, select the misbehaving code and use inline assistant to generate a fix. It works wonderfully.</p>
<h3 id="assistant-commands">Assistant commands</h3>
<p>Within the assistant panel, you can hit forward slash (<code>/</code>) to bring up a list of available commands. These commands effectively inject content from your code or elsewhere in the editor into the current context.</p>
<p>For example, the <code>/diagnostics</code> command will inject any output and warnings from the language server; the <code>/file</code> command allows you to select a file or folder of files to insert into the context; and the <code>/fetch</code> command injects the response of an HTTP request. All the built-in commands are listed in the <a href="https://zed.dev/docs/assistant/commands">Zed docs</a>.</p>
<p>If the built-in commands don’t serve all your needs, you can create your own. Well, apparently you can. Documentation is a bit thin here so I haven’t tried to create a custom slash command, but in theory this could be a very powerful way to inject context from other sources, such as documentation, websites and APIs, or other local files.</p>
<p>One command worth special mention is <code>/workflow</code>. If you invoke it and expand the toggle, it reveals a giant system prompt that instructs the LLM to guide the user through a series of stepped changes and respond in a very specific, structured format. The assistant panel then recognises this structure and presents the “steps” with a UI where you can work through the changes step by step.</p>
<figure class="not-prose max-w-2xl mx-auto my-8">
  <img src="{workflowImg}" alt="Workflow command" width="672" densities="{[1,2]}" quality="{80}" class="image">
  <figcaption class="py-2 text-xs text-zinc-400 text-center">Workflow command</figcaption>
</figure>
<p>You can see where they’re going with this. This brings Zed closer to Cursor’s Composer view, where you can work with the assistant iteratively to build out a specific feature or refactor purely through prompting. In my testing, I couldn’t really get <code>/workflow</code> to work reliably. It would produce a multistep workflow which had “step resolution errors”, or some steps would repeat or conflict with previous steps. It’s a promising looking feature, but in practice feels like you need to cross all your fingers and make a sacrifice to the AI gods if you want it to actually work.</p>
<h2 id="what-do-developers-want-from-ai-in-code-editors">What do developers want from AI in code editors?</h2>
<p>I know quite a few experienced software developers who are either irrationally anti-AI, or just apathetic towards AI. On the flip side, many younger developers are attracted to the idea of a prompt-driven, no-code software development process - just see all the <em>“How I used Cursor to build this [INSERT APP/GAME] clone in MINUTES”</em> videos doing the rounds on social media.</p>
<p>Needless to say, I think both these takes are missing the mark. The real gains of using AI in software development come from a more nuanced middle ground.</p>
<p>I use AI every single day for things like:</p>
<ul>
<li>Thinking through a problem, and comparing and evaluating different high-level solutions.</li>
<li>Writing enough code to “show me the way” so that I can pick up a pseudo implementation and use it in a way more specific to my code base.</li>
<li>Fixing errors by sharing the error and the code and letting AI generate the fix.</li>
<li>Chores like writing unit tests and adding code documentation and annotations.</li>
</ul>
<p>What I don’t expect or want AI to do is write every single line of code in my app or product. I can’t see how that could happen without me being so disconnected from the process that I no longer feel I can influence, or be responsible for, the quality (or lack thereof) of the code. Even if the models get better over time, and I expect they will, I feel this is a misguided aspiration that will result in worse developers, worse code and worse products.</p>
<p>So for now, the flashy, whizz-bang Cursor demos don’t really impress me, and don’t reflect what I’m looking for in an AI coding assistant.</p>
<h2 id="zed-improves-the-dev-experience-with-balanced-ai-features">Zed improves the dev experience with balanced AI features</h2>
<p>Which brings us back to Zed. How does it really stack up?</p>
<p>First off, before we even start talking about AI, Zed is a solid, modern and very fast code editor that gets the basics right. Being a new editor, it’s not without bugs, but I didn’t come across anything other than minor UX/UI annoyances. It also lacks support for one or two more niche language syntaxes, but all popular languages are well-supported.</p>
<p>Zed’s AI features are refined and thought through. The UX of the assistant panel feels integrated and seamless. Treating the entire chat “context” as one giant text file feels so natural. And when we need to inject context, the built-in slash commands are simple, intuitive and work perfectly.</p>
<p>When you need AI to directly transform your code, the inline assistant works great. Combining the assistant panel with multiple cursors and inline assistants allows for some pretty powerful and efficient workflows.</p>
<p>One thing Zed doesn’t have is AI tab autocompletion. I know some people love it, but I’ve always felt that’s like Russian Roulette coding.</p>
<p>And this is why I say Zed’s AI features feel thought through. Has tab autocomplete been omitted because they just haven’t built it yet, or did they weigh up how much value these features add to the developer experience first? I get the feeling it’s the latter.</p>
<p>I’m sure they will add more AI over time - the <code>/workflow</code> command feels very much like a work in progress, and it will be hard to resist trying to keep up with the more AI-centric approach of Cursor over time. But I hope and trust Zed keep their focus on developers and the developer experience. At the moment, they’re striking the balance well.</p>
<p>Zed’s hosted AI features are powered by Claude and are currently available for free. I presume that won’t always be the case, but there are no details currently on pricing. However, it’s also possible to add your own API keys for whatever AI providers you fancy, including Anthropic. So if and when it becomes a paid service, you should be able to run with your own keys if preferred. This, at least to me, makes it a more attractive prospect than Cursor where the only option is a subscription.</p>
<h2 id="conclusion">Conclusion</h2>
<p>So, after a month of use, are we witnessing the beginning of a new epoch? Is VS Code now destined to gather dust in my dock whilst Zed gets all the attention? The answer is, <strong>yes</strong>!</p>
<p>I’m genuinely loving using Zed, and the way its AI features are designed and integrated feels just right for <em>me</em> and what I wish from AI at this time.</p>
<p>But… here’s the catch. I’ve compared Zed to Cursor a few times in this article, and I’ve cast a bit of scepticism on the whole <em>“I built Notion in minutes”</em> thing… but I’m not being entirely fair here, because I haven’t tried Cursor to the extent that I’ve been using Zed this past month.</p>
<p>So, my challenge for October is to put Cursor through its paces too. We will see if it wins me over or changes my view on exactly what kind of AI features I think I need. So stay tuned, subscribe to <a href="https://aaronrussell.dev/feed.xml">the feed</a> and <a href="https://x.com/aaronrussell">follow me on X</a> to be the first to read my Cursor review next month.</p>]]></description><link>https://aaronrussell.dev/posts/zed-ai-review</link><guid isPermaLink="false">21bf1c96-57e4-5b72-8db4-7825ce6cc890</guid><pubDate>Fri, 20 Sep 2024 00:00:00 GMT</pubDate></item></channel></rss>