My AI Agent Coding Workflow

AI · 21 January 2026 · 5 minute read

I've been using AI coding agents for a few years, and things have changed drastically over the last 12 months. Here's how I orchestrate them to build solid features, get the best results, and keep my focus on the bigger picture.


Workflow Overview

Right off the bat, here's my current workflow:

  1. Generate a plan with Cursor (Auto)
  2. Use a custom command to ask plan questions (/ask-plan-questions)
  3. Use a custom command to add more granular todos (/add-plan-todos)
  4. Implement with Claude Opus in OpenCode (Desktop)
  5. Test The Results

Send me a message if you'd like the prompts I use for the custom commands.

1. Generate a Plan with Cursor (Auto)

I start by writing out how I want the feature to work. Let's use the example of adding Google Gemini AI integration to an app (meta, I know).

Here's the rough prompt I would use:

Let's add Google Gemini support for text generation

Help me come up with a plan to support Gemini as a provider

Wait, that's too short, isn't it?!

It would be if I stopped there. I already have a few AI integrations in my project, so Cursor's Auto mode is smart enough to give me a basic plan.

It's all about context when it comes to working with AI, for both you and the agent.

I then save the plan to my project as .cursor/plans/plan-name.plan.md. Note that I've added .cursor/plans/ to my .gitignore, as I don't want all those outdated files in source control.
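Keeping the plans out of git is a one-line change. A minimal sketch, using the directory convention above:

```shell
# Append the Cursor plans directory to .gitignore so outdated
# plan files never end up in source control.
echo ".cursor/plans/" >> .gitignore
```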

Cursor's Auto mode lets you spend less money by letting Cursor decide which model to use and how.

You can specify which model you want, but you'll end up paying for premium usage. I did this recently and used all $60 of allowed premium usage in my Pro plan in about 2 days.

Note: An important thing to do if building an integration is to give the model a URL to the documentation (markdown version preferably). For my Gemini integration I pasted a URL to the API docs and let Cursor update the plan accordingly.

2. Custom Command to Ask Plan Questions

This is where custom commands in Cursor really shine.

Alternatively, you can use skills in Claude Code; for this workflow they work in an almost identical way.

/ask-plan-questions

This prompts Cursor to ask me more in-depth questions about the plan so it can better understand the requirements and avoid mistakes.

I'll usually run this command 2 or 3 times until I'm satisfied the plan covers everything I want.

Note: A good sign the model understands what you want is when it starts asking the same, or slightly different, questions you've already answered.
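The author offers their actual prompts on request, so purely as an illustration: in Cursor, a custom command is just a markdown file under .cursor/commands/, and a hypothetical /ask-plan-questions might look something like this (my sketch, not the author's prompt):

```markdown
<!-- .cursor/commands/ask-plan-questions.md (hypothetical sketch) -->
Review the plan file I reference. Identify every ambiguity, missing
requirement, edge case, or unstated assumption you can find, then ask
me numbered clarifying questions. Do not modify the plan until I answer.
```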

3. Custom Command to Add Granular Todos

This is a very important step I only recently started doing, but it makes a massive difference to the final output.

I ask Cursor to add more todos to the plan to prevent misunderstanding.

/add-plan-todos

This lets the model carefully review each todo it's already added and make them more granular. Sometimes this means going from 7 todos to 20.

Trust me, the more todos you have the easier it'll be for your model to implement the plan.

Note: AI models give you what you ask for, not necessarily what you want or need.
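Again purely as an illustration (my sketch, not the author's actual prompt), an /add-plan-todos command file might read:

```markdown
<!-- .cursor/commands/add-plan-todos.md (hypothetical sketch) -->
Go through each todo in the referenced plan. Where a todo hides more
than one change (say, a schema update plus a migration plus a test),
split it into separate, smaller todos. Keep the original ordering.
```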

Once that's done, it's time to jump into the OpenCode desktop app.

4. Implement with Claude Opus in OpenCode (Desktop)

Why do I use OpenCode desktop instead of Claude CLI? Or even OpenCode CLI? Honestly, I just prefer point and click UI. I've used GitHub Desktop for the longest time and have never missed using the terminal.

It's also nice to have a UI listing all your projects in OpenCode, as well as nicely displayed content from your current AI chat.

In OpenCode I go to the project I'm currently building and give it a prompt such as:

Implement everything from .cursor/plans/plan-name.plan.md

Then I let it go to work.

The nice thing about using OpenCode desktop is it sends you a notification when it's done.

Note: In OpenCode you can use the @ symbol to link to a file, but as all plans are ignored via .gitignore OpenCode won't be able to link to the file (it can still read it fine though).

5. Test The Results

This is the manual bit you still need to do (I know, who even does that anymore).

You can either start reviewing the files or jump straight into manually testing.

I switch the order depending on what I'm doing, but I always do both.

This is usually a good time to make sure there's sufficient test coverage in place and, if not, get Claude to generate some.
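When coverage is thin, a follow-up prompt along these lines (my wording, not the author's) usually does the job:

```markdown
Review the files changed while implementing .cursor/plans/plan-name.plan.md
and add unit tests for any branches or error paths that aren't covered yet.
```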

Closing Thoughts

This workflow isn’t about chasing shiny tools; it’s about reducing the back and forth. The more effort you put into planning, questioning, and breaking work down upfront, the less you’re asking the model to guess what you want.

Treat AI like a very fast, very literal collaborator. Give it clear context, granular tasks, and room to ask questions, and it’ll consistently deliver strong results. Skip those steps, and you’ll spend more time fixing than building.

If there’s one takeaway: planning is no longer overhead; it’s the leverage you need for a better product.