What Are Claude Code Subagents? (And How to Make the Most of Them)
Claude Code subagents run research and analysis in parallel, in their own context, so your main conversation stays clean. What they are, how to use them, and what breaks.
Claude Code subagents are separate Claude instances that run research, analysis, or tasks in parallel, each in their own context window, while your main conversation stays clean. Most builders don’t know they already have three built-in subagents waiting in Claude Code right now. This guide covers what they are, which ones you already have without setup, when to spawn them, when to skip them, and what breaks when you get the decision wrong.
When I was comparing Claude, ChatGPT, and Gemini Computer Use, I asked Claude to research all three in parallel. It spawned three subagents. Each went deep on one product. While they worked, I kept going.
Same output. A fraction of the time. No code.
That’s what subagents are:
Separate Claude instances that run simultaneously, each focused, each isolated, each returning only the result.
Not a developer feature. Not something that requires setup. A workflow pattern that changes how you research, build, and plan, once you know it's there.
What’s inside:
How Claude Code subagents work — the three mechanics that matter (and the one most builders miss)
Built-in subagents you already have — three waiting in Claude Code right now, zero setup
Parallel vs. sequential subagents — the one decision that shapes every subagent workflow
What to use subagents for — five use cases with real return, no code required
How to create a subagent in 5 minutes — minimum viable config, description field that routes correctly
5 failure modes and fixes — what breaks and how to prevent it before your first run
Starter configs to copy — three minimal setups ready to paste
Hi, I’m Jenny 👋
I build AI systems and tools, then share how I did it. I run the Practical AI Builder program — for people who already use AI and want to build real things with it. Check it out if that sounds like you.
If you’re new to Build to Launch, welcome! Here are some places to start:
How Claude Code Subagents Work
A subagent is a separate Claude instance that the main session spawns to handle a specific task.
That distinction matters. It’s not a different tab. It’s not a memory slot. It’s a full Claude instance with its own 200K-token context window, its own tool access, its own system prompt. When it finishes, it returns one thing: the result. All the intermediate work stays inside its context and never touches your main conversation.
What it is:
A focused worker you dispatch for one job. It reads, researches, or acts. It comes back with a summary. You never see the noise.
What it’s not:
A continuation of your current conversation. The subagent starts fresh. It has no memory of what you’ve been doing, no context from your main session, no inherited instructions unless you explicitly pass them. This is a feature, not a gap. Isolation is what makes subagents fast and clean.
The three mechanics that matter:
No nesting. Subagents cannot spawn their own subagents. The hierarchy stays flat: main session, workers. The flat hierarchy is intentional. It keeps the system predictable.
No inherited skills. If you have custom skills in your .claude/skills/ folder, the subagent doesn’t automatically get them. You declare what the subagent can use in its config.
The description field is what Claude reads when routing. Vague descriptions cause missed delegations or wrong routing. The specifics of how to write a good description field come in the setup section below.
The actual value of a subagent isn’t parallelism. It’s isolation. Every task you offload to a subagent stops that task’s output from flooding your main context. That’s what makes long sessions viable.
A Claude Code subagent is a separate Claude instance with its own 200K-token context, its own tools, and its own system prompt — it returns one result, and all intermediate work stays inside its context.
If the orchestrator/worker model is new to you — AI Agents for Everyone breaks down the broader framework before you start wiring subagents together.
Built-In Claude Code Subagents: What You Already Have
Before building anything custom, open Claude Code and run /agents.
Claude Code ships with three built-in subagent types that run automatically based on task characteristics. Most builders have never opened the list.
Explore
Optimized for fast, read-only search. Runs on Haiku by default, which makes it cheaper and faster than sending the same task through Opus. If you’re exploring a codebase, a folder, or a document set, Explore handles it without burning Opus tokens on something that doesn’t need that model.
Plan
A read-only research agent that runs in plan mode. Useful for outlining and mapping before committing to any action. When Claude presents an implementation strategy before writing code, that’s often Plan running automatically behind the scenes.
General-purpose
All tools, full access, complex multi-step tasks. The fallback when a task involves both exploration and modification.
These exist in your Claude Code right now. Zero setup. The point: subagents aren’t a configuration project. They’re already active.
Claude Code ships with three built-in subagent types — Explore (fast, read-only), Plan (strategy mode), and General-purpose (full access) — and none of them require setup to use.
If you’re still getting oriented with what Claude Code can actually do beyond the basics, the Claude Code project ideas hub shows the full range before you start customizing anything.
For the full map of Claude Code features, tools, and guides in one place — the Claude Code Hub is where I keep everything organized.
Parallel vs. Sequential Subagents: When to Use Each
Every subagent decision comes down to one question: can these tasks start at the same time, or does one need the other’s output to begin?
Use parallel when:
Tasks are independent. Neither task needs results from the other to get started. Each can complete its work without waiting. This is the situation most people imagine when they think about subagents.
Competitive research across five companies. Each company can be investigated simultaneously. No one report depends on another. That’s parallel.
Multiple document types that need similar analysis. Each subagent handles one document, returns a summary, main session synthesizes. Independent.
Use sequential when:
Step B genuinely needs Step A’s output to proceed. A research phase that feeds a spec phase. A spec that feeds a review. The chain is real, not imaginary.
Sequential subagents are still useful because they preserve context isolation at each stage. But they don’t save time the way parallel does. The benefit is clean handoffs, not speed.
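A sequential chain can be expressed in a single prompt that makes the dependency explicit. A sketch, with illustrative steps:

```text
Step 1: Use a research subagent to investigate how our three main
competitors handle onboarding. Return a structured summary.
Step 2: Once that summary is back, use a second subagent to draft a
one-page feature spec based only on the summary.
Step 3: Review the spec in the main session and flag open questions.
```

Each stage still gets its own clean context; the main session only carries the handoffs.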
Skip subagents entirely when:
The task fits in a single prompt. Spawning a subagent to answer a question you could ask directly burns setup time and tokens for no gain. Subagents are not always better. They’re better when isolation or parallelism matter.
You need the subagent to ask clarifying questions mid-run. It can’t. Subagents work until they finish or stop. The task brief needs to be complete before you send it.
The dynamic spawning approach:
Instead of specifying an exact number of subagents, describe what you need and let Claude decide the topology.
“Research these five competitor products. Use however many subagents you need.”
Claude reads the scope and spawns accordingly. For five independent targets, it’ll typically spin five agents. For three targets where two share significant overlap, it might use three or merge some work. You don’t have to count. You describe the job.
This works best for research, competitive analysis, content audits, and anything where the “how many” question is a function of scope you can’t fully predict in advance.
One practical constraint: on the Pro plan, five or more parallel subagents running Opus simultaneously can hit rate limits in under 20 minutes. If you’re running a large parallel task and hitting limits, either reduce the agent count or upgrade to Max.
The rule: parallel when tasks are independent, sequential when step B genuinely needs step A’s output, skip subagents entirely when the task fits in a single prompt.
Token spend scales fast with parallel subagents — Claude Code Token Optimization covers the practical controls for keeping costs manageable as you scale up.
What Can You Use Claude Code Subagents For? (No Code Required)
Five patterns that have real return, and none of them require code.
Competitive and market research
This is the use case with the most immediate return. Assign each competitor or market segment to a separate subagent. Each returns a summary. The main session synthesizes into one output. Domain-specific research agents extend this pattern when you need depth on a single vertical, not breadth across competitors.
The prompt that works:
Research [Company A], [Company B], and [Company C] in parallel using however
many subagents you need. For each: current positioning, recent product changes,
pricing, and one thing they do better than the others. Return a summary for
each, then a synthesis.
Three subagents. Three focused deep-dives. One synthesis. This is how the comparison article research worked: three products, three parallel investigations, the final article built from the combined output. No sequential reading. No tab-switching. No lost context.
Feature spec generation
Three perspectives on a product decision, generated simultaneously: what does the PM angle look like, what does a user experience perspective surface, what are the technical constraints. Three subagents, three documents, main session merges.
This produces a richer spec than any single-threaded conversation because each agent goes deep on its lane without the others bleeding in.
Content and topic audits
When you need to assess multiple topic clusters, past articles, or competitor content categories at once, subagents let you cover ground that would take hours to do sequentially. Each subagent handles one cluster, returns a structured summary, and the main session builds a prioritized content plan from the combined view.
Context isolation for heavy output
Some tasks produce output so voluminous that keeping them in the main conversation makes everything else slower. Log analysis, bulk search results, reference research across many sources: offload these to a subagent. It works through the noise in its own context. It returns the signal. Your main conversation stays navigable.
Background research while you work
Subagents run until they’re done. Assign a research task, switch to other work, come back to results. This isn’t fully asynchronous in the technical sense, but it’s close enough to matter. The agent doesn’t need you watching. You don’t need to babysit it. That time compounds.
How to Create a Claude Code Subagent in 5 Minutes
Two paths. Both require no coding.
Path 1: Use the /agents command (easiest)
1. Open a Claude Code session and type /agents.
2. Go to the Library tab and select “Create new agent.”
3. Choose Personal to save it to ~/.claude/agents/ so it works across all your projects. Choose Project to keep it scoped to one repo.
4. Describe what the agent does in plain language. Claude will draft the full config based on your description and the current project context. Review and adjust the description field in the output — this is the field Claude reads to decide when to route to this agent. Write it as if you’re telling Claude when to use it: what kind of task, what it returns, when to trigger it.
5. Save. The agent is immediately available.
Path 2: Create the Markdown file directly (more control)
Create a .md file inside .claude/agents/ (project-level) or ~/.claude/agents/ (user-level). The filename becomes the agent’s reference name.
Minimum required frontmatter:
---
name: research-agent
description: >
Deep research specialist. Use when the task involves investigating
multiple sources, reading documentation, or compiling information
from several locations. Returns a structured summary.
tools: WebSearch, WebFetch, Read, Grep, Glob
model: claude-sonnet-4-6
---
You are a focused research agent. Your job is to investigate the assigned
topic thoroughly, then return a clean, structured summary. Do not perform
any write operations. Read and report only.
Three fields do the work: name (how it’s referenced), description (Claude’s routing signal), and the system prompt below the frontmatter (how the agent behaves).
The description field is the most important line you’ll write. If it’s vague, Claude won’t know when to use this agent and will handle the task in the main session instead. Make it specific: what kind of task, what it returns, when to trigger it. Compare:
Vague: “Research agent for looking things up”
Specific: “Competitive research specialist. Use when the task involves investigating 2+ companies, products, or market segments simultaneously. Returns a structured summary per target, no file writes.”
One naming caveat: avoid generic names that match Anthropic’s built-in agent keywords. Naming an agent code-reviewer can trigger pre-defined rules that silently override your custom system prompt. Use descriptive but specific names: competitive-research-agent, spec-synthesis-agent, content-audit-agent.
For 100+ ready-made configs across different use cases, VoltAgent’s awesome-claude-code-subagents repo is worth browsing before building from scratch.
Claude Code Subagent Troubleshooting: 5 Failure Modes and Fixes
Most subagent failures aren’t execution failures. They’re setup failures. The agent ran. It just didn’t do the right thing because the instructions weren’t tight enough.
Five failure modes that come up most often.
Dead agents
A subagent with no clear scope and no explicit output format will often burn through its full context on exploratory work and return nothing useful. It found a lot. It didn’t know what to surface.
The fix: define the deliverable before you send it. Not just “research X” but “research X and return a markdown table with these three columns.” Narrow scope plus defined output format prevents most dead agents.
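As a sketch, a task brief with both scope and output format pinned down might look like this (the companies and columns are illustrative):

```text
Research the pricing pages for [Company A] and [Company B]. Return a
markdown table with three columns: plan name, monthly price, and the
one limit most likely to matter for a solo builder. No prose outside
the table.
```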
Context eaten, nothing useful returned
Every subagent has a 200K-token context window. On a poorly scoped task, a subagent will fill that window with exploratory reads and return an empty summary. The window is gone. The main session has nothing.
This is the synthesis failure most tutorials don’t warn you about. Running parallel subagents is easy. Getting useful synthesis from their outputs is where things break.
Fix: the synthesis prompt matters as much as the individual task prompts. When subagents return, tell the main session exactly what to pull from each summary and how to combine them. “Look at these three summaries and extract: the one thing each does better than the others, the pricing comparison, and the gaps they all share.”
Over-spawning
Claude Opus 4.6 tends to spawn more subagents than necessary on complex tasks. On the Pro plan, this hits rate limits fast. Five parallel Opus agents running simultaneously is a reliable way to burn through your rate limit in 20 minutes or less.
Fix: for large parallel runs, set an upper bound in your prompt. “Use no more than three subagents for this.” Or handle it at the config level by pinning cost-sensitive agents to Haiku or Sonnet instead of Opus.
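Pinning a cost-sensitive agent to a cheaper model happens in the frontmatter. A minimal sketch, reusing the model field from the configs in this article (the agent name and tool list are illustrative):

```yaml
---
name: bulk-research-agent
description: >
  Bulk research worker for large parallel runs. Use when five or
  more targets need the same shallow pass. Returns a short summary
  per target.
tools: WebSearch, WebFetch, Read
model: claude-haiku-4-5-20251001
---
```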
Auto-routing unreliable
Defined custom agents don’t always get used automatically. Claude sometimes handles the task in the main session even when a matching subagent exists. The mismatch is usually in the description field.
Fix: explicit invocation is more reliable than hoping for automatic routing. “Use the competitive-research-agent to investigate...” beats leaving the routing decision to Claude. Until automatic routing improves, treat explicit invocation as the default.
Name inference conflicts
Naming an agent with a generic label that matches Anthropic’s internal agent vocabulary can silently override your system prompt. Claude infers expected behavior from the name and applies default rules, ignoring your config.
Fix: use specific, descriptive names that describe the task context, not the role category. product-spec-researcher over researcher. feature-review-agent over code-reviewer.
The honest summary: subagents work best when the task is specific, the output format is defined, and the number is constrained. Broad tasks with vague deliverables are where they fail.
Once you understand why they fail, the starter configs below give you structures that avoid the most common traps from the first run.
Claude Code Subagent Starter Configs
Three minimal configs to copy directly. Each is under 15 lines of frontmatter plus a short system prompt. Start here, adjust for your workflow.
Research agent (read-only, web-enabled)
For competitive research, topic investigation, documentation review. No write access.
---
name: research-agent
description: >
Focused research specialist for competitive analysis, topic deep-dives,
and multi-source investigation. Use when a task involves reading,
searching, or compiling from multiple sources. Returns a structured
summary only — no file writes.
tools: WebSearch, WebFetch, Read, Grep, Glob
model: claude-sonnet-4-6
permissionMode: default
---
You are a research agent. Your job: investigate the assigned topic
thoroughly using available sources. Return a clean structured summary
with these sections: Key Findings, Gaps or Unknowns, Recommended Next
Steps. Do not write to any files. Read and report only.
Synthesis agent (receives summaries, produces final output)
For the main-session step after parallel agents return their summaries.
---
name: synthesis-agent
description: >
Synthesis specialist. Use after parallel research agents have returned
their summaries. Takes multiple research outputs and combines them into
a single structured document. Does not do new research.
tools: Read, Write
model: claude-opus-4-6
permissionMode: default
---
You receive research summaries from multiple agents. Your job: synthesize
them into one coherent document. Identify: what each source says that the
others don't, where they agree, where they conflict, and the most
important combined finding. Do not fetch new sources. Work only from
what you're given.
Explore variant (Haiku, fast, codebase search)
For file and codebase search tasks where speed and cost matter more than depth.
---
name: fast-explore
description: >
Fast file and codebase search. Use for finding files, locating
functions, or mapping a directory structure. Haiku model for speed
and cost efficiency. Read-only, no web access.
tools: Read, Grep, Glob, LS
model: claude-haiku-4-5-20251001
permissionMode: default
---
You are a fast search agent. Map, locate, and list. Return file paths,
function names, or structure summaries. Be concise. One finding per line
where possible.
All three are minimal by design. The system prompt and tools list are what you’ll adjust most. The one thing to keep specific: the description field. That’s Claude’s routing signal, and a vague description causes more silent failures than any other line in the config.
A full subagent template pack with pre-built configs for competitive research, content audit, and product spec workflows is coming in a future resource. For now, these three cover the most common starting patterns.
How to Get Started With Claude Code Subagents Today
Beginner: Open /agents in your next Claude Code session
Look at the three built-ins before building anything custom. Explore, Plan, and General-purpose are already there. Run a research task and ask Claude to “use the explore agent” to see what changes.
Intermediate: Try the dynamic spawning pattern on one real task
Give Claude a multi-part research question and tell it to use however many subagents it needs. Watch what it spawns, what it returns, and how long the synthesis takes. That first run shows you where your current prompting needs tightening.
Advanced: Build the synthesis layer
Once parallel research is running, spend effort on the final synthesis prompt. The quality of what the main session does with the subagent outputs determines the quality of the final result. That’s the lever most builders haven’t pulled yet.
Go Deeper:
Claude Code for Everyone: From Zero to Your First Project — start here if you’re still getting oriented with Claude Code
Best Claude Code Plugins: 10 Tested, 4 Worth Keeping — the other capability layer worth understanding alongside subagents
How to Do Research with AI — the research workflow that subagents plug into
The Computer Use comparison I mentioned at the start took under an hour with three parallel agents. Sequentially, it would have been a full afternoon. That delta is available to you right now, without setup.
If any of this is useful, share it with one builder who’s been doing research one tab at a time.
Frequently Asked Questions
Do Claude Code subagents cost extra?
No separate fee. They run on your existing Claude plan and consume tokens like any other Claude session. The cost multiplier is real though: five parallel subagents running Opus burn five times the tokens of one session. On the Pro plan, large parallel runs hit rate limits faster than most people expect.
Can I use subagents without knowing how to code?
Yes. The /agents command in Claude Code walks you through setup interactively and Claude drafts the config for you. The only thing you write is a description of what the agent does. No code required.
What’s the difference between a subagent and a Claude skill?
Skills are reusable prompt instructions stored in .claude/skills/. They tell Claude how to approach a task. Subagents are separate Claude instances with their own context and tools. Skills shape behavior. Subagents delegate work.
How many subagents can run at the same time?
No hard limit from Anthropic, but practical limits from rate caps apply. On Pro, keep parallel runs to three or four Opus agents for sustained work. Haiku-based agents are cheaper and have more headroom.
What happens when a subagent fails or goes silent?
The main session continues. Claude can tell you what the subagent returned, or you can ask it to retry with a refined task brief. Subagents can’t ask clarifying questions mid-run, so failures usually trace back to an underspecified task at the start.
Can I monitor what a subagent is doing while it runs?
Not in real time. You see the result when it finishes, not the intermediate steps. If you need visibility into the process, ask the subagent to include a “what I checked” section in its output format. That’s the closest thing to a progress log without watching it run.
Can I share my subagent configs with my team?
Yes. Configs stored in .claude/agents/ (project-level) are part of your repo and get shared when you commit them. Configs in ~/.claude/agents/ are personal and local. For team-shared agents, put them in the project directory and commit the .md files like any other project file.
What’s the first task you’d run in parallel if you set up a research subagent today?
— Jenny







