
fix(cache): improve Anthropic prompt cache hit rate with system split and tool stability#14743

Open
bhagirathsinh-vaghela wants to merge 6 commits into anomalyco:dev from bhagirathsinh-vaghela:prompt-caching

Conversation

@bhagirathsinh-vaghela

@bhagirathsinh-vaghela bhagirathsinh-vaghela commented Feb 23, 2026

Issue for this PR

Closes #5416, #5224
Related: #14065, #5422, #14203

Type of change

  • Bug fix

What does this PR do?

Fixes cross-repo and cross-session Anthropic prompt cache misses. Same-session caching already works (AI SDK places markers correctly). This PR fixes the cases where the prefix changes between repos, sessions, or process restarts — causing full cache writes on every first prompt.

Anthropic hashes tools → system → messages in prefix order. Any change to an earlier block invalidates everything after it. OpenCode has several sources of unnecessary prefix changes.

Terminology (1-indexed): S1/S2 = system block 1/2. M1/M2 = cache marker on S1/S2.
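The invalidation cascade can be modeled as a hash chain: each block's cache key depends on every block before it, so a change to an earlier block misses every downstream entry. A small illustrative sketch (not OpenCode code; the chaining is a simplification of Anthropic's actual scheme):

```typescript
import { createHash } from "node:crypto";

// Model the tools -> system -> messages prefix as a hash chain:
// each block's key incorporates the chain value of everything before it.
function prefixHashes(blocks: string[]): string[] {
  const hashes: string[] = [];
  let chain = "";
  for (const block of blocks) {
    chain = createHash("sha256").update(chain + block).digest("hex");
    hashes.push(chain);
  }
  return hashes;
}

const a = prefixHashes(["tools", "system-stable", "system-dynamic", "messages"]);
const b = prefixHashes(["tools-CHANGED", "system-stable", "system-dynamic", "messages"]);
// Changing the first block changes every downstream prefix hash:
console.log(a.every((h, i) => h !== b[i])); // true
```

This is why the tool-stability fixes matter as much as the system split: the tool block hashes first, so any tool churn throws away the entire cached prefix.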

Always-active fixes:

  1. System prompt is a single block — dynamic content (env, project AGENTS.md) invalidates the stable provider prompt. Split into 2 blocks: stable (provider prompt + global AGENTS.md) first, dynamic (env + project) second.

  2. Bash tool schema includes Instance.directory — changes per-repo, invalidating tool hash. Removed; model gets cwd from the environment block.

  3. Skill tool ordering is nondeterministic (Object.values() on glob results). Sorted by name.
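Fixes 1 and 3 can be sketched as follows. This is an illustrative sketch, not the actual OpenCode code; the names (buildSystemBlocks, Skill) are hypothetical:

```typescript
interface Skill { name: string; description: string }

// Fix 1 (sketch): stable content first, dynamic content second, so the
// stable block's cache entry survives per-repo/per-session changes.
function buildSystemBlocks(opts: {
  providerPrompt: string;
  globalAgentsMd: string;
  envBlock: string;        // cwd, platform, date: changes per repo/session
  projectAgentsMd: string;
}): [string, string] {
  const stable = [opts.providerPrompt, opts.globalAgentsMd].join("\n\n");
  const dynamic = [opts.envBlock, opts.projectAgentsMd].join("\n\n");
  return [stable, dynamic];
}

// Fix 3 (sketch): Object.values() over glob results is order-unstable;
// sorting by name makes the serialized tool list byte-identical across runs.
function sortSkills(skills: Record<string, Skill>): Skill[] {
  return Object.values(skills).sort((a, b) => a.name.localeCompare(b.name));
}
```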

Opt-in fixes (behind env var flags):

  1. Date and instructions change between turns: OPENCODE_EXPERIMENTAL_CACHE_STABILIZATION=1 freezes the date and caches instruction file reads for the process lifetime.

  2. Extended cache TTL: OPENCODE_EXPERIMENTAL_CACHE_1H_TTL=1 sets a 1h TTL on M1 (2x write cost vs 1.25x for the default 5-min TTL). Useful for sessions with idle gaps.
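The freeze-date idea behind the stabilization flag can be sketched like this (illustrative only; the constant and function names are not the actual implementation):

```typescript
// Captured once at module load; reused for every prompt so the system
// prefix stays byte-identical across turns within one process.
const frozenDate = new Date();

// Flag defaults to the env var; parameterized here so it is testable.
function promptDate(
  stabilize = process.env["OPENCODE_EXPERIMENTAL_CACHE_STABILIZATION"] === "1",
): Date {
  return stabilize ? frozenDate : new Date();
}
```

With the flag on, every turn embeds the same date string, so the dynamic system block only changes when something genuinely changed (e.g. a different cwd).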

Commits:

| # | What | Behind flag? |
|---|------|--------------|
| 1 | Cache token audit logging | OPENCODE_CACHE_AUDIT |
| 2 | Stabilize system prefix (freeze date + instructions) | OPENCODE_EXPERIMENTAL_CACHE_STABILIZATION |
| 3 | Split system prompt into 2 blocks | Always active |
| 4 | Remove cwd from bash tool schema | Always active |
| 5 | Sort skill tool ordering | Always active |
| 6 | Optional 1h TTL on M1 | OPENCODE_EXPERIMENTAL_CACHE_1H_TTL |

What this doesn't fix:

  • Per-project skills or MCP tools that differ across repos — the skill tool description changes per project, breaking M1 even on the same machine. This is expected; per-project tools are inherently dynamic.
  • Cross-machine cache sharing (different skill tool descriptions per machine)
  • Plan/build mode switches (TaskTool description changes per mode) — deferred
  • Compaction cache alignment (/compact doesn't utilize prompt caching #10342 — planned follow-up)

Impact beyond Anthropic: The prefix stability fixes also benefit providers with automatic prefix caching (OpenAI, DeepSeek, Gemini, xAI, Groq) — no markers needed, just a stable prefix.

How did you verify your code works?

OPENCODE_CACHE_AUDIT=1 logs [CACHE] hit/miss per LLM call. Tested with Claude Sonnet 4.6 on Anthropic direct API, bun dev, Feb 23 2026.

Cross-repo (different folder, within 5-min TTL — the key improvement):

BEFORE (no fixes):

```
Prompt 1: hit=0.0%   read=0      write=17,786  new=3   (full miss, no reuse)
Prompt 2: hit=99.9%  read=17,786 write=10      new=3
Prompt 3: hit=99.9%  read=17,796 write=14      new=3
```

AFTER (system split + tool stability):

```
Prompt 1: hit=97.6%  read=17,345 write=428     new=3   (block 1 reused, only env block misses)
Prompt 2: hit=99.9%  read=17,773 write=10      new=3
Prompt 3: hit=99.9%  read=17,783 write=14      new=3
```

The first prompt in a new repo goes from 0% → 97.6% cache hit. S1 (tools + provider prompt + global AGENTS.md) is reused across repos. These numbers are based on my setup — S1 is ~17,345 tokens, mostly tool definitions (~12k tokens), with provider prompt (~2k) and global AGENTS.md (~2.8k) making up the rest. Your numbers will differ based on your tool set (MCP servers, skills) and global AGENTS.md size, but the cross-repo miss is eliminated regardless.

Only block 2 (env with different cwd = 428 tokens) is a cache write on the first prompt in a new repo.

To reproduce:

```sh
OPENCODE_CACHE_AUDIT=1 bun dev /tmp/folder-a
# send a prompt, exit
OPENCODE_CACHE_AUDIT=1 bun dev /tmp/folder-b
# send a prompt within 5 minutes
grep '\[CACHE\]' ~/.local/share/opencode/log/dev.log
```

Screenshots / recordings

N/A — no UI changes.

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

@github-actions github-actions bot added the needs:compliance This means the issue will auto-close after 2 hours. label Feb 23, 2026
@github-actions
Contributor

The following comment was made by an LLM; it may be inaccurate:

Potential related PRs found:

  1. fix: split system prompt into 2 messages for proper Anthropic prompt caching #14203 - "fix: split system prompt into 2 messages for proper Anthropic prompt caching"

  2. feat(provider): add provider-specific cache configuration system (significant token usage reduction) #5422 - "feat(provider): add provider-specific cache configuration system (significant token usage reduction)"

  3. feat: add experimental compaction prompt and preserve prefix support #11492 - "feat: add experimental compaction prompt and preserve prefix support"

Note: PR #14203 appears to be the most directly related, as it's specifically about the system prompt splitting strategy that is a key component of PR #14743's improvements.

@github-actions github-actions bot removed the needs:compliance This means the issue will auto-close after 2 hours. label Feb 23, 2026
@github-actions
Contributor

Thanks for updating your PR! It now meets our contributing guidelines. 👍

@bhagirathsinh-vaghela
Author

bhagirathsinh-vaghela commented Feb 23, 2026

Reviewer's guide — supplementary context not covered in the PR description. Uses same terminology (S1/S2, M1/M2) defined there.


AI SDK cache marker mechanics

Ref: Anthropic prompt caching docs | Anthropic engineers' caching best practices (Feb 19 2026): Thariq Shihipar, R. Lance Martin

Max 4 cache_control markers per request. The AI SDK already places markers on the first 2 system blocks and the last 2 conversation turns. That part works — the problem is OpenCode mutating blocks before these markers, cascading hash changes downstream.

Key subtlety: before this PR, OpenCode had a single system block. M1 covered it, but M2 was unused — it fell through to conversation. The system split (commit 3) is what activates both markers, letting S1 (stable) cache independently from S2 (dynamic).

Since M1 covers the tool block too (tools hash before system in Anthropic's ordering), any tool instability (commits 4–5) completely invalidates M1 — the entire cached prefix up to that marker is lost.
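Concretely, after the split the request shape looks roughly like this (field names follow Anthropic's Messages API; contents are placeholders, and the object is illustrative rather than OpenCode's actual serialization):

```typescript
// Anthropic hashes tools first, then system, then messages; M1 therefore
// covers the tool block plus the first (stable) system block.
const request = {
  tools: [] as unknown[], // tool definitions: hashed first, covered by M1
  system: [
    {
      type: "text",
      text: "<stable: provider prompt + global AGENTS.md>",
      cache_control: { type: "ephemeral" }, // M1
    },
    {
      type: "text",
      text: "<dynamic: env + project AGENTS.md>",
      cache_control: { type: "ephemeral" }, // M2
    },
  ],
  messages: [] as unknown[], // AI SDK places the remaining 2 markers on the last 2 turns
};
```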

Related open PRs

Several open PRs address parts of this (#5422, #14203, #10380, #11492). This PR addresses the root causes directly.

@bhagirathsinh-vaghela
Author

bhagirathsinh-vaghela commented Feb 23, 2026

CI failure seems pre-existing — same NotFoundError affecting all PRs since the Windows path fixes landed in dev. Unrelated to this PR. All other checks pass.

@ShanePresley

I pulled this into my fork and it's working beautifully. Unfortunately I only found this after getting a huge bill from Anthropic. Thanks OpenCode!

@TomLucidor

TomLucidor commented Feb 24, 2026

@bhagirathsinh-vaghela could you check this with SLMs like Qwen3 or Nemotron or Kimi-Linear or GPT-OSS? Or providers using the OpenAI-compatible APIs (e.g. OpenRouter)?
Also why are some of the E2E tests failing in OpenCode PR?

Bonus ask: would Speculative Decoding work with this fork? I am looking at this from the lens of vLLM-MLX and MLX-OpenAI-Server (for non-MLX there is vLLM).

@bhagirathsinh-vaghela
Author

bhagirathsinh-vaghela commented Mar 2, 2026

@bhagirathsinh-vaghela could you check this with SLMs like Qwen3 or Nemotron or Kimi-Linear or GPT-OSS? Or providers using the OpenAI-compatible APIs (e.g. OpenRouter)? Also why are some of the E2E tests failing in OpenCode PR?

Bonus ask: would Speculative Decoding work with this fork? I am looking at this from the lens of vLLM-MLX and MLX-OpenAI-Server (for non-MLX there is vLLM).

@TomLucidor

The fixes are provider/model-agnostic — they stabilize the request prefix so it is byte-for-byte identical across calls. Any provider with server-side prefix caching benefits automatically. See my reviewer's guide comment above for the full breakdown of each fix.

The specific model behind the provider does not matter — the changes are purely at the request layer. You can verify with any provider using OPENCODE_CACHE_AUDIT=1 to see hit/miss per call.

E2E failures — pre-existing upstream issue, since fixed. CI is green now.

Speculative decoding — orthogonal. This PR only changes what is sent in the request, not how the server processes it.

```diff
   ` Is directory a git repo: ${project.vcs === "git" ? "yes" : "no"}`,
   ` Platform: ${process.platform}`,
-  ` Today's date: ${new Date().toDateString()}`,
+  ` Today's date: ${date.toDateString()}`,
```

@kamelkace kamelkace Mar 4, 2026


Would it make sense to change the wording here, to hint to the LLM that this isn't a live updating value? Otherwise it might make some weird choices elsewhere for long lived conversations. E.g.

Suggested change:

```diff
-` Today's date: ${date.toDateString()}`,
+` Session started at: ${date.toDateString()}`,
```

Author

@bhagirathsinh-vaghela bhagirathsinh-vaghela Mar 5, 2026


Good point: that wording is better when the date is frozen. I'm keeping "Today's date" in this PR for now since it's what all OpenCode users expect (at least in practice, even if they aren't aware of it), but I'm not against the change if maintainers agree.

Separately, I've been experimenting locally with a progressive disclosure approach — making the env block fully static, instructing the model to fetch cwd, date, platform, etc. via tool calls when needed. Eliminates the block 2 cache write entirely at the cost of an occasional extra round-trip.

Interesting finding from this approach: completely removing the env block tended to result in models not bothering to fetch the info at all and assuming values nondeterministically. A static block with explicit "figure out when needed" instructions worked much better, at least with Anthropic models.
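A sketch of what that static block might look like (this is the local experiment described above, not part of this PR; the wording and tag are hypothetical):

```typescript
// Fully static env block: never interpolates volatile values, so it is
// byte-identical across repos, sessions, and process restarts.
const STATIC_ENV_BLOCK = [
  "<env>",
  "Volatile details (cwd, today's date, platform, git status) are not listed here.",
  "When you need one, fetch it with a tool call (e.g. bash: pwd, date, uname).",
  "Do not assume these values; always check when they matter.",
  "</env>",
].join("\n");
```

The trade-off is an occasional extra round-trip when the model actually needs one of these values.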


Separately, I've been experimenting locally with a progressive disclosure approach — making the env block fully static, instructing the model to fetch cwd, date, platform, etc. via tool calls when needed. [...] A static block with explicit "figure out when needed" instructions worked much better, at least with Anthropic models.

Hmm! I'll have to give that a shot when I patch from this PR later; I'm running locally against one of the Qwen3.5 models, so it'll be interesting data to see how they respond.

alexsmirnov added a commit to alexsmirnov/opencode that referenced this pull request Mar 5, 2026
@fkroener

fkroener commented Mar 8, 2026

Looking forward to seeing less prompt re-processing with opencode. Unfortunately it seems currently this patchset breaks llama.cpp support:

```
[60919] srv operator(): got exception: {"error":{"code":400,"message":"Unable to generate parser for this template. Automatic parser generation failed: \n------------\nWhile executing CallExpression at line 85, column 32 in source:\n...first %}↵ {{- raise_exception('System message must be at the beginnin...\n ^\nError: Jinja Exception: System message must be at the beginning.","type":"invalid_request_error"}}
```

Tested with and without the new autoparser. Maybe I'm using it wrong?

@fkroener

fkroener commented Mar 9, 2026

So, after partially reverting fix(cache): split system prompt into 2 blocks for independent caching, or rather naively ensuring llama.cpp gets just one system prompt (revert.patch) opencode now flies with this patchset using a llama.cpp endpoint (openai api though).

No more "erased invalidated context checkpoint" for all checkpoints and reprocessing of the entire context seemingly whenever I send a new query.

Checkpoint reuse usually happens at around 99%, sometimes dropping to 93%; the lowest was in the 70% range with >60k tokens.

Much appreciated!

Wonder whether the split system message is something @pwilkin would be willing to support, or whether it should be guarded to only be sent to Anthropic endpoints.

@pwilkin

pwilkin commented Mar 9, 2026

Any chance the system message could be moved to the top of the messages list? We could possibly do this for the Anthropic API, but technically the system prompt should be the first message.

@fkroener

fkroener commented Mar 9, 2026

Thanks @pwilkin. Given this is actually coming from the model template (Qwen 3.5) and not the parser:

```jinja
{%- if message.role == "system" %}
    {%- if not loop.first %}
        {{- raise_exception('System message must be at the beginning.') }}
    {%- endif %}
{%- elif message.role == "user" %}
```

this should probably best be handled on OpenCode's end.
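One way to handle it on OpenCode's end would be a guard that collapses multiple system blocks into one before sending to endpoints whose chat templates reject more than one system message. A hedged sketch (simplified types, not OpenCode's actual message model; merging gives up the independent-caching benefit in exchange for compatibility):

```typescript
interface ChatMessage { role: "system" | "user" | "assistant"; content: string }

// Collapse all system messages into a single leading system message,
// preserving the order of the remaining conversation turns.
function mergeSystemMessages(messages: ChatMessage[]): ChatMessage[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  if (system.length <= 1) return messages;
  return [
    { role: "system", content: system.map((m) => m.content).join("\n\n") },
    ...rest,
  ];
}
```

Applied only for non-Anthropic (or strict-template) endpoints, this would keep the two-block split where it pays off and a single system message where the template demands it.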



Development

Successfully merging this pull request may close these issues.

[FEATURE]: Anthropic (and others) caching improvement

6 participants