Description
In the Dashboard v2 chat view, the per-message metadata bar shows incorrect context and cache statistics:
- Context % always shows 0% — even after extended conversations with many tool calls (50k+ tokens used out of the 1M context), the ctx% indicator stays at 0% throughout the entire session.
- Cache hit rate always shows 100% — from the very first message, the cache indicator shows 100%, which is not possible for the initial request (there is nothing to cache-read yet).
- These incorrect values persist through the entire session lifecycle, including when compaction eventually triggers.
Expected Behavior
- Context % should reflect the actual `totalTokens / contextTokens` ratio for the most recent API call
- Cache hit rate should start lower on first messages and increase as prompt caching warms up
Environment
- OpenClaw version: 2026.3.13
- Model: anthropic/claude-opus-4-6 (1M context)
- Channel: Telegram
- Dashboard: Control UI v2 (dashboard-v2 feature from v2026.3.12)
Root Cause Analysis
Looking at the `Bd()` function in `dist/control-ui/assets/index-UvgeZ3yV.js`:

```js
// contextPercent calculation:
let l = t && n > 0 ? Math.min(Math.round(n / t * 100), 100) : null;
```
The function accumulates input tokens across all assistant messages (`n += l.input ?? l.inputTokens ?? 0`), then divides the running total by `contextTokens`. A cumulative sum divided by the context window is a meaningless number. For per-message display, it should use the individual message's input tokens, not the running total.
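A minimal sketch of the corrected per-message calculation, assuming each assistant message exposes usage fields named `input` / `inputTokens` as seen in the minified bundle (field names inferred from the bundle, not confirmed against the source):

```javascript
// Per-message context %: divide this message's own input tokens by the
// context window, instead of a running total across all messages.
function messageContextPercent(usage, contextTokens) {
  const input = usage?.input ?? usage?.inputTokens ?? 0;
  if (!contextTokens || input <= 0) return null; // missing data -> show no badge
  return Math.min(Math.round((input / contextTokens) * 100), 100);
}
```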
For the chat-level context indicator, the last message's `usage.input` (or the session's `totalTokens` from the session list API) divided by `contextTokens` would be the correct calculation.
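The chat-level variant is the same division, sketched here under the assumption that the session list API exposes a `totalTokens` count for the latest call (a name taken from this report, not verified against the actual API):

```javascript
// Chat-level context %: latest call's total tokens over the context window.
function sessionContextPercent(totalTokens, contextTokens) {
  return contextTokens > 0 && totalTokens > 0
    ? Math.min(Math.round((totalTokens / contextTokens) * 100), 100)
    : null;
}
```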
The 0% display likely occurs when `usage.input` fields are missing or zero (falling through to `null`), and 100% cache occurs because `cacheRead / (cacheRead + input)` rounds to 100% when cached tokens vastly outnumber new tokens.
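The rounding behavior can be demonstrated with a small sketch of the cache-rate math described above, plus a guard for the first request (`cacheRead` / `input` are hypothetical field names inferred from the bundle):

```javascript
// Cache hit rate: fraction of prompt tokens served from cache. Once the
// cache is warm, cacheRead dwarfs input and the percentage rounds to 100.
function cacheHitPercent(usage) {
  const cacheRead = usage?.cacheRead ?? 0;
  const input = usage?.input ?? 0;
  if (cacheRead + input === 0) return null; // first request: nothing measured yet
  return Math.round((cacheRead / (cacheRead + input)) * 100);
}
```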
Screenshots
Context stays at 0% through entire conversation, then shows "API rate limit reached" and compaction — all while context was actually at ~6%.