fix(webchat): use totalTokens for context utilization display #51536
VipinSinghChauhan wants to merge 6 commits into openclaw:main
Conversation
Fixes openclaw#51507

The context warning banner and /status command were reading session.inputTokens to compute the context fill %, but inputTokens accumulates input tokens across ALL API sub-calls in a run (tool-use loops, retries, compaction). With models like MiniMax-M2.5 that chain many sub-calls, this number far exceeds the actual context size, causing premature 100% warnings and a blocked chat input.

The correct field is session.totalTokens, which the backend already derives from lastCallUsage (the final API call only) — the true prompt-size snapshot. This is the same value the TUI uses, which explains why the TUI showed accurate context while WebChat did not.

Changes:
- ui/views/chat.ts: renderContextNotice uses totalTokens (not inputTokens)
- ui/chat/slash-command-executor.ts: /status Context % uses totalTokens
- ui/views/chat.browser.test.ts: update fixture to totalTokens so the existing context notice test continues to pass
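The corrected calculation can be sketched as follows. This is illustrative, not the actual WebChat source: the `SessionUsage` shape and `contextPercent` helper are assumptions, with field names taken from the review fixture and the PR description.

```typescript
// Sketch of the corrected context-utilization calculation (hypothetical types).
interface SessionUsage {
  inputTokens: number;   // accumulates across ALL API sub-calls in a run
  totalTokens: number;   // derived from lastCallUsage: the final call only
  contextTokens: number; // model context window size
}

function contextPercent(session: SessionUsage): number {
  // Use totalTokens (true prompt-size snapshot), not inputTokens, which
  // over-counts under tool-use loops, retries, and compaction.
  return Math.round((session.totalTokens / session.contextTokens) * 100);
}

// With the fixture values quoted in the review comment:
const session: SessionUsage = { inputTokens: 1200, totalTokens: 1500, contextTokens: 4000 };
console.log(contextPercent(session)); // 38
```

Using `inputTokens` here instead would report 30% for the same fixture, and far more than 100% on long multi-call runs, which is the reported bug.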
Greptile Summary

This PR fixes a context-utilization display bug in WebChat where the context fill % was computed from session.inputTokens instead of session.totalTokens.

Changes:
Confidence Score: 3/5
This is a comment left during a code review.
Path: ui/src/ui/chat/slash-command-executor.ts
Line: 310-311
Comment:
**Existing node test will break with new percentage**
`slash-command-executor.node.test.ts` (line ~325) was not updated and now produces the wrong expected value. The fixture already has `inputTokens: 1200`, `totalTokens: 1500`, and `contextTokens: 4000`. The old code computed context % from `inputTokens`: `1200 / 4000 = 30%`, which matches what the test asserts. With this change, `contextUsed = session.totalTokens = 1500`, so the new result is `Math.round(1500 / 4000 * 100) = Math.round(37.5) = 38%` — but the test still expects `Context: **30%** of 4k`.
The fix is to update the expected string in `slash-command-executor.node.test.ts`:
```
"**Session Usage**\nInput: **1.2k** tokens\nOutput: **300** tokens\nTotal: **1.5k** tokens\nContext: **38%** of 4k\nModel: `gpt-4.1-mini`"
```
(or align the fixture's `totalTokens` to a value that yields the same 30% to minimize test delta).
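The before/after arithmetic in this comment can be checked directly. Fixture values are the ones quoted from the node test; the variable names are illustrative:

```typescript
// Same fixture, different numerator: why the expected /status string changes.
const fixture = { inputTokens: 1200, totalTokens: 1500, contextTokens: 4000 };

// Old code: context % from accumulated inputTokens.
const oldPercent = Math.round((fixture.inputTokens / fixture.contextTokens) * 100);
// New code: context % from totalTokens (last-call snapshot).
const newPercent = Math.round((fixture.totalTokens / fixture.contextTokens) * 100);

console.log(oldPercent, newPercent); // 30 38
```

So the test's expected `Context: **30%** of 4k` must become `**38%**`, or the fixture's `totalTokens` must be set to 1200 to keep the 30% assertion.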
Last reviewed commit: "fix(webchat): use to..."
…kens

The /usage command now uses totalTokens (1500) instead of inputTokens (1200) for context %, so the expected output changes from 30% to 38% (1500 / 4000 = 37.5% → rounds to 38%).
Closing this as implemented after Codex review.

What I checked:

So I'm closing this as already implemented rather than keeping a duplicate issue open.

Review notes: reviewed against 42514156e03c; fix evidence: commit 42514156e03c.
Summary
Describe the problem and fix in 2–5 bullets:
Change Type (select all)
Scope (select all touched areas)
Linked Issue/PR
User-visible / Behavior Changes
List user-visible changes (including defaults/config).
If none, write None.

Security Impact (required)
Answer Yes/No for each item; if Yes, explain risk + mitigation:

Repro + Verification
Environment
Steps
Expected
Actual
Evidence
Attach at least one:
Human Verification (required)
What you personally verified (not just CI), and how:
Review Conversations
If a bot review conversation is addressed by this PR, resolve that conversation yourself. Do not leave bot review conversation cleanup for maintainers.
Compatibility / Migration
Answer Yes/No for each item.

Failure Recovery (if this breaks)
Risks and Mitigations
List only real risks for this PR. Add/remove entries as needed. If none, write None.