fix(ui): Chat UI shows correct token count instead of inflated cumulative value #46766
spacelobster88 wants to merge 1 commit into openclaw:main
Conversation
…play (openclaw#46632)

The Chat UI was using session.inputTokens (cumulative across all API calls in a run) to display context usage, causing inflated values like "647.6k / 200k" when the actual context was 173k. The Sessions page correctly used totalTokens, which is derived from the last API call and reflects the true current context window size.

Changed both renderContextNotice and the /usage slash command to use totalTokens for the context percentage calculation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
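The corrected calculation can be sketched as follows. This is a minimal illustration, not the actual openclaw code: the `SessionUsage` shape and the `contextUsagePercent` name are assumptions; only `inputTokens`, `totalTokens`, and the figures 647.6k / 173k / 200k come from the PR.

```typescript
// Hypothetical shape standing in for the real session object.
interface SessionUsage {
  inputTokens: number;   // cumulative input tokens across all API calls in a run
  totalTokens: number;   // tokens in the last API call = current context size
  contextWindow: number; // model context window, e.g. 200_000
}

// The fix in miniature: derive the percentage from totalTokens,
// never from the cumulative inputTokens.
function contextUsagePercent(session: SessionUsage): number {
  return (session.totalTokens / session.contextWindow) * 100;
}

// With the numbers from the bug report: 173k of a 200k window,
// even though 647.6k input tokens accumulated over the run.
const session: SessionUsage = {
  inputTokens: 647_600,
  totalTokens: 173_000,
  contextWindow: 200_000,
};
console.log(contextUsagePercent(session).toFixed(1)); // "86.5"
```

Using `inputTokens` here is what produced the impossible "647.6k / 200k" display, since cumulative input grows without bound over a long run.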
Greptile Summary

This PR fixes a context window display bug (#46632) where the chat UI and the /usage command showed an inflated cumulative token count instead of the current context size.

Key changes: renderContextNotice in chat.ts and the /usage slash command in slash-command-executor.ts now use totalTokens.

Minor UX concern: …

Confidence Score: 4/5
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9e9b6738c2
```ts
if (i < tools.length) {
  items.push({
    kind: "message",
    key: messageKey(tools[i], i + history.length),
```
Keep tool cards hidden when thinking mode is off
The new interleaving loop always appends toolMessages, but the showThinking guard now only filters history messages, so tool call/result cards still render when users disable thinking output. In sessions that emit tool events, this regresses the previous behavior (where tool cards were gated behind showThinking) and can expose verbose internal output in normal chat mode.
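The regression Codex describes can be sketched like this. The types and the `interleave` helper are illustrative stand-ins for the real chat model, not the actual openclaw code; the point is only that the `showThinking` guard must cover tool cards as well as history messages.

```typescript
// Hypothetical item shape; fromTool marks tool call/result cards.
type ChatItem = { kind: "message"; key: string; fromTool: boolean };

function interleave(
  history: ChatItem[],
  tools: ChatItem[],
  showThinking: boolean,
): ChatItem[] {
  const items: ChatItem[] = [...history];
  // Gate tool cards behind showThinking too, so verbose tool
  // call/result output stays hidden in normal chat mode.
  if (showThinking) {
    items.push(...tools);
  }
  return items;
}
```

In the PR as written, the loop appends the tool items unconditionally, so sessions that emit tool events render those cards even with thinking output disabled.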
```ts
return isSubagentSessionKey(currentSessionKey)
  ? candidateSessionKey.startsWith(`${currentSessionKey}:subagent:`)
  : false;
```
Handle legacy /kill fallback for main-session roots
The compatibility fallback for missing spawnedBy data only works when the current session is itself a subagent; for main sessions such as agent:&lt;id&gt;:main it always returns false. On older gateways (the case called out in the comment), /kill all from a main session will therefore match nothing and fail to abort any subagent runs.
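One possible shape for a broader fallback is sketched below. The `legacyOwnsSubagent` name and the `:main` suffix handling are assumptions for illustration; the `isSubagentSessionKey` check and the `:subagent:` key scheme come from the diff above.

```typescript
// Assumed key scheme: subagent keys embed ":subagent:".
function isSubagentSessionKey(key: string): boolean {
  return key.includes(":subagent:");
}

// Legacy fallback when spawnedBy is missing: match subagents by key
// prefix from both subagent sessions and main-session roots, so
// /kill all also works from a main session on older gateways.
function legacyOwnsSubagent(
  currentSessionKey: string,
  candidateSessionKey: string,
): boolean {
  if (isSubagentSessionKey(currentSessionKey)) {
    // Original behavior: subagents spawned by this subagent session.
    return candidateSessionKey.startsWith(`${currentSessionKey}:subagent:`);
  }
  // Main sessions like "agent:<id>:main" previously fell through to
  // false; instead, match their subagents by the session root prefix.
  const root = currentSessionKey.replace(/:main$/, "");
  return candidateSessionKey.startsWith(`${root}:subagent:`);
}
```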
Closing this as implemented after Codex review.

What I checked: …

So I'm closing this as already implemented rather than keeping a duplicate issue open. Review notes: reviewed against 0bef73d151bd.
Summary
- The chat UI displayed context usage from `session.inputTokens` (cumulative across all API calls in a run) instead of `session.totalTokens` (derived from the last API call, reflecting the actual context window size)
- Changed `renderContextNotice` in `chat.ts` and the `/usage` slash command in `slash-command-executor.ts` to use `totalTokens`

Fixes #46632
Test plan
- Added `totalTokens` in fixture data
- Context notice shows `totalTokens` (173k), not `inputTokens` (647.6k), when they differ
- No notice renders when `totalTokens` is below the 85% threshold even if `inputTokens` would exceed it
- `/usage` test updated to reflect the corrected percentage calculation

Note
AI-assisted fix
🤖 Generated with Claude Code
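The test plan's threshold cases reduce to a single predicate on `totalTokens`. A sketch, assuming a `shouldShowContextNotice` helper (a hypothetical name, not the real test harness) and the 85% threshold stated in the test plan:

```typescript
// Whether the context notice should render: compare the current
// context size (totalTokens) against the window, ignoring the
// cumulative inputTokens entirely.
function shouldShowContextNotice(
  totalTokens: number,
  contextWindow: number,
  threshold = 0.85,
): boolean {
  return totalTokens / contextWindow >= threshold;
}

// 173k / 200k = 86.5%, above the threshold: notice shown.
console.log(shouldShowContextNotice(173_000, 200_000)); // true
// 160k / 200k = 80%, below the threshold: no notice,
// regardless of how large inputTokens has grown.
console.log(shouldShowContextNotice(160_000, 200_000)); // false
```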