
fix(ui): Chat UI shows correct token count instead of inflated cumulative value#46766

Closed
spacelobster88 wants to merge 1 commit into openclaw:main from spacelobster88:fix/46632-chat-ui-token-count

Conversation

@spacelobster88

Summary

  • Chat UI context notice used session.inputTokens (cumulative across all API calls in a run) instead of session.totalTokens (derived from last API call, reflecting actual context window size)
  • This caused inflated display like "100% context used 647.6k / 200k" when actual usage was 173k/200k (87%)
  • Fixed both renderContextNotice in chat.ts and the /usage slash command in slash-command-executor.ts to use totalTokens
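The core of the change can be sketched as a small helper; the field names follow the PR description, but the actual session shape and function signatures in `chat.ts` may differ:

```typescript
// Minimal sketch of the corrected logic (field names from the PR
// description; the real renderContextNotice code may differ).
interface SessionUsage {
  inputTokens?: number;   // cumulative across all API calls in a run
  totalTokens?: number;   // derived from the last API call (actual context)
  contextTokens?: number; // model context window limit
}

function contextPercent(session: SessionUsage): number | null {
  // Before the fix this read session.inputTokens, inflating the value.
  const used = session.totalTokens ?? 0;
  const limit = session.contextTokens ?? 0;
  return limit > 0 ? Math.round((used / limit) * 100) : null;
}

// The scenario from the bug report: cumulative 647.6k, actual context 173k.
const pct = contextPercent({
  inputTokens: 647_600,
  totalTokens: 173_000,
  contextTokens: 200_000,
});
// pct is 87, matching the 87% the issue reports as correct
```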

Fixes #46632

Test plan

  • Existing browser test updated to include totalTokens in fixture data
  • New test: verifies context notice shows totalTokens (173k) not inputTokens (647.6k) when they differ
  • New test: verifies context notice is hidden when totalTokens is below 85% threshold even if inputTokens would exceed it
  • Slash command /usage test updated to reflect corrected percentage calculation
  • All tests pass (3 browser tests, 13 node tests)
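The 85% threshold behavior those tests assert can be sketched as follows; the helper name and threshold constant are illustrative, not the actual test code:

```typescript
// Hypothetical helper mirroring the "hidden below 85%" behavior the new
// browser tests check. The notice keys off totalTokens, so a large
// cumulative inputTokens no longer forces it to appear.
function shouldShowContextNotice(totalTokens: number, contextTokens: number): boolean {
  const THRESHOLD = 0.85;
  return contextTokens > 0 && totalTokens / contextTokens >= THRESHOLD;
}

console.log(shouldShowContextNotice(173_000, 200_000)); // true  (86.5%)
console.log(shouldShowContextNotice(120_000, 200_000)); // false (60%)
```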

Note

AI-assisted fix

🤖 Generated with Claude Code

…play (openclaw#46632)

The Chat UI was using session.inputTokens (cumulative across all API
calls in a run) to display context usage, causing inflated values like
"647.6k / 200k" when actual context was 173k. The Sessions page
correctly used totalTokens, which is derived from the last API call and
reflects the true current context window size.

Changed both renderContextNotice and the /usage slash command to use
totalTokens for context percentage calculation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@greptile-apps
Contributor

greptile-apps Bot commented Mar 15, 2026

Greptile Summary

This PR fixes a context window display bug (#46632) where the chat UI and /usage slash command were showing inflated token counts by reading session.inputTokens (cumulative across all API calls in a run) instead of session.totalTokens (the actual current context window size from the last API call). It also introduces slash-command-executor.ts as a new dedicated module for slash command logic and adds a comprehensive test suite covering both the browser context-notice rendering and the node-side slash command execution.

Key changes:

  • chat.ts — renderContextNotice now reads session?.totalTokens instead of session?.inputTokens, fixing the "100% / 200k" false positive when cumulative tokens exceeded the context limit but actual context usage did not.
  • slash-command-executor.ts — New file; executeUsage computes the context percentage from totalTokens via const total = session.totalTokens ?? input + output.
  • Tests — New browser tests validate the 85% threshold and the totalTokens-vs-inputTokens distinction; new node tests cover all slash command paths.

Minor UX concern: In the /usage output, input and output remain cumulative counters while total now reflects totalTokens (context window). In a high-turn session (e.g. inputTokens: 647k, totalTokens: 173k) the display will show "Input: 647.6k / Total: 173k", where Total is smaller than Input alone. Renaming the label (e.g. to "Context window") or separating the cumulative total from the context-window value would eliminate the confusion.
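The reviewer's second option (cumulative "Total", separate context percentage) can be sketched like this; the helper name, signature, and label strings are illustrative, not the real executeUsage code:

```typescript
// Sketch of the suggested labeling: "Total" stays a cumulative counter
// while the context percentage is computed from totalTokens.
function formatUsage(
  input: number,        // cumulative input tokens
  output: number,       // cumulative output tokens
  totalTokens: number,  // current context window usage
  contextTokens: number // context window limit
): string {
  const cumulative = input + output; // what "Total" should report
  const pct = Math.round((totalTokens / contextTokens) * 100);
  return `Total: ${cumulative} tokens, Context: ${pct}% of ${contextTokens}`;
}

console.log(formatUsage(1200, 300, 1250, 4000));
// "Total: 1500 tokens, Context: 31% of 4000"
```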

Confidence Score: 4/5

  • Safe to merge; the core logic fix is correct and well-tested, with only a minor label-consistency concern in the /usage output.
  • The fundamental fix (switching from cumulative inputTokens to context-window totalTokens) is straightforward and correct. Three new browser tests and thirteen node tests cover the key scenarios including the divergent-token regression case. The one style concern is that the "Total" label in /usage can read as less than "Input" in high-turn sessions, which is confusing but not a functional error. No breaking changes to existing interfaces are introduced.
  • ui/src/ui/chat/slash-command-executor.ts lines 720–724 — the "Total" display label ambiguity noted above.

Comments Outside Diff (1)

  1. ui/src/ui/chat/slash-command-executor.ts, line 720-724 (link)

    total label is misleading when totalTokens < inputTokens

    After this fix, the /usage output can display a confusing result like:

    Input: 647.6k tokens   ← cumulative across all API calls
    Output: 50k tokens
    Total: 173k tokens     ← current context window (less than Input!)
    Context: 87% of 200k
    

    input and output are still cumulative values while total now reflects the current context window size. When they diverge significantly (the exact bug scenario this PR fixes), the "Total" line will be smaller than "Input" alone, which is unintuitive. Consider either:

    • Renaming the label to something like "Context Window" or "Context Used" to distinguish it from a cumulative total, or
    • Using session.inputTokens + session.outputTokens for the "Total" display line and keeping totalTokens only for the pct context percentage calculation.

    Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

Suggested change for ui/src/ui/chat/slash-command-executor.ts, lines 720–724:

```suggestion
    const input = session.inputTokens ?? 0;
    const output = session.outputTokens ?? 0;
    const total = session.totalTokens ?? input + output;
    const ctx = session.contextTokens ?? 0;
    const pct = ctx > 0 ? Math.round((total / ctx) * 100) : null;
```

Last reviewed commit: 9e9b673


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9e9b6738c2


Comment thread ui/src/ui/views/chat.ts
Comment on lines +1441 to 1444
if (i < tools.length) {
items.push({
kind: "message",
key: messageKey(tools[i], i + history.length),


P1: Keep tool cards hidden when thinking mode is off

The new interleaving loop always appends toolMessages, but the showThinking guard now only filters history messages, so tool call/result cards still render when users disable thinking output. In sessions that emit tool events, this regresses the previous behavior (where tool cards were gated behind showThinking) and can expose verbose internal output in normal chat mode.


Comment on lines +465 to +467
return isSubagentSessionKey(currentSessionKey)
? candidateSessionKey.startsWith(`${currentSessionKey}:subagent:`)
: false;


P2: Handle legacy /kill fallback for main-session roots

The compatibility fallback for missing spawnedBy data only works when the current session is itself a subagent; for main/agent:<id>:main it always returns false. On older gateways (the case called out in the comment), /kill all from a main session will therefore match nothing and fail to abort any subagent runs.


@steipete
Contributor

Closing this as implemented after Codex review.

Current main already implements the token-usage fix this PR proposes, with targeted regression tests for both the chat context notice and /usage. This PR is superseded by the code now on main and by the later cross-referenced PR.

What I checked:

  • Chat context notice now uses current context snapshot: getContextNoticeViewModel reads session.totalTokens and session.contextTokens, and explicitly suppresses stale snapshots via totalTokensFresh === false, so the notice is based on current context usage rather than cumulative input tokens. (ui/src/ui/chat/context-notice.ts:64, 0bef73d151bd)
  • Regression test covers the inflated-token scenario: The test sets inputTokens: 757_300, totalTokens: 190_000, contextTokens: 200_000 and asserts the UI shows 95% context used, 190k / 200k, and does not show 757.3k / 200k. (ui/src/ui/chat/context-notice.test.ts:32, 0bef73d151bd)
  • Chat view renders the shared context notice helper: The active chat view calls renderContextNotice(activeSession, ...), so the corrected total-token logic is wired into the actual Control UI chat surface. (ui/src/ui/views/chat.ts:1215, 0bef73d151bd)
  • /usage now computes context from totalTokens while preserving cumulative total display: executeUsage keeps Total as cumulative input + output, but computes Context percentage from session.totalTokens when fresh. That matches the bug report and also addresses the prior review note that a Total label based on totalTokens could become misleading. (ui/src/ui/chat/slash-command-executor.ts:380, 0bef73d151bd)
  • Node test locks the /usage behavior when cumulative and current-context totals differ: The test case with inputTokens: 1200, outputTokens: 300, totalTokens: 1250, contextTokens: 4000 expects Total: 1.5k and Context: 31% of 4k, proving current main intentionally distinguishes cumulative totals from current context usage. (ui/src/ui/chat/slash-command-executor.node.test.ts:528, 0bef73d151bd)
  • PR timeline shows a later landed replacement: The GitHub timeline for this PR cross-references PR #71462, fix(ui): use current context usage in Control UI, closed on 2026-04-25.
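The freshness guard described in the first check above can be sketched as follows; the snapshot shape and helper name are illustrative, based only on the field names cited in the review notes (the real logic lives in ui/src/ui/chat/context-notice.ts):

```typescript
// Sketch of the stale-snapshot suppression: totalTokensFresh === false
// hides the notice instead of showing outdated numbers.
interface ContextSnapshot {
  totalTokens?: number;
  contextTokens?: number;
  totalTokensFresh?: boolean;
}

function contextNoticeUsage(s: ContextSnapshot): number | null {
  // Explicitly suppress stale snapshots rather than rendering old data.
  if (s.totalTokensFresh === false) return null;
  if (!s.totalTokens || !s.contextTokens) return null;
  return Math.round((s.totalTokens / s.contextTokens) * 100);
}

// Fixture values from the regression test cited above:
console.log(contextNoticeUsage({ totalTokens: 190_000, contextTokens: 200_000 })); // 95
console.log(contextNoticeUsage({ totalTokens: 190_000, contextTokens: 200_000, totalTokensFresh: false })); // null
```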

So I’m closing this as already implemented rather than keeping a duplicate PR open.

Review notes: reviewed against 0bef73d151bd.

@steipete steipete closed this Apr 25, 2026


Successfully merging this pull request may close these issues.

Chat UI shows incorrect token count (647.6k vs actual 173k)
