
fix(control-ui): use actual context size instead of cumulative input … #45913

Closed
neosun100 wants to merge 1 commit into openclaw:main from neosun100:fix/control-ui-context-percent

Conversation

@neosun100

…tokens for context% display

The context usage banner and per-message context% indicators both used cumulative input tokens (sum of all turns) divided by the context window size, which produced wildly inflated percentages (e.g. 89% when actual usage was 15%). This happened because each LLM turn includes the full context as input, so summing across turns double/triple/N-counts the context.

Bug fix (chat.ts): renderContextNotice now walks the message list backwards to find the last assistant message with usage data and uses its input/inputTokens value — which represents the true current context size — instead of the session-level cumulative inputTokens.

Bug fix (grouped-render.ts): extractGroupMeta now tracks lastInput (the most recent turn's input tokens) separately from the cumulative input sum, and uses it for contextPercent.
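
For illustration, here is a minimal sketch of the backward scan described above (TypeScript; the role and usage.input/usage.inputTokens field names are taken from this description and may differ from the real chat.ts types):

```ts
// Sketch only: walk from the newest message to the oldest and return the
// first assistant turn that reports input tokens. Field names are assumed
// from the description above; the real chat.ts shapes may differ.
type UsageLike = { input?: number; inputTokens?: number };
type MessageLike = { role?: string; usage?: UsageLike };

function lastAssistantInputTokens(messages: MessageLike[]): number | null {
  for (let i = messages.length - 1; i >= 0; i--) {
    const msg = messages[i];
    if (msg?.role !== "assistant") continue;
    const input = msg.usage?.input ?? msg.usage?.inputTokens ?? 0;
    if (input > 0) return input; // true context size of the latest turn
  }
  return null; // caller falls back to the cumulative session.inputTokens
}
```

The percentage is then the latest turn's input divided by the context window, rather than a sum across turns, which is what removes the inflation.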

Enhancement: The context banner is now always visible with four color tiers instead of only appearing at ≥85% (a tier-selection sketch follows the list):

  • <50%: green (info icon) — plenty of room
  • 50–75%: blue (info icon) — moderate
  • 75–85%: amber (info icon) — getting full
  • ≥85%: red gradient (warning icon) — danger zone
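
A hedged sketch of the tier selection (thresholds from the list above; the names, colors, and return shape are illustrative, not the PR's actual code):

```ts
// Illustrative tier mapping for the always-visible banner. Thresholds come
// from the list above; names and return shape are assumptions.
type Tier = { color: "green" | "blue" | "amber" | "red"; icon: "info" | "warning" };

function contextTier(ratio: number): Tier {
  if (ratio < 0.5) return { color: "green", icon: "info" };  // plenty of room
  if (ratio < 0.75) return { color: "blue", icon: "info" };  // moderate
  if (ratio < 0.85) return { color: "amber", icon: "info" }; // getting full
  return { color: "red", icon: "warning" };                  // danger zone
}
```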

Fixes #41503

Summary

Describe the problem and fix in 2–5 bullets:

  • Problem:
  • Why it matters:
  • What changed:
  • What did NOT change (scope boundary):

Change Type (select all)

  • Bug fix
  • Feature
  • Refactor
  • Docs
  • Security hardening
  • Chore/infra

Scope (select all touched areas)

  • Gateway / orchestration
  • Skills / tool execution
  • Auth / tokens
  • Memory / storage
  • Integrations
  • API / contracts
  • UI / DX
  • CI/CD / infra

Linked Issue/PR

  • Closes #
  • Related #

User-visible / Behavior Changes

List user-visible changes (including defaults/config).
If none, write None.

Security Impact (required)

  • New permissions/capabilities? (Yes/No)
  • Secrets/tokens handling changed? (Yes/No)
  • New/changed network calls? (Yes/No)
  • Command/tool execution surface changed? (Yes/No)
  • Data access scope changed? (Yes/No)
  • If any Yes, explain risk + mitigation:

Repro + Verification

Environment

  • OS:
  • Runtime/container:
  • Model/provider:
  • Integration/channel (if any):
  • Relevant config (redacted):

Steps

Expected

Actual

Evidence

Attach at least one:

  • Failing test/log before + passing after
  • Trace/log snippets
  • Screenshot/recording
  • Perf numbers (if relevant)

Human Verification (required)

What you personally verified (not just CI), and how:

  • Verified scenarios:
  • Edge cases checked:
  • What you did not verify:

Review Conversations

  • I replied to or resolved every bot review conversation I addressed in this PR.
  • I left unresolved only the conversations that still need reviewer or maintainer judgment.

If a bot review conversation is addressed by this PR, resolve that conversation yourself. Do not leave bot review conversation cleanup for maintainers.

Compatibility / Migration

  • Backward compatible? (Yes/No)
  • Config/env changes? (Yes/No)
  • Migration needed? (Yes/No)
  • If yes, exact upgrade steps:

Failure Recovery (if this breaks)

  • How to disable/revert this change quickly:
  • Files/config to restore:
  • Known bad symptoms reviewers should watch for:

Risks and Mitigations

List only real risks for this PR. Add/remove entries as needed. If none, write None.

  • Risk:
    • Mitigation:

@greptile-apps
Contributor

greptile-apps Bot commented Mar 14, 2026

Greptile Summary

This PR fixes a real and impactful bug where context-usage indicators displayed wildly inflated percentages by summing input tokens across all conversation turns instead of using the current turn's actual context size. Both renderContextNotice in chat.ts and extractGroupMeta in grouped-render.ts are corrected to use the last assistant message's input/inputTokens value (the true context size for that turn) rather than a cumulative sum that grows without bound.

Changes:

  • chat.ts: renderContextNotice now accepts an optional messages array and walks it backwards to find the last assistant message with non-zero usage data, falling back to session.inputTokens only when no per-message data is available.
  • grouped-render.ts: extractGroupMeta introduces lastInput — the last non-zero per-turn input token count within a group — and uses it exclusively for contextPercent, while the cumulative input is preserved for the per-message token display.
  • Enhancement: The context banner is now always visible (previously only appeared at ≥85%) with four progressive color tiers (green/blue/amber/red), including a different icon for the danger zone.
  • Style issue: The bgOpacity formula has a discontinuity at the 75% boundary — opacity jumps from 0.06 to 0.14 exactly when the color also switches from blue to amber, producing a double visual jump at this threshold.

Confidence Score: 4/5

  • Safe to merge — the bug fix logic is correct and the only issue is a minor visual discontinuity in the background opacity calculation.
  • The core logic changes are sound: the backwards message walk correctly identifies the last assistant turn's input tokens, the type cast is safe at runtime due to optional chaining, and lastInput in grouped-render.ts is correctly scoped to replace only contextPercent while keeping cumulative tokens for the token display. The one deduction is the bgOpacity jump at the 75% threshold which causes a noticeable double visual step (opacity and color change simultaneously).
  • No files require special attention.

Last reviewed commit: da48867

Comment thread ui/src/ui/views/chat.ts (line 317)
const color = `rgb(${r}, ${g}, ${b})`;
const bgOpacity = 0.08 + 0.08 * t;
const bgOpacity = ratio < 0.75 ? 0.06 : 0.08 + 0.08 * Math.min(ratio, 1);

**Background opacity jumps discontinuously at the 75% boundary**

At `ratio = 0.749`, `bgOpacity = 0.06`. At `ratio = 0.750`, `bgOpacity = 0.08 + 0.08 * 0.75 = 0.14` — a 2.33× jump at the exact moment the color also transitions from blue to amber. This will produce a jarring visual step rather than a smooth progression.

The green→blue boundary (at 50%) is smooth since both zones share `0.06`. The amber→red boundary (at 85%) is smooth since both use the same formula. Only the blue→amber (75%) transition has this discontinuity.

Consider using the same formula from 50% onwards, or a lerp within each tier, e.g.:

Suggested change
- `const bgOpacity = ratio < 0.75 ? 0.06 : 0.08 + 0.08 * Math.min(ratio, 1);`
+ `const bgOpacity = ratio < 0.5 ? 0.06 : Math.min(0.06 + 0.10 * ((ratio - 0.5) / 0.5), 0.16);`
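
To make the discontinuity concrete, here is a small standalone check of the two formulas around the 75% boundary (values match the numbers quoted above; this snippet is illustrative and not part of the PR):

```ts
// Evaluate the current and suggested bgOpacity formulas just below and at 0.75.
const current = (ratio: number) =>
  ratio < 0.75 ? 0.06 : 0.08 + 0.08 * Math.min(ratio, 1);
const suggested = (ratio: number) =>
  ratio < 0.5 ? 0.06 : Math.min(0.06 + 0.10 * ((ratio - 0.5) / 0.5), 0.16);

for (const r of [0.749, 0.75]) {
  console.log(r, current(r).toFixed(3), suggested(r).toFixed(3));
}
// current:   0.749 -> 0.060, 0.750 -> 0.140 (the 2.33x jump)
// suggested: 0.749 -> 0.110, 0.750 -> 0.110 (continuous)
```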


@chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: da48867d91


Comment thread ui/src/ui/views/chat.ts
@@ -1163,7 +1212,7 @@ export function renderChat(props: ChatProps) {

${renderFallbackIndicator(props.fallbackStatus)}
${renderCompactionIndicator(props.compactionStatus)}
- ${renderContextNotice(activeSession, props.sessions?.defaults?.contextTokens ?? null)}
+ ${renderContextNotice(activeSession, props.sessions?.defaults?.contextTokens ?? null, props.messages as Array<{ message: Record<string, unknown> }>)}


P1: Pass the actual message shape to context usage lookup

renderContextNotice now expects each entry to have a nested message field, but this call passes props.messages (raw chat message objects) and only type-casts them to that shape. In normal sessions this makes messages[idx].message undefined for every entry, so the backward scan never finds per-turn usage and the code falls back to cumulative session.inputTokens, leaving the context percentage inflated (and now always visible) instead of reflecting the latest turn’s true context size.
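
One way to address this, sketched under the assumption that props.messages holds raw message records as the comment says (the real types may differ), is to wrap each entry into the { message } shape at the call site instead of casting:

```ts
// Sketch only (assumed types): wrap raw chat messages into the { message }
// shape that renderContextNotice expects, instead of casting and losing data.
type RawMessage = Record<string, unknown>;
type WrappedMessage = { message: RawMessage };

function wrapForContextNotice(messages: RawMessage[] | undefined): WrappedMessage[] {
  return (messages ?? []).map((m) => ({ message: m }));
}

// At the call site in renderChat (illustrative):
// ${renderContextNotice(activeSession, props.sessions?.defaults?.contextTokens ?? null, wrapForContextNotice(props.messages))}
```

The alternative is to have renderContextNotice accept the raw message objects directly and drop the nested message field from its expected entry type.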

Useful? React with 👍 / 👎.

@steipete
Contributor

Closing this as implemented after Codex review.

PR #45913 is superseded. Current main already uses fresh Control UI context snapshots instead of cumulative input totals, and the maintainer-linked replacement landed on main as commit ebb08dc70e1fa0690e9f879699454500d1969762.

What I checked:

  • Maintainer-linked replacement is on main: The PR timeline includes a maintainer cross-reference to closed PR fix(ui): use current context usage in Control UI #71462, and git log on main shows ebb08dc70e1fa0690e9f879699454500d1969762 fix(ui): use current context usage in Control UI. (ebb08dc70e1f)
  • Current context notice uses fresh snapshot totals: getContextNoticeViewModel reads session.totalTokens, ignores stale values via totalTokensFresh, and no longer uses cumulative inputTokens for the notice percentage. (ui/src/ui/chat/context-notice.ts:64, ec8dbc459558)
  • Regression test proves cumulative input is ignored: The test passes inputTokens: 757_300 and totalTokens: 190_000, then asserts the rendered notice shows 190k / 200k and explicitly does not show 757.3k / 200k (see the sketch after this list). (ui/src/ui/chat/context-notice.test.ts:32, ec8dbc459558)
  • Fresh context snapshots are wired through session updates: Session state updates preserve totalTokens and totalTokensFresh from websocket payloads, and tests cover both fresh updates and clearing stale totals. (ui/src/ui/controllers/sessions.test.ts:327, ec8dbc459558)
  • Changelog records the landed fix: The unreleased changelog says Control UI /usage now uses the fresh context snapshot for context percentage and includes cache-write tokens in the cache-hit denominator. (CHANGELOG.md:28, ec8dbc459558)
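
A self-contained sketch of the behavior that regression test locks in: the helper below is hypothetical (the real getContextNoticeViewModel in context-notice.ts has its own API), and a vitest-style runner is assumed.

```ts
// Hypothetical illustration only; not the repo's actual test or view model.
import { expect, test } from "vitest";

type SessionLike = {
  inputTokens: number;      // cumulative across turns (must be ignored)
  totalTokens: number;      // fresh snapshot of the current context
  totalTokensFresh: boolean;
  contextTokens: number;    // context window size
};

const fmtK = (n: number) => `${Math.round(n / 100) / 10}k`;

function noticeLabel(s: SessionLike): string {
  const used = s.totalTokensFresh ? s.totalTokens : 0; // stale totals are dropped
  return `${fmtK(used)} / ${fmtK(s.contextTokens)}`;
}

test("context notice ignores cumulative input tokens", () => {
  const label = noticeLabel({
    inputTokens: 757_300,
    totalTokens: 190_000,
    totalTokensFresh: true,
    contextTokens: 200_000,
  });
  expect(label).toBe("190k / 200k");
  expect(label).not.toContain("757.3k");
});
```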

So I’m closing this as already implemented rather than keeping a duplicate issue open.

Review notes: reviewed against ec8dbc459558; fix evidence: commit ebb08dc70e1f.

@steipete closed this on Apr 25, 2026