fix(control-ui): use actual context size instead of cumulative input tokens for context% display #45913
neosun100 wants to merge 1 commit into openclaw:main
Conversation
Greptile Summary
This PR fixes a real and impactful bug where context-usage indicators displayed wildly inflated percentages by summing input tokens across all conversation turns instead of using the current turn's actual context size.
Confidence Score: 4/5
Last reviewed commit: da48867
```diff
  const color = `rgb(${r}, ${g}, ${b})`;
- const bgOpacity = 0.08 + 0.08 * t;
+ const bgOpacity = ratio < 0.75 ? 0.06 : 0.08 + 0.08 * Math.min(ratio, 1);
```
**Background opacity jumps discontinuously at the 75% boundary**
At `ratio = 0.749`, `bgOpacity = 0.06`. At `ratio = 0.750`, `bgOpacity = 0.08 + 0.08 * 0.75 = 0.14` — a 2.33× jump at the exact moment the color also transitions from blue to amber. This will produce a jarring visual step rather than a smooth progression.
The green→blue boundary (at 50%) is smooth since both zones share `0.06`. The amber→red boundary (at 85%) is smooth since both use the same formula. Only the blue→amber (75%) transition has this discontinuity.
Consider using the same formula from 50% onwards, or a lerp within each tier, e.g.:
```suggestion
const bgOpacity = ratio < 0.5 ? 0.06 : Math.min(0.06 + 0.10 * ((ratio - 0.5) / 0.5), 0.16);
```
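To make the discontinuity concrete, here is a minimal standalone TypeScript sketch (the helper names are illustrative, not from the PR) that evaluates both formulas around the tier boundaries:

```ts
// Current formula: steps from 0.06 to 0.14 at ratio = 0.75.
const bgOpacityCurrent = (ratio: number): number =>
  ratio < 0.75 ? 0.06 : 0.08 + 0.08 * Math.min(ratio, 1);

// Suggested formula: one lerp from 0.06 at 50% up to a 0.16 cap.
const bgOpacitySuggested = (ratio: number): number =>
  ratio < 0.5 ? 0.06 : Math.min(0.06 + 0.10 * ((ratio - 0.5) / 0.5), 0.16);

for (const ratio of [0.749, 0.75, 0.849, 0.85]) {
  console.log(
    ratio,
    bgOpacityCurrent(ratio).toFixed(3),   // 0.060, 0.140, 0.148, 0.148
    bgOpacitySuggested(ratio).toFixed(3), // 0.110, 0.110, 0.130, 0.130
  );
}
```

The suggested variant is continuous at every boundary, so the background brightens steadily while only the hue changes between tiers.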
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: da48867d91
```diff
@@ -1163,7 +1212,7 @@ export function renderChat(props: ChatProps) {
      ${renderFallbackIndicator(props.fallbackStatus)}
      ${renderCompactionIndicator(props.compactionStatus)}
-     ${renderContextNotice(activeSession, props.sessions?.defaults?.contextTokens ?? null)}
+     ${renderContextNotice(activeSession, props.sessions?.defaults?.contextTokens ?? null, props.messages as Array<{ message: Record<string, unknown> }>)}
```
Pass the actual message shape to context usage lookup
`renderContextNotice` now expects each entry to have a nested `message` field, but this call passes `props.messages` (raw chat message objects) and only type-casts them to that shape. In normal sessions this makes `messages[idx].message` undefined for every entry, so the backward scan never finds per-turn usage and the code falls back to the cumulative `session.inputTokens`, leaving the context percentage inflated (and now always visible) instead of reflecting the latest turn's true context size.
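One concise resolution, sketched under the assumption that `props.messages` is a flat array of raw message objects (the `wrappedMessages` name is hypothetical), is to wrap each entry at the call site so the nested shape matches what the backward scan reads:

```ts
// Hypothetical call-site fix: give each raw message the nested `message`
// field that renderContextNotice expects, instead of type-casting the array.
const rawMessages = (props.messages ?? []) as Array<Record<string, unknown>>;
const wrappedMessages: Array<{ message: Record<string, unknown> }> =
  rawMessages.map((message) => ({ message }));
```

Passing `wrappedMessages` as the third argument (or, alternatively, changing `renderContextNotice` to accept the flat shape directly) would let the per-turn usage lookup actually find data.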
Force-pushed da48867 to f6bdf7b (Compare)
Closing this as implemented after Codex review. PR #45913 is superseded. What I checked:
So I'm closing this as already implemented rather than keeping a duplicate issue open. Review notes: reviewed against ec8dbc459558; fix evidence: commit ebb08dc70e1f.
fix(control-ui): use actual context size instead of cumulative input tokens for context% display
The context usage banner and per-message context% indicators both used cumulative input tokens (sum of all turns) divided by the context window size, which produced wildly inflated percentages (e.g. 89% when actual usage was 15%). This happened because each LLM turn includes the full context as input, so summing across turns double/triple/N-counts the context.
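A quick illustration with made-up numbers (not from the PR) shows how large the inflation gets after only a few turns:

```ts
// Hypothetical numbers: a 200k-token context window and five turns whose
// per-turn input (the full context at that moment) grows as the chat does.
const contextWindow = 200_000;
const perTurnInput = [26_000, 27_500, 29_000, 30_500, 32_000];

const cumulative = perTurnInput.reduce((a, b) => a + b, 0); // 145_000
const latest = perTurnInput[perTurnInput.length - 1]; // 32_000

console.log(`inflated: ${Math.round((cumulative / contextWindow) * 100)}%`); // 73%
console.log(`actual:   ${Math.round((latest / contextWindow) * 100)}%`); // 16%
```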
Bug fix (chat.ts):
`renderContextNotice` now walks the message list backwards to find the last assistant message with usage data and uses its `input`/`inputTokens` value — which represents the true current context size — instead of the session-level cumulative `inputTokens`.
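A minimal sketch of that backward scan, under assumed message shapes (the real function in ui/src/ui/views/chat.ts differs in detail):

```ts
type Entry = { message: Record<string, unknown> };

// Walk backwards and return the latest assistant turn's input tokens,
// which already include the whole context, so no summing is needed.
function latestContextTokens(entries: Entry[]): number | null {
  for (let i = entries.length - 1; i >= 0; i--) {
    const msg = entries[i].message;
    if (msg.role !== "assistant") continue;
    const usage = msg.usage as { input?: number; inputTokens?: number } | undefined;
    const input = usage?.input ?? usage?.inputTokens;
    if (typeof input === "number") return input;
  }
  return null; // no per-turn usage found; the caller may fall back
}
```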
Bug fix (grouped-render.ts):
`extractGroupMeta` now tracks `lastInput` (the most recent turn's input tokens) separately from the cumulative `input` sum, and uses it for `contextPercent` (a sketch of this bookkeeping follows the tier list below).
Enhancement: The context banner is now always visible with four color tiers instead of only appearing at ≥85%:
- <50%: green (info icon) — plenty of room
- 50–75%: blue (info icon) — moderate
- 75–85%: amber (info icon) — getting full
- ≥85%: red gradient (warning icon) — danger zone
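The bookkeeping referenced above, as a self-contained sketch with assumed shapes (the real `extractGroupMeta` carries more state):

```ts
type Turn = { role: string; usage?: { input?: number } };

// Track the cumulative sum and the latest turn's input separately; only
// the latter is a valid numerator for the context percentage.
function contextPercent(turns: Turn[], contextTokens: number): number | null {
  let input = 0; // cumulative input across turns (useful for cost, not context)
  let lastInput: number | null = null; // most recent turn's context size
  for (const turn of turns) {
    const turnInput = turn.usage?.input;
    if (turn.role === "assistant" && typeof turnInput === "number") {
      input += turnInput;
      lastInput = turnInput; // overwritten each turn; ends as the latest
    }
  }
  return lastInput == null ? null : Math.round((lastInput / contextTokens) * 100);
}
```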
Fixes #41503
Summary
Describe the problem and fix in 2–5 bullets:
Change Type (select all)
Scope (select all touched areas)
Linked Issue/PR
User-visible / Behavior Changes
List user-visible changes (including defaults/config).
If none, write None.
Security Impact (required)
(Yes/No for each checklist item; if Yes, explain risk + mitigation.)
Repro + Verification
Environment
Steps
Expected
Actual
Evidence
Attach at least one:
Human Verification (required)
What you personally verified (not just CI), and how:
Review Conversations
If a bot review conversation is addressed by this PR, resolve that conversation yourself. Do not leave bot review conversation cleanup for maintainers.
Compatibility / Migration
(Yes/No for each checklist item.)
Failure Recovery (if this breaks)
Risks and Mitigations
List only real risks for this PR. Add/remove entries as needed. If none, write None.
None.