fix stale Control UI context warning math#49113
Greptile Summary: This PR fixes a false-positive "100% context used" warning in the chat view by switching from the accumulated input token count to the current `totalTokens` snapshot.
Confidence Score: 4/5
Prompt To Fix All With AI
This is a comment left during a code review.
Path: ui/src/ui/views/chat.test.ts
Line: 252-307
Comment:
**Missing positive-case test for context warning**
Both new tests only verify that the warning is *suppressed*. There is no test asserting that the warning *does* appear when `totalTokensFresh` is `true` (or unset) and `totalTokens` is above the 85% threshold. Without a positive case, a future regression that accidentally sets `used = 0` unconditionally would go undetected.
Consider adding a test like:
```ts
it("shows the context warning when totalTokensFresh is true and total tokens exceed threshold", () => {
  const container = document.createElement("div");
  render(
    renderChat(
      createProps({
        sessions: {
          ts: 0,
          path: "",
          count: 1,
          defaults: { modelProvider: "openai", model: "gpt-5", contextTokens: 128_000 },
          sessions: [
            {
              key: "main",
              kind: "direct",
              updatedAt: null,
              totalTokens: 115_000,
              totalTokensFresh: true,
              contextTokens: 128_000,
            },
          ],
        },
      }),
    ),
    container,
  );
  expect(container.textContent).toContain("context used");
});
```
How can I resolve this? If you propose a fix, please make it concise.

Last reviewed commit: 78281d2
```ts
it("does not show the context warning from accumulated input tokens when current context is below threshold", () => {
  const container = document.createElement("div");
  render(
    renderChat(
      createProps({
        sessions: {
          ts: 0,
          path: "",
          count: 1,
          defaults: { modelProvider: "openai", model: "gpt-5", contextTokens: 128_000 },
          sessions: [
            {
              key: "main",
              kind: "direct",
              updatedAt: null,
              inputTokens: 128_000,
              totalTokens: 64_000,
              contextTokens: 128_000,
            },
          ],
        },
      }),
    ),
    container,
  );

  expect(container.textContent).not.toContain("context used");
});

it("hides the context warning when the cached total token snapshot is stale", () => {
  const container = document.createElement("div");
  render(
    renderChat(
      createProps({
        sessions: {
          ts: 0,
          path: "",
          count: 1,
          defaults: { modelProvider: "openai", model: "gpt-5", contextTokens: 128_000 },
          sessions: [
            {
              key: "main",
              kind: "direct",
              updatedAt: null,
              totalTokens: 120_000,
              totalTokensFresh: false,
              contextTokens: 128_000,
            },
          ],
        },
      }),
    ),
    container,
  );

  expect(container.textContent).not.toContain("context used");
});
```
Closing as superseded by #71297. Both PRs address the same Control UI chat warning bug tracked by #49076: stale or cumulative context measurements causing misleading warnings. #71297 is the current canonical fix because it also applies live session metadata from gateway events, coalesces overlapping session refreshes so the value does not remain stale under chat churn, and suppresses the warning when the cached snapshot is stale.
AI-assisted: Codex
Fixes #49076.
Summary

This change stops the Control UI from showing misleading `100% context used` warnings when the session row only has accumulated input token counts rather than a fresh current-context snapshot.

Root Cause

The chat view used `inputTokens / contextTokens` for the warning banner. `inputTokens` is accumulated across calls and tool loops, so it can reach the context limit even when the current live context is much smaller.

Fix

The warning now keys off `totalTokens`, which the gateway already persists as the current context-sized token total, and it suppresses the warning entirely when that cached total is marked stale. The UI tests now cover both the accumulated-input false positive and the stale-snapshot case.

Validation

`pnpm exec vitest run "ui/src/ui/views/chat.test.ts"`
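The fixed warning condition described above can be sketched as a small pure function. This is an illustrative assumption, not the actual `chat.ts` code: the function name `contextWarning`, the `SessionRow` shape, and the 0.85 threshold (inferred from the review comment's "85% threshold") are all hypothetical.

```typescript
// Hypothetical sketch of the corrected warning logic; names and the
// 0.85 threshold are assumptions based on the PR discussion.
interface SessionRow {
  inputTokens?: number;      // accumulated across calls; deliberately ignored
  totalTokens?: number;      // current context-sized token total
  totalTokensFresh?: boolean; // false when the cached total is stale
  contextTokens?: number;    // model context window size
}

function contextWarning(row: SessionRow): string | null {
  const limit = row.contextTokens ?? 0;
  // Key off the current total, never the accumulated input count,
  // and treat a stale snapshot as "no usable measurement".
  const used = row.totalTokensFresh === false ? 0 : (row.totalTokens ?? 0);
  if (limit <= 0 || used / limit < 0.85) return null;
  return `${Math.min(100, Math.round((used / limit) * 100))}% context used`;
}
```

With this shape, the two new tests and the reviewer's suggested positive case map directly onto the three branches: accumulated `inputTokens` alone never trips the warning, a stale snapshot suppresses it, and a fresh total above threshold shows it.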