
fix stale Control UI context warning math #49113

Closed

biefan wants to merge 1 commit into openclaw:main from biefan:codex/fix-ui-context-warning-usage-source

Conversation


@biefan biefan commented Mar 17, 2026

AI-assisted: Codex

Fixes #49076.

Summary

This change stops the Control UI from showing misleading "100% context used" warnings when a session row only carries accumulated input-token counts rather than a fresh snapshot of the current context.

Root Cause

The chat view used inputTokens / contextTokens for the warning banner. inputTokens is accumulated across calls and tool loops, so it can reach the context limit even when the current live context is much smaller.
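The mismatch can be shown with a toy calculation. This is a hedged sketch; `Turn`, `accumulatedInputTokens`, and the token counts are illustrative, not names from the codebase:

```typescript
// Each agent turn re-sends prior context, so the accumulated input-token
// counter grows across calls and tool-loop iterations even while the live
// context stays bounded. (Illustrative types and numbers, not repo code.)
type Turn = { promptTokens: number };

function accumulatedInputTokens(turns: Turn[]): number {
  // Sum of prompt tokens across every call, including tool-loop iterations.
  return turns.reduce((sum, t) => sum + t.promptTokens, 0);
}

// Three tool-loop turns, each re-sending a ~50-60k-token context:
const turns: Turn[] = [
  { promptTokens: 50_000 },
  { promptTokens: 55_000 },
  { promptTokens: 60_000 },
];
const contextTokens = 128_000;

// The accumulated sum (165_000) crosses the limit even though the live
// context on the last turn (60_000) is under half of it.
console.log(accumulatedInputTokens(turns) >= contextTokens); // true
```

Keying the warning banner off this accumulated sum is what produced the false 100% warnings.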

Fix

The warning now keys off totalTokens, which the gateway already persists as the current context-sized token total, and it suppresses the warning entirely when that cached total is marked stale. The UI tests now cover both the accumulated-input false positive and the stale-snapshot case.
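A minimal sketch of the corrected condition, assuming the 85% threshold mentioned in review and the field names from this PR. `shouldWarn` and `SessionRow` are illustrative stand-ins, not the actual renderContextNotice code:

```typescript
// Simplified model of the session row fields involved in the fix.
type SessionRow = {
  inputTokens?: number;
  totalTokens?: number;
  totalTokensFresh?: boolean;
  contextTokens?: number;
};

const WARN_THRESHOLD = 0.85; // assumed 85% threshold from the review discussion

function shouldWarn(row: SessionRow): boolean {
  if (!row.contextTokens) return false;
  if (row.totalTokensFresh === false) return false; // stale snapshot: suppress
  const used = row.totalTokens ?? 0; // was: row.inputTokens (accumulated)
  return used / row.contextTokens >= WARN_THRESHOLD;
}

// Accumulated-input false positive no longer fires (live context is 50%):
console.log(shouldWarn({ inputTokens: 128_000, totalTokens: 64_000, contextTokens: 128_000 })); // false
// A stale snapshot is suppressed even when it sits near the limit:
console.log(shouldWarn({ totalTokens: 120_000, totalTokensFresh: false, contextTokens: 128_000 })); // false
```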

Validation

  • pnpm exec vitest run "ui/src/ui/views/chat.test.ts"

@biefan biefan changed the title [codex] fix stale Control UI context warning math fix stale Control UI context warning math Mar 17, 2026
@biefan biefan marked this pull request as ready for review March 17, 2026 16:52
Contributor

greptile-apps Bot commented Mar 17, 2026

Greptile Summary

This PR fixes a false-positive "100% context used" warning in the chat view by switching from the accumulated inputTokens counter to the gateway-persisted totalTokens snapshot as the basis for context-usage calculation, and suppressing the warning entirely when that snapshot is marked stale (totalTokensFresh === false).

  • ui/src/ui/types.ts: Adds the totalTokensFresh?: boolean field to GatewaySessionRow, aligning the UI type with the backend session types.
  • ui/src/ui/views/chat.ts: renderContextNotice now uses totalTokens (fresh context size) instead of inputTokens (cumulative), and returns nothing when the cached snapshot is stale — preventing misleading banners during active tool loops.
  • ui/src/ui/views/chat.test.ts: Two new negative-case tests guard the fix, but a positive-case test (warning appears when totalTokensFresh: true and totalTokens is above the 85% threshold) is missing, leaving a potential blind spot for future regressions.

Confidence Score: 4/5

  • Safe to merge; the logic fix is correct and well-aligned with the gateway's token freshness semantics.
  • The one-line change in chat.ts is clearly correct — totalTokens reflects the live context size and the gateway already normalizes totalTokensFresh to true/false before sending it to the UI, so the undefined edge case is benign. The only gap is the absence of a positive test asserting the warning still fires under the right conditions, which slightly lowers confidence that a future accidental no-op won't regress silently.
  • No files require special attention; the test file could benefit from an additional positive-case assertion.

Last reviewed commit: 78281d2

Comment on lines +252 to +307

```ts
it("does not show the context warning from accumulated input tokens when current context is below threshold", () => {
  const container = document.createElement("div");
  render(
    renderChat(
      createProps({
        sessions: {
          ts: 0,
          path: "",
          count: 1,
          defaults: { modelProvider: "openai", model: "gpt-5", contextTokens: 128_000 },
          sessions: [
            {
              key: "main",
              kind: "direct",
              updatedAt: null,
              inputTokens: 128_000,
              totalTokens: 64_000,
              contextTokens: 128_000,
            },
          ],
        },
      }),
    ),
    container,
  );

  expect(container.textContent).not.toContain("context used");
});

it("hides the context warning when the cached total token snapshot is stale", () => {
  const container = document.createElement("div");
  render(
    renderChat(
      createProps({
        sessions: {
          ts: 0,
          path: "",
          count: 1,
          defaults: { modelProvider: "openai", model: "gpt-5", contextTokens: 128_000 },
          sessions: [
            {
              key: "main",
              kind: "direct",
              updatedAt: null,
              totalTokens: 120_000,
              totalTokensFresh: false,
              contextTokens: 128_000,
            },
          ],
        },
      }),
    ),
    container,
  );

  expect(container.textContent).not.toContain("context used");
});
```
P2 Missing positive-case test for context warning

Both new tests only verify that the warning is *suppressed*. There is no test asserting that the warning *does* appear when `totalTokensFresh` is `true` (or unset) and `totalTokens` is above the 85% threshold. Without a positive case, a future regression that accidentally sets `used = 0` unconditionally would go undetected.

Consider adding a test like:

```ts
it("shows the context warning when totalTokensFresh is true and total tokens exceed threshold", () => {
  const container = document.createElement("div");
  render(
    renderChat(
      createProps({
        sessions: {
          ts: 0,
          path: "",
          count: 1,
          defaults: { modelProvider: "openai", model: "gpt-5", contextTokens: 128_000 },
          sessions: [
            {
              key: "main",
              kind: "direct",
              updatedAt: null,
              totalTokens: 115_000,
              totalTokensFresh: true,
              contextTokens: 128_000,
            },
          ],
        },
      }),
    ),
    container,
  );
  expect(container.textContent).toContain("context used");
});
```

Member

BunsDev commented Apr 25, 2026

Closing as superseded by #71297. Both PRs address the same Control UI chat warning bug tracked by #49076: stale or cumulative context measurements causing a misleading "100% context used" banner in the UI.

#71297 is the current canonical fix because it also applies live session metadata from gateway events, coalesces overlapping session refreshes so the value does not remain stale under chat churn, suppresses totalTokensFresh=false snapshots, and adds focused coverage for those behaviors.

@BunsDev BunsDev closed this Apr 25, 2026

Development

Successfully merging this pull request may close these issues.

[Bug]: Control UI "100% context used" warning can be incorrect / stale and does not match actual session context
