Skip to content

fix(ui): use current context usage in chat notice#48268

Closed
Kemalau wants to merge 8 commits intoopenclaw:mainfrom
Kemalau:clawoss/fix/control-ui-context-percent
Closed

fix(ui): use current context usage in chat notice#48268
Kemalau wants to merge 8 commits intoopenclaw:mainfrom
Kemalau:clawoss/fix/control-ui-context-percent

Conversation

@Kemalau
Copy link
Copy Markdown
Contributor

@Kemalau Kemalau commented Mar 16, 2026

Summary

  • use current session token usage for the Control UI context notice instead of lifetime input tokens
  • add a regression test covering sessions where lifetime input tokens exceed the active context window after compaction

Testing

  • pnpm exec vitest run --config vitest.unit.config.ts ui/src/ui/views/chat.test.ts

Why

After compaction, inputTokens can reflect lifetime accumulated usage while totalTokens reflects the active context window. The chat notice was using the lifetime value, which could incorrectly pin the UI at 100% context used.

@greptile-apps
Copy link
Copy Markdown
Contributor

greptile-apps Bot commented Mar 16, 2026

Greptile Summary

This PR fixes a bug where the context usage notice in the chat UI could incorrectly display 100%+ after a compaction event, by switching from inputTokens (lifetime accumulated) to totalTokens (current active context window) when calculating context utilisation.

  • ui/src/ui/views/chat.ts: renderContextNotice now reads session.totalTokens ?? session.inputTokens ?? 0 — preferring the current-window figure and falling back to inputTokens for sessions that don't yet populate totalTokens, preserving backward compatibility.
  • ui/src/ui/views/chat.test.ts: Adds a regression test confirming that when inputTokens exceeds the context window limit (post-compaction scenario) but totalTokens is within bounds, the UI displays the correct percentage and compact token counts.

Confidence Score: 5/5

  • This PR is safe to merge — the change is minimal, well-reasoned, and covered by a targeted regression test.
  • The fix is a single-line change with a clear fallback chain that preserves backward compatibility for sessions where totalTokens is not present. The new test directly exercises the post-compaction scenario described in the PR. No unrelated code is touched and there are no observable side-effects.
  • No files require special attention.

Last reviewed commit: b30323f

@openclaw-barnacle openclaw-barnacle Bot added the scripts Repository scripts label Mar 16, 2026
@openclaw-barnacle openclaw-barnacle Bot added the agents Agent runtime and tooling label Mar 16, 2026
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 96eb43f834

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

// Prefer the smallest reported maxRSS across process shutdown hooks.
// Some startup paths pull in helper processes/loaders that emit their own
// marker, and the CLI path under test only needs its direct process RSS.
return Math.min(...valuesMb);
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Track main process RSS instead of minimum marker

Using Math.min(...valuesMb) here can under-report startup memory whenever the CLI command spawns helper Node processes that inherit the --import hook: a lightweight child can emit a smaller marker than the actual openclaw.mjs process, so the check passes even if the main CLI exceeds the configured limit. This turns the regression gate into a false-negative in exactly the multi-process startup paths this parser is trying to handle.

Useful? React with 👍 / 👎.

Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 52e3fb2921

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread src/acp/translator.ts
if (updates.length === 0) {
return;
}
await Promise.all(updates.map((update) => this.connection.sessionUpdate(update)));
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 Badge Replay transcript updates sequentially

loadSession can request up to ACP_LOAD_SESSION_REPLAY_LIMIT (1,000,000) transcript messages, but this change dispatches every sessionUpdate in one Promise.all, creating an unbounded number of concurrent update promises. On large sessions this can spike memory and overwhelm the ACP transport, causing slowdowns or failures during session load; the previous per-message await avoided this by applying natural backpressure.

Useful? React with 👍 / 👎.

@openclaw-barnacle openclaw-barnacle Bot added the commands Command implementations label Mar 16, 2026
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 4427f0b2d6

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines 108 to +109
const securityAudit = opts.json
? await runSecurityAudit()
? null
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Run JSON security audit in fallback status path

This change makes statusCommand({ json: true }) emit securityAudit: null instead of running runSecurityAudit, which creates inconsistent status --json output whenever route-first is disabled or bypassed (for example via OPENCLAW_DISABLE_ROUTE_FIRST, where src/cli/route.ts falls back to Commander handlers). In that environment, scripts that parse securityAudit.summary now lose data or break even though the routed JSON path (src/commands/status-json.ts) still provides a full audit payload.

Useful? React with 👍 / 👎.

Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 5469709e3a

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

showDetails: queueOverrides,
},
includeTranscriptUsage: true,
includeTranscriptUsage: false,
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Keep transcript usage enabled for session_status output

buildStatusMessage only backfills prompt/context tokens from the session transcript when includeTranscriptUsage is true, which is the path that recovers missing cached-prompt usage from session metadata. Flipping this to false makes session_status under-report context usage for sessions where totalTokens in the store is incomplete, so the tool can report a safely-low usage percentage even when the active context is near limit.

Useful? React with 👍 / 👎.

@BunsDev
Copy link
Copy Markdown
Member

BunsDev commented Apr 25, 2026

Closing as superseded by #71297. This PR targets the same Control UI chat context notice problem: lifetime/cumulative token measurements being shown as active context usage after compaction.

#71297 is the narrower current review path for that behavior. It updates the chat notice freshness rules, merges live session metadata from gateway events before a full reload completes, coalesces session refreshes, and adds targeted regression tests. Any unrelated command/status/runtime changes should be split into a separate focused PR if they are still needed.

@BunsDev BunsDev closed this Apr 25, 2026
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

agents Agent runtime and tooling app: web-ui App: web-ui commands Command implementations scripts Repository scripts size: S

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants