Bug type
Behavior bug (incorrect output/state without crash)
Summary
Control UI can show “100% context used” for a session whose actual reported context usage is much lower.
Steps to reproduce
- Open a long-running session in the Control UI / web chat.
- Observe the red `context used` warning chip in the chat pane.
- Compare it with `/status` or session status for the same session.
- Note that the two values can disagree significantly.
Expected behavior
The Control UI context warning should reflect the session’s actual current context usage, or at least closely match the value reported by session status.
Actual behavior
The Control UI showed roughly `292k / 272k` and marked the session as `100% context used`, while session status for the same session reported `105k / 272k (39%)`. Chat continued functioning normally, so the UI warning appeared misleading rather than reflecting real context exhaustion.
OpenClaw version
2026.3.13 (61d171a)
Operating system
macOS (Darwin 25.4.0, arm64)
Install method
npm global
Model
openai-codex/gpt-5.4
Provider / routing chain
openclaw -> openai-codex (oauth/default)
Config file / key location
Not obviously config-specific; observed in Control UI for session agent:main:main.
Additional provider/model setup details
No response
Logs, screenshots, and evidence
Observed mismatch for the same session:
- Control UI warning: ~`292k / 272k` -> `100% context used`
- Session status: `105k / 272k (39%)`
- Compactions: `0`
The UI warning appears stale or based on a different metric than live/current context usage.
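The reported numbers are consistent with the chip computing its percentage from a cumulative token total rather than the latest turn's live context size, then clamping at 100%. The sketch below is purely hypothetical (all names and the data model are invented for illustration; this is not OpenClaw's actual code) and just shows how those two metrics can produce `100%` and `39%` for the same session:

```typescript
// Hypothetical illustration of a cumulative-token metric diverging from
// live context usage. All names here are invented for the sketch; they
// are not OpenClaw's actual data model.
interface Turn {
  promptTokens: number; // tokens occupying the context window on this turn
}

const CONTEXT_LIMIT = 272_000;

// Stale/cumulative metric: sum tokens over every turn in the session.
function cumulativeUsage(turns: Turn[]): number {
  return turns.reduce((sum, t) => sum + t.promptTokens, 0);
}

// Live metric: only the most recent turn's context size matters.
function liveUsage(turns: Turn[]): number {
  return turns.length > 0 ? turns[turns.length - 1].promptTokens : 0;
}

function percentUsed(tokens: number): number {
  // Clamping is what would render an impossible 292k / 272k as "100%".
  return Math.min(100, Math.round((tokens / CONTEXT_LIMIT) * 100));
}
```

With three example turns of 90k, 97k, and 105k prompt tokens, the cumulative metric reaches 292k (clamped to 100%) while the live metric reports 105k (39%), matching the observed mismatch.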
Impact and severity
Affected: Control UI / web chat users
Severity: Medium (misleading UI, causes false alarm)
Frequency: Reproducible on longer-running sessions
Consequence: Users may think context is exhausted when it is not, and may compact/reset unnecessarily
Additional information
Global defaults were set to `openai-codex/gpt-5.4` with `xhigh` thinking. The issue appears UI-side and does not seem tied to a special provider-routing chain or custom model proxy setup.