[Bug]: Control UI "100% context used" warning can be incorrect / stale and does not match actual session context #49076

@allenhurff

Description

Bug type

Behavior bug (incorrect output/state without crash)

Summary

Control UI can show “100% context used” for a session whose actual reported context usage is much lower.

Steps to reproduce

  1. Open a long-running session in the Control UI / web chat.
  2. Observe the red context used warning chip in the chat pane.
  3. Compare it with /status or session status for the same session.
  4. Note that the two values can disagree significantly.

Expected behavior

The Control UI context warning should reflect the session’s actual current context usage, or at least closely match the value reported by session status.

Actual behavior

The Control UI showed roughly `292k / 272k` and marked the session as 100% context used, while session status for the same session reported `105k / 272k (39%)`. Chat continued to function normally, so the UI warning appeared to be misleading rather than a sign of real context exhaustion.

OpenClaw version

2026.3.13 (61d171a)

Operating system

macOS (Darwin 25.4.0, arm64)

Install method

npm global

Model

openai-codex/gpt-5.4

Provider / routing chain

openclaw -> openai-codex (oauth/default)

Config file / key location

Not obviously config-specific; observed in the Control UI for session `agent:main:main`.

Additional provider/model setup details

No response

Logs, screenshots, and evidence

Observed mismatch for the same session:

- Control UI warning: ~`292k / 272k` -> `100% context used`
- Session status: `105k / 272k (39%)`
- Compactions: `0`

The UI warning appears stale or based on a different metric than live/current context usage.
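The arithmetic behind the two readings supports that interpretation. A minimal sketch of the discrepancy, using only the numbers from this report (the helper name and the clamping behavior are assumptions about how the chip might compute its percentage, not a description of the actual implementation):

```python
# Sketch: the two readings from the report, run through the same
# percentage calculation. The 292k figure is the stale/other-metric
# value shown by the Control UI chip; 105k is the live /status value.
# percent_used() is a hypothetical helper, assumed to clamp at 100.

def percent_used(tokens_used: int, context_window: int) -> int:
    """Percentage of the context window consumed, clamped to 0-100."""
    raw = round(100 * tokens_used / context_window)
    return max(0, min(100, raw))

CONTEXT_WINDOW = 272_000

ui_reading = percent_used(292_000, CONTEXT_WINDOW)      # chip's stale figure
status_reading = percent_used(105_000, CONTEXT_WINDOW)  # live /status figure

print(ui_reading, status_reading)  # 100 vs 39
```

Note that 292k exceeds the 272k window (raw value ~107%), which is why a clamped display would pin at exactly "100% context used"; a live counter for an uncompacted session should not exceed the window at all, pointing to a stale or cumulative token count feeding the chip.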

Impact and severity

Affected: Control UI / web chat users
Severity: Medium (misleading UI, causes false alarm)
Frequency: Reproducible on longer-running sessions
Consequence: Users may think context is exhausted when it is not, and may compact/reset unnecessarily

Additional information

Global defaults were set to openai-codex/gpt-5.4 with xhigh thinking. The issue appears UI-side and does not seem tied to a special provider-routing chain or custom model proxy setup.

Metadata

Assignees

No one assigned

Labels

- bug (Something isn't working)
- bug:behavior (Incorrect behavior without a crash)
