
GPT-5.4 context window resolves inconsistently after reboot/fresh session (200k / 272k / 1050k disagreement) #69453

@marklights54-byte

Description


Summary

On macOS with OpenClaw 2026.4.15, openai-codex/gpt-5.4 can resolve to different context limits depending on which layer you look at, especially after reboot or fresh session creation.

Observed split:

  • fresh Telegram DM session seeded at 200000
  • Mac app model catalog showed 272000 for openai-codex -> gpt-5.4
  • provider/runtime layer knew GPT-5.4 should be 1050000

This makes the system feel broken even when the configured model is correct.

Environment

  • OpenClaw: 2026.4.15
  • Install: pnpm
  • OS: macOS 26.4.1 arm64
  • Model path: openai-codex/gpt-5.4
  • Config: no agents.defaults.contextTokens, no GPT-5.4 context override in ~/.openclaw/openclaw.json

Symptoms

  1. OpenClaw app / status showed Telegram DM session at 200k
  2. Mac app model-catalog cache showed 272k
  3. Gateway/provider layer had GPT-5.4 default/native context at 1050000
  4. Rebooting and starting a fresh session brought the stale values back, even after a previous live-session fix had appeared to hold

Repro

  1. Set default model to openai-codex/gpt-5.4
  2. Restart gateway
  3. Start a fresh Telegram DM session or use /new
  4. Check:
    • openclaw status
    • ~/.openclaw/agents/main/sessions/sessions.json
    • ~/Library/Application Support/OpenClaw/model-catalog/models.generated.js

Observed result

  • live session may be created with contextTokens: 200000
  • app catalog may still report 272000

Expected result

GPT-5.4 should resolve consistently to the full 1050000-token (1M-class) context budget across:

  • fresh session creation
  • app display
  • openclaw status
  • cached model catalog

Findings

1. Fresh session seeding path falls back to 200k

In:

  • /opt/homebrew/lib/node_modules/openclaw/dist/session-store.runtime-D0Wf4U9r.js

Observed logic:

resolveContextTokensForModel(... allowAsyncLoad: false, fallbackContextTokens: 2e5) ?? 2e5

If model context lookup does not resolve synchronously, the session gets seeded with 200000.

2. Inline/directive persistence path also falls back to 200k

In:

  • /opt/homebrew/lib/node_modules/openclaw/dist/directive-handling.persist.runtime-XUaMaVxD.js

Observed logic:

resolveContextTokensForModel(... allowAsyncLoad: false) ?? 2e5

3. Session defaults display path also prefers sync lookup

In:

  • /opt/homebrew/lib/node_modules/openclaw/dist/session-utils-hDe0hdis.js

Observed logic:

lookupContextTokens(resolved.model, { allowAsyncLoad: false }) ?? 2e5

4. App-side cached model catalog can drift from provider/runtime truth

In:

  • ~/Library/Application Support/OpenClaw/model-catalog/models.generated.js

Observed value before local patch:

  • openai-codex -> gpt-5.4 -> contextWindow: 272000

5. Provider/runtime layer had the correct larger value

In:

  • /opt/homebrew/lib/node_modules/openclaw/dist/openai-codex-provider-QEuYACvP.js

Observed value (105e4 = 1050000):

OPENAI_CODEX_GPT_54_DEFAULT_CONTEXT_TOKENS = 105e4

Why this is confusing

Three layers disagree:

  • provider/runtime says one thing
  • app catalog says another
  • live session store says a third

So users can be correctly configured and still see wildly wrong context numbers.

Local workaround that fixed the live session

As a local test, I changed the session/runtime lookup paths from allowAsyncLoad: false to allowAsyncLoad: true, aligned the cached model catalog with the provider default, and repaired the live session entry. After that, the current Telegram DM session correctly showed 130k/1050k in openclaw status.

That suggests the upstream issue is not model support itself, but inconsistent context resolution paths.

Suggested fix

  1. Stop seeding fresh sessions with 2e5 when model lookup is unresolved but provider/model metadata is known.
  2. Make session creation, status display, and app model catalog derive GPT-5.4 context from one consistent source.
  3. Invalidate or refresh model-catalog cache when provider/runtime metadata changes.
  4. Add regression coverage for openai-codex/gpt-5.4 across:
    • fresh session creation
    • reboot/relaunch
    • app model catalog display
    • openclaw status

One-line summary

OpenClaw 2026.4.15 can correctly route openai-codex/gpt-5.4 while still incorrectly seeding or displaying its context budget as 200k or 272k after reboot, because session seeding, status display, and app catalog resolve context from inconsistent paths.
