## Summary

On macOS with OpenClaw 2026.4.15, `openai-codex/gpt-5.4` can resolve to different context limits depending on which layer you look at, especially after reboot or fresh session creation.
Observed split:

- fresh Telegram DM session seeded at `200000`
- Mac app model catalog showed `272000` for `openai-codex -> gpt-5.4`
- provider/runtime layer knew GPT-5.4 should be `1050000`

This makes the system feel broken even when the configured model is correct.
## Environment

- OpenClaw: `2026.4.15`
- Install: `pnpm`
- OS: macOS 26.4.1 arm64
- Model path: `openai-codex/gpt-5.4`
- Config: no `agents.defaults.contextTokens` and no GPT-5.4 context override in `~/.openclaw/openclaw.json`
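
For reference, an explicit override via the `agents.defaults.contextTokens` key named above would look roughly like this in `~/.openclaw/openclaw.json` (a hypothetical sketch: the surrounding schema is assumed from the key path, and no such override was present during this repro):

```json
{
  "agents": {
    "defaults": {
      "contextTokens": 1050000
    }
  }
}
```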
## Symptoms

- OpenClaw app / status showed the Telegram DM session at `200k`
- Mac app model-catalog cache showed `272k`
- Gateway/provider layer had the GPT-5.4 default/native context at `1050000`
- Reboot + fresh session brought the bad values back even after a previous live-session fix looked correct
## Repro

- Set default model to `openai-codex/gpt-5.4`
- Restart the gateway
- Start a fresh Telegram DM session or use `/new`
- Check (one way to inspect all three layers is sketched below):
  - `openclaw status`
  - `~/.openclaw/agents/main/sessions/sessions.json`
  - `~/Library/Application Support/OpenClaw/model-catalog/models.generated.js`
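
Assuming `jq` is available and that `sessions.json` is plain JSON containing the `contextTokens` field shown in the findings below, the three layers can be compared like this:

```sh
# Live session view
openclaw status

# Seeded value(s) in the session store
jq '.. | .contextTokens? // empty' ~/.openclaw/agents/main/sessions/sessions.json

# Cached catalog value(s)
grep -n "contextWindow" "$HOME/Library/Application Support/OpenClaw/model-catalog/models.generated.js"
```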
## Observed result

- live session may be created with `contextTokens: 200000`
- app catalog may still report `272000`
## Expected result

GPT-5.4 should resolve consistently to the full 1M-class context budget across:

- fresh session creation
- app display
- `openclaw status`
- cached model catalog
## Findings

### 1. Fresh session seeding path falls back to 200k

In `/opt/homebrew/lib/node_modules/openclaw/dist/session-store.runtime-D0Wf4U9r.js`, observed logic:

`resolveContextTokensForModel(... allowAsyncLoad: false, fallbackContextTokens: 2e5) ?? 2e5`

If the model context lookup does not resolve synchronously, the session gets seeded with `200000`.
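
A minimal sketch of this failure mode, with a hypothetical stand-in for the real resolver (names and internals here are illustrative, not OpenClaw's actual code):

```js
// Hypothetical resolver: with allowAsyncLoad: false it can only consult an
// in-memory cache, which is empty on a cold start (fresh session after reboot).
const contextCache = new Map();

function resolveContextTokensForModel(model, { allowAsyncLoad, fallbackContextTokens }) {
  if (contextCache.has(model)) return contextCache.get(model);
  if (!allowAsyncLoad) return fallbackContextTokens; // no async catalog fetch allowed
  // ...async catalog/provider lookup would happen here...
}

// Seeding path as observed: both the option and the trailing ?? pin 200k.
const contextTokens =
  resolveContextTokensForModel("openai-codex/gpt-5.4", {
    allowAsyncLoad: false,
    fallbackContextTokens: 2e5,
  }) ?? 2e5;

console.log(contextTokens); // 200000 on a cold cache, regardless of provider metadata
```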
### 2. Inline/directive persistence path also falls back to 200k

In `/opt/homebrew/lib/node_modules/openclaw/dist/directive-handling.persist.runtime-XUaMaVxD.js`, observed logic:

`resolveContextTokensForModel(... allowAsyncLoad: false) ?? 2e5`
### 3. Session defaults display path also prefers sync lookup

In `/opt/homebrew/lib/node_modules/openclaw/dist/session-utils-hDe0hdis.js`, observed logic:

`lookupContextTokens(resolved.model, { allowAsyncLoad: false }) ?? 2e5`
### 4. App-side cached model catalog can drift from provider/runtime truth

In `~/Library/Application Support/OpenClaw/model-catalog/models.generated.js`, observed value before local patch:

`openai-codex -> gpt-5.4 -> contextWindow: 272000`
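
If the generated catalog nests provider → model → metadata (a shape inferred from the value above, not confirmed), the stale entry would look something like:

```js
// models.generated.js (assumed shape; only contextWindow comes from the report)
module.exports = {
  "openai-codex": {
    "gpt-5.4": {
      contextWindow: 272000, // stale; provider/runtime says 1050000
    },
  },
};
```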
### 5. Provider/runtime layer had the correct larger value

In `/opt/homebrew/lib/node_modules/openclaw/dist/openai-codex-provider-QEuYACvP.js`, observed value:

`OPENAI_CODEX_GPT_54_DEFAULT_CONTEXT_TOKENS = 105e4`

(`105e4` is JavaScript exponent notation for 1,050,000, i.e. the expected 1M-class budget.)
## Why this is confusing

Three layers disagree:

- the provider/runtime says one thing (`1050000`)
- the app catalog says another (`272000`)
- the live session store says a third (`200000`)

So users can be correctly configured and still see wildly wrong context numbers.
## Local workaround that fixed the live session

As a local test, changing the session/runtime lookup paths from `allowAsyncLoad: false` to `allowAsyncLoad: true`, aligning the cached model catalog, and repairing the live session entry made the current Telegram DM session show correctly as `130k/1050k` in `openclaw status`.

That suggests the upstream issue is not model support itself, but inconsistent context resolution paths.
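
A before/after sketch of the flag flip, reusing the hypothetical resolver style from finding 1 but with an async load path (top-level `await` assumes an ES module; numbers and internals are illustrative):

```js
// Hypothetical resolver with an async path; a cold cache stands in for the
// state right after a reboot.
async function resolveContextTokensForModel(model, { allowAsyncLoad }) {
  const cache = new Map(); // empty, as after relaunch
  if (cache.has(model)) return cache.get(model);
  if (!allowAsyncLoad) return undefined; // observed behavior: caller falls back to 2e5
  return 1050000; // stands in for awaited provider/catalog metadata
}

const before = (await resolveContextTokensForModel("openai-codex/gpt-5.4", { allowAsyncLoad: false })) ?? 2e5;
const after = (await resolveContextTokensForModel("openai-codex/gpt-5.4", { allowAsyncLoad: true })) ?? 2e5;
console.log(before, after); // 200000 1050000
```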
## Suggested fix

- Stop seeding fresh sessions with `2e5` when the model lookup is unresolved but provider/model metadata is known.
- Make session creation, status display, and the app model catalog derive GPT-5.4 context from one consistent source (see the sketch after this list).
- Invalidate or refresh the model-catalog cache when provider/runtime metadata changes.
- Add regression coverage for `openai-codex/gpt-5.4` across:
  - fresh session creation
  - reboot/relaunch
  - app model catalog display
  - `openclaw status`
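
One possible shape for that single source, sketched with entirely hypothetical names (none of these functions are known OpenClaw APIs): every layer awaits the same resolution, and the hard `2e5` default applies only when neither provider nor catalog metadata exists.

```js
// Hypothetical shared resolver for session seeding, status display, and the
// app catalog alike.
async function resolveContextTokens(modelPath, { provider, catalog }) {
  // 1. Prefer provider/runtime metadata, the freshest source of truth.
  const fromProvider = await provider.getDefaultContextTokens(modelPath);
  if (fromProvider) return fromProvider;

  // 2. Fall back to the cached catalog, refreshing it if stale.
  const fromCatalog = await catalog.getContextWindow(modelPath);
  if (fromCatalog) return fromCatalog;

  // 3. Only now apply the conservative default.
  return 2e5;
}
```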
## One-line summary

OpenClaw 2026.4.15 can correctly route `openai-codex/gpt-5.4` while still incorrectly seeding or displaying its context budget as `200k` or `272k` after reboot, because session seeding, status display, and the app catalog resolve context from inconsistent paths.