Bug type
Behavior bug (incorrect model metadata / premature compaction risk)
Summary
On an installed OpenClaw 2026.3.24, `openai-codex/gpt-5.4` still resolves locally with a roughly 266k/272k context window in `openclaw models list`, even though upstream work on GPT-5.4 large-context support has already landed.
In practice, this can put long sessions under unnecessarily early local compaction pressure.
A supported config override under `models.providers.openai-codex.models[]` fixes it immediately on this machine and restores a ~1M-class context window at runtime.
Why I think this is still an active bug path
Issue #37875 and PR #37876 addressed the forward-compat fallback path, but the installed 2026.3.24 build here still showed:

```
openclaw models list -> openai-codex/gpt-5.4 ... 266k
```

After adding an explicit provider-model override in config and restarting the gateway, the same install showed:

```
openclaw models list -> openai-codex/gpt-5.4 ... 1025k
/status (live Telegram topic session) -> Context: 560k/1.1m (53%)
```

That strongly suggests there is still a runtime/catalog resolution path in which the installed build inherits stale Codex template limits unless the provider model is explicitly overridden.
Environment
- OpenClaw: 2026.3.24
- Model: `openai-codex/gpt-5.4`
- Auth: Codex OAuth (`openai-codex:default`)
- Channel: Telegram forum topic session
- Gateway mode: local macOS launch agent
Repro
- Install/update OpenClaw to 2026.3.24
- Set the default model to `openai-codex/gpt-5.4`
- Run:
  ```
  openclaw models list | rg 'openai-codex/gpt-5\.4|gpt-5\.4'
  ```
- Observe that the context window is still reported as roughly 266k
Observed before workaround
Example local output:
```
openai-codex/gpt-5.4 text 266k no yes default,configured
```
This was operationally significant because a long Telegram topic session kept getting pushed toward compaction much earlier than expected.
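The operational cost is easy to quantify. A minimal Python sketch, assuming a hypothetical compaction trigger at 80% of the context window (OpenClaw's real threshold may differ):

```python
def compaction_trigger(context_window: int, threshold: float = 0.8) -> int:
    """Token count at which compaction would kick in (threshold is assumed)."""
    return int(context_window * threshold)

# Stale Codex template limit vs. the overridden contextWindow.
stale = compaction_trigger(266_000)
fixed = compaction_trigger(1_050_000)

print(f"stale window compacts near {stale:,} tokens")   # ~213k
print(f"fixed window compacts near {fixed:,} tokens")   # ~840k
```

Under that assumption, the observed 560k-token session would not even fit in the stale 266k window and would be compacted repeatedly, while it sits well below the trigger on the corrected window.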
Supported config workaround that fixed it here
This config block was added:
```json5
{
  models: {
    mode: "merge",
    providers: {
      "openai-codex": {
        baseUrl: "https://chatgpt.com/backend-api",
        api: "openai-codex-responses",
        models: [
          {
            id: "gpt-5.4",
            name: "gpt-5.4",
            contextWindow: 1050000,
            maxTokens: 128000,
          },
        ],
      },
    },
  },
}
```
After gateway restart:
```
openai-codex/gpt-5.4 text 1025k no yes default,configured
```
And in the live session `/status`:

```
🧠 Model: openai-codex/gpt-5.4 · 🔑 oauth (openai-codex:default)
🧮 Tokens: 560k in / 381 out
📚 Context: 560k/1.1m (53%) · 🧹 Compactions: 1
```
Expected behavior
Installed OpenClaw should resolve `openai-codex/gpt-5.4` with the correct large context window by default, without requiring an explicit `models.providers.openai-codex.models[]` override.
Actual behavior
Installed OpenClaw 2026.3.24 still reports, and appears to use, a ~272k-class local window unless it is explicitly overridden in config.
Impact
- Premature local compaction
- Unnecessary compaction timeout exposure
- Misleading model/status output
- Avoidable risk for long-lived topic/thread sessions
Suggestion
Please check whether the #37876 fix is present only in one forward-compat/catalog path while installed/runtime resolution still falls back to stale Codex metadata somewhere else.
At minimum, the config workaround above is effective and could be documented as an interim fix if there is still another unresolved path.