Summary
openai-codex/gpt-5.4 was added in #36590, but the current forward-compat path still resolves it with the legacy Codex context window.
On current main, the openai-codex/gpt-5.4 fallback clones gpt-5.3-codex / gpt-5.2-codex templates without overriding contextWindow, so OpenClaw continues to treat it as a ~272k model locally.
Repro
- On current main, openclaw models list shows openai-codex/gpt-5.4 at roughly 266k context.
- src/agents/model-compat.test.ts currently expects resolveForwardCompatModel("openai-codex", "gpt-5.4", ...) to return contextWindow = 272_000.
- In src/agents/model-forward-compat.ts, the openai-codex/gpt-5.4 forward-compat branch inherits template metadata instead of patching the GPT-5.4 window explicitly.
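The inheritance problem the repro points at can be illustrated with a minimal sketch. The names here (ModelInfo, gpt53CodexTemplate, fallbackForGpt54) are hypothetical stand-ins, not OpenClaw's actual code:

```typescript
// Illustrative sketch of a forward-compat fallback that clones the legacy
// Codex template without overriding contextWindow, reproducing the bug.

interface ModelInfo {
  contextWindow: number;
  maxTokens: number;
}

// Stand-in for the cloned gpt-5.3-codex / gpt-5.2-codex template metadata.
const gpt53CodexTemplate: ModelInfo = {
  contextWindow: 272_000,
  maxTokens: 128_000,
};

// Hypothetical forward-compat branch: copies the template as-is, so the
// stale contextWindow is inherited rather than patched for GPT-5.4.
function fallbackForGpt54(template: ModelInfo): ModelInfo {
  return { ...template };
}

console.log(fallbackForGpt54(gpt53CodexTemplate).contextWindow); // 272000
```

This matches the test expectation on main, which pins the resolved contextWindow to 272_000.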
Impact
OpenClaw's local context accounting, model list output, and compaction decisions still use the old Codex window for openai-codex/gpt-5.4.
That can trigger unnecessary local compaction / retry behavior long before the backend request is actually at the GPT-5.4 limit.
Expected
openai-codex/gpt-5.4 should resolve with the GPT-5.4 window and max output budget instead of inheriting stale template values.
Proposed fix
When building the openai-codex/gpt-5.4 forward-compat model, explicitly override:
- contextWindow = 1_050_000
- maxTokens = 128_000

and update the existing forward-compat/list tests accordingly.
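The override could look roughly like the sketch below. The types and function names (ModelInfo, codexTemplate, buildGpt54ForwardCompat) are assumptions for illustration, not the real API in src/agents/model-forward-compat.ts:

```typescript
// Hypothetical sketch of the proposed fix: clone the Codex template but
// explicitly patch the GPT-5.4 window and output budget.

interface ModelInfo {
  id: string;
  contextWindow: number;
  maxTokens: number;
}

// Stand-in for the existing gpt-5.3-codex / gpt-5.2-codex template.
const codexTemplate: ModelInfo = {
  id: "gpt-5.3-codex",
  contextWindow: 272_000,
  maxTokens: 128_000,
};

function buildGpt54ForwardCompat(template: ModelInfo): ModelInfo {
  return {
    ...template,
    id: "gpt-5.4",
    // Explicit overrides so stale template values are never inherited:
    contextWindow: 1_050_000,
    maxTokens: 128_000,
  };
}

console.log(buildGpt54ForwardCompat(codexTemplate).contextWindow); // 1050000
```

The existing forward-compat/list tests would then assert contextWindow = 1_050_000 instead of 272_000.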