After updating to the current OpenClaw release, openclaw models list still shows openai-codex/gpt-5.4 with a ~272k (272000-token) context window, and the runtime context-engineering path also uses 272000 for pruning, compaction, and token budgeting.
PR #37876 added a forward-compat patch for GPT-5.4 with contextWindow = 1_050_000, but that patch is not reached when the model already exists in the Pi SDK catalog.
Environment
OpenClaw: 2026.3.8
@mariozechner/pi-ai: 0.57.1
@mariozechner/pi-coding-agent: 0.57.1
Observed behavior
openclaw models list --json reports:
openai-codex/gpt-5.4.contextWindow = 272000
Runtime context engineering also uses 272000 for GPT-5.4:
context pruning
compaction thresholds
token budgeting / max-context calculations
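To make the practical effect of the stale value concrete, here is an illustrative sketch; the 0.8 compaction ratio and all identifiers are assumptions for illustration, not OpenClaw's actual implementation:

```typescript
interface ModelInfo {
  contextWindow: number;
}

// A typical compaction guard: trigger when usage crosses a fixed
// fraction of the model's advertised context window. The ratio here
// is an assumed placeholder, not OpenClaw's real threshold.
function compactionThreshold(model: ModelInfo, ratio = 0.8): number {
  return Math.floor(model.contextWindow * ratio);
}

const stale: ModelInfo = { contextWindow: 272_000 };     // Pi catalog value
const patched: ModelInfo = { contextWindow: 1_050_000 }; // PR #37876 value

console.log(compactionThreshold(stale));   // 217600
console.log(compactionThreshold(patched)); // 840000
```

Every subsystem that keys off model.contextWindow inherits the same ~4x underestimate, which is why pruning and compaction fire far earlier than the model requires.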
Expected behavior
openai-codex/gpt-5.4 should use the larger GPT-5.4 context window introduced by PR #37876 (1_050_000) consistently for:
catalog display
runtime resolution
context pruning / compaction / budgeting
Root cause
There are currently two conflicting sources of truth:
Pi static catalog
From the Pi packages published from the badlogic/pi-mono repo, the built model registry (@mariozechner/pi-ai, dist/models.generated.js) still defines GPT-5.4 with a stale value: contextWindow: 272000.
OpenClaw forward-compat patch
PR #37876 ("fix(models): use 1M context for openai-codex gpt-5.4") added contextWindow = 1_050_000 in OpenClaw's model-forward-compat.ts.
The problem is resolution order: the Pi catalog is consulted first, and because gpt-5.4 already exists there, resolution succeeds with the stale 272000 entry. The forward-compat patch is only reached when a model is missing from the catalog. So PR #37876 fixed the forward-compat path, but not the common path taken once the model is already present in the Pi catalog.
Concrete evidence
Stale catalog value from the Pi package: /opt/homebrew/lib/node_modules/openclaw/node_modules/@mariozechner/pi-ai/dist/models.generated.js defines openai-codex -> gpt-5.4 -> contextWindow: 272000.
Forward-compat patched value exists in OpenClaw: 1050000 (the GPT-5.4 patch from PR #37876).
But runtime still resolves the Pi catalog entry first, so subsystems that read model.contextWindow use 272000.
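The resolution-order problem can be sketched as follows; all identifiers are hypothetical stand-ins for the Pi catalog lookup and OpenClaw's forward-compat layer, not the real code:

```typescript
interface ModelInfo {
  contextWindow: number;
}

// Stale entry shipped in @mariozechner/pi-ai dist/models.generated.js.
const piCatalog = new Map<string, ModelInfo>([
  ["openai-codex/gpt-5.4", { contextWindow: 272_000 }],
]);

// Value added by PR #37876 in model-forward-compat.ts.
const forwardCompatPatches = new Map<string, ModelInfo>([
  ["openai-codex/gpt-5.4", { contextWindow: 1_050_000 }],
]);

function resolveModel(id: string): ModelInfo | undefined {
  // Catalog wins: because gpt-5.4 already exists there, the
  // forward-compat fallback below is never consulted.
  const fromCatalog = piCatalog.get(id);
  if (fromCatalog) return fromCatalog;
  return forwardCompatPatches.get(id);
}

console.log(resolveModel("openai-codex/gpt-5.4")?.contextWindow); // 272000
```

Under this ordering the patch only ever fires for models the catalog does not know about, which matches the observed behavior.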
Impact
This is not just a cosmetic models list issue; the stale value affects actual runtime behavior:
pruning earlier than necessary
compaction thresholds based on 272k instead of ~1M
reduced effective context utilization for GPT-5.4
Suggested fixes
Any of these would solve it:
Preferred: update the Pi model registry in badlogic/pi-mono so the generated catalog used by @mariozechner/pi-ai defines openai-codex/gpt-5.4 with the correct contextWindow
Make OpenClaw’s forward-compat patch able to override stale catalog entries, not just synthesize missing models
Add a catalog-level post-load patch in OpenClaw so known stale built-in entries get corrected before both listing and runtime use
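The third option (a catalog-level post-load patch) could look roughly like the sketch below; the names and shapes are assumptions, not OpenClaw's actual API:

```typescript
interface ModelInfo {
  contextWindow: number;
}

// Known-stale built-in entries and their corrected fields
// (the GPT-5.4 value comes from PR #37876).
const catalogOverrides: Record<string, Partial<ModelInfo>> = {
  "openai-codex/gpt-5.4": { contextWindow: 1_050_000 },
};

// Run once, immediately after the Pi catalog is loaded, so both
// `models list` and runtime resolution see the corrected values.
function applyCatalogOverrides(catalog: Map<string, ModelInfo>): void {
  for (const [id, patch] of Object.entries(catalogOverrides)) {
    const entry = catalog.get(id);
    if (entry) Object.assign(entry, patch); // correct stale fields in place
  }
}

// Usage: a catalog still carrying the stale value gets corrected.
const catalog = new Map<string, ModelInfo>([
  ["openai-codex/gpt-5.4", { contextWindow: 272_000 }],
]);
applyCatalogOverrides(catalog);
console.log(catalog.get("openai-codex/gpt-5.4")?.contextWindow); // 1050000
```

Because the override mutates the loaded catalog entry itself, every downstream reader of model.contextWindow is fixed at once, without touching resolution order.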
Minimal repro
Install current OpenClaw release
Configure primary model as openai-codex/gpt-5.4
Run:
openclaw models list --json
Observe contextWindow: 272000
Trace runtime model resolution / context-window guard and see that 272000 is also used for pruning/compaction/token budgeting
Notes
PR #37876 appears correct, but it only fixes the OpenClaw forward-compat layer. Once GPT-5.4 exists in the Pi catalog with stale metadata, that layer is bypassed.