
openai-codex/gpt-5.4 still uses 272000 context for catalog + runtime context engineering despite PR #37876 #42225

@luizlf


Summary

After updating to the current OpenClaw release, openclaw models list still shows openai-codex/gpt-5.4 with a 272000-token (~272k) context window, and the runtime context-engineering path also uses 272000 for pruning, compaction, and token budgeting.

PR #37876 added a forward-compat patch for GPT-5.4 with contextWindow = 1_050_000, but that patch is not reached when the model already exists in the Pi SDK catalog.

Environment

  • OpenClaw: 2026.3.8
  • @mariozechner/pi-ai: 0.57.1
  • @mariozechner/pi-coding-agent: 0.57.1

Observed behavior

  • openclaw models list --json reports:
    • openai-codex/gpt-5.4.contextWindow = 272000
  • Runtime context engineering also uses 272000 for GPT-5.4:
    • context pruning
    • compaction thresholds
    • token budgeting / max-context calculations

Expected behavior

openai-codex/gpt-5.4 should use the larger GPT-5.4 context window introduced by PR #37876 (1_050_000) consistently for:

  • catalog display
  • runtime resolution
  • context pruning / compaction / budgeting

Root cause

There are currently two conflicting sources of truth:

  1. Pi static catalog
    In the Pi packages published from the badlogic/pi-mono repo, the built model registry still defines GPT-5.4 with a stale value:

    • package: @mariozechner/pi-ai
    • file: dist/models.generated.js
    • entry:
    contextWindow: 272000
  2. OpenClaw forward-compat patch
    PR #37876 (fix(models): use 1M context for openai-codex gpt-5.4) added:

    OPENAI_CODEX_GPT_54_CONTEXT_TOKENS = 1_050_000

    in OpenClaw’s model-forward-compat.ts.
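The two conflicting values can be sketched as data. This is illustrative only: the interface and variable names below are assumptions, not the actual Pi or OpenClaw internals; the two numbers come from the files cited above.

```typescript
// Hypothetical shape for a catalog entry; names are illustrative.
interface ModelEntry {
  id: string;
  contextWindow: number;
}

// 1. Stale entry as shipped in @mariozechner/pi-ai dist/models.generated.js
const piCatalog: Record<string, ModelEntry> = {
  "openai-codex/gpt-5.4": {
    id: "openai-codex/gpt-5.4",
    contextWindow: 272_000, // stale
  },
};

// 2. Forward-compat value added to OpenClaw's model-forward-compat.ts by PR #37876
const OPENAI_CODEX_GPT_54_CONTEXT_TOKENS = 1_050_000;
```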

The problem is resolution order:

  • OpenClaw first resolves from the built-in model registry loaded from the Pi packages
  • since gpt-5.4 already exists there, resolution succeeds with the stale 272000 entry
  • the forward-compat path is never reached
  • runtime context-window consumers therefore inherit 272000

So PR #37876 fixed the forward-compat path, but not the common path used once the model is already present in the Pi catalog.
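The resolution order described above can be sketched as follows. The function and variable names are assumptions for illustration, not OpenClaw's actual code, but the control flow mirrors the reported behavior: the Pi catalog wins whenever the model exists there, so the forward-compat branch is dead code for GPT-5.4.

```typescript
// Illustrative sketch of the buggy resolution order; names are assumptions.
const piCatalog: Record<string, { contextWindow: number }> = {
  "openai-codex/gpt-5.4": { contextWindow: 272_000 }, // stale Pi entry
};

const OPENAI_CODEX_GPT_54_CONTEXT_TOKENS = 1_050_000; // PR #37876

function resolveContextWindow(modelId: string): number {
  // 1. The built-in registry loaded from the Pi packages is consulted first,
  //    so an existing (stale) entry short-circuits resolution...
  const entry = piCatalog[modelId];
  if (entry) return entry.contextWindow;
  // 2. ...and the forward-compat patch is only reached for missing models.
  if (modelId === "openai-codex/gpt-5.4") {
    return OPENAI_CODEX_GPT_54_CONTEXT_TOKENS;
  }
  throw new Error(`unknown model: ${modelId}`);
}
```

Since gpt-5.4 exists in the catalog, `resolveContextWindow("openai-codex/gpt-5.4")` yields the stale 272_000, never 1_050_000.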

Concrete evidence

Stale catalog value from Pi package:

  • /opt/homebrew/lib/node_modules/openclaw/node_modules/@mariozechner/pi-ai/dist/models.generated.js
  • openai-codex -> gpt-5.4 -> contextWindow: 272000

The forward-compat patched value (OPENAI_CODEX_GPT_54_CONTEXT_TOKENS = 1_050_000) exists in OpenClaw’s model-forward-compat.ts, but runtime still resolves the Pi catalog entry first, so subsystems that read model.contextWindow use 272000.

Impact

This is not just a cosmetic models list issue. The stale value affects actual runtime behavior:

  • pruning earlier than necessary
  • compaction thresholds based on 272k instead of ~1M
  • reduced effective context utilization for GPT-5.4
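To make the runtime impact concrete, here is a rough calculation. The 80% compaction trigger is an assumed fraction for illustration (the actual threshold logic is not shown in this report); only the two window sizes come from the issue.

```typescript
// Assumed compaction trigger fraction; illustrative, not OpenClaw's actual value.
const COMPACTION_FRACTION = 0.8;

const staleWindow = 272_000;    // value inherited from the Pi catalog
const correctWindow = 1_050_000; // value from PR #37876

const staleThreshold = staleWindow * COMPACTION_FRACTION;     // 217_600 tokens
const correctThreshold = correctWindow * COMPACTION_FRACTION; // 840_000 tokens

// Under this assumption, compaction fires ~622k tokens earlier than it should.
const lostHeadroom = correctThreshold - staleThreshold; // 622_400 tokens
```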

Suggested fixes

Any of these would solve it:

  1. Preferred: update the Pi model registry in badlogic/pi-mono so the generated catalog used by @mariozechner/pi-ai defines openai-codex/gpt-5.4 with the correct contextWindow
  2. Make OpenClaw’s forward-compat patch able to override stale catalog entries, not just synthesize missing models
  3. Add a catalog-level post-load patch in OpenClaw so known stale built-in entries get corrected before both listing and runtime use
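Options 2 and 3 could look like the sketch below: a post-load patch pass that runs after the Pi catalog is loaded and before both listing and runtime resolution, overriding known stale entries instead of only synthesizing missing ones. All names here are hypothetical.

```typescript
// Sketch of a catalog-level post-load patch (suggested fixes 2/3); illustrative only.
interface ModelEntry {
  contextWindow: number;
}

// Known stale built-in entries and their corrections.
const OVERRIDES: Record<string, Partial<ModelEntry>> = {
  "openai-codex/gpt-5.4": { contextWindow: 1_050_000 }, // PR #37876 value
};

function patchCatalog(
  catalog: Record<string, ModelEntry>,
): Record<string, ModelEntry> {
  const patched = { ...catalog };
  for (const [id, fix] of Object.entries(OVERRIDES)) {
    // Unlike the current forward-compat path, this also overrides entries
    // that already exist, not just missing models.
    if (patched[id]) patched[id] = { ...patched[id], ...fix };
  }
  return patched;
}

// As loaded from @mariozechner/pi-ai:
const loaded: Record<string, ModelEntry> = {
  "openai-codex/gpt-5.4": { contextWindow: 272_000 },
};
const fixed = patchCatalog(loaded);
```

Because the patch runs before any consumer reads model.contextWindow, the catalog display and the pruning/compaction/budgeting paths would all see the same corrected value.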

Minimal repro

  1. Install current OpenClaw release
  2. Configure primary model as openai-codex/gpt-5.4
  3. Run:
    openclaw models list --json
  4. Observe contextWindow: 272000
  5. Trace runtime model resolution / context-window guard and see that 272000 is also used for pruning/compaction/token budgeting

Notes

PR #37876 appears correct, but it only fixes the OpenClaw forward-compat layer. Once GPT-5.4 exists in the Pi catalog with stale metadata, that layer is bypassed.
