
Agents: prefer runtime-resolved metadata for explicit codex gpt-5.4#62694

Merged
obviyus merged 6 commits into openclaw:main from ruclaw7:fix/codex-gpt54-resolution
Apr 8, 2026
Conversation

@ruclaw7
Contributor

@ruclaw7 ruclaw7 commented Apr 7, 2026

Summary

Fixes openai-codex/gpt-5.4 resolution so an explicit model selection can still prefer provider runtime metadata when the provider reports a larger context window than the static registry entry.

This PR now:

  • lets the openai-codex plugin opt into runtime-preferred resolution via a provider-owned preferRuntimeResolvedModel hook instead of a hardcoded core special case
  • compares the explicit registry model with the provider runtime-resolved model only for providers that opt in, so normal explicit-model resolution still short-circuits for everyone else
  • preserves the correct runtime hook inputs by passing the configured workspace directory as workspaceDir while keeping agentDir available separately in context
  • adds focused regression coverage for the gpt-5.4 runtime-preference path
  • includes the follow-up review fixes that repaired the runtime hook plumbing and workspace handling
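The opt-in comparison the bullets above describe can be sketched roughly as follows. This is a hypothetical TypeScript illustration, not the actual openclaw implementation; every type and helper name except `preferRuntimeResolvedModel` is invented for the example.

```typescript
// Minimal sketch of a provider-owned runtime-preference hook.
// Only `preferRuntimeResolvedModel` comes from the PR description;
// all other names are illustrative.

interface ResolvedModel {
  id: string;
  contextWindow: number;
}

interface ProviderPlugin {
  // A provider opts into runtime-preferred resolution by implementing
  // this hook; core never names specific providers or models.
  preferRuntimeResolvedModel?: (registryModel: ResolvedModel) => boolean;
}

function pickModel(
  provider: ProviderPlugin,
  registryModel: ResolvedModel,
  runtimeModel: ResolvedModel | undefined,
): ResolvedModel {
  // Explicit-model resolution short-circuits to the registry entry
  // unless the provider opted in and runtime metadata is available.
  if (!runtimeModel || !provider.preferRuntimeResolvedModel?.(registryModel)) {
    return registryModel;
  }
  // Prefer runtime metadata only when it reports a larger context window.
  return runtimeModel.contextWindow > registryModel.contextWindow
    ? runtimeModel
    : registryModel;
}

// Example: a codex-style provider that opts in.
const codexProvider: ProviderPlugin = {
  preferRuntimeResolvedModel: () => true,
};

const registry = { id: "openai-codex/gpt-5.4", contextWindow: 266_000 };
const runtime = { id: "openai-codex/gpt-5.4", contextWindow: 400_000 };

console.log(pickModel(codexProvider, registry, runtime).contextWindow); // 400000
console.log(pickModel({}, registry, runtime).contextWindow); // 266000 (no opt-in)
```

Because the opt-in lives on the provider object, providers that do not implement the hook keep the existing short-circuit behavior with no extra work in core.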

Testing

  • pnpm exec vitest run src/agents/pi-embedded-runner/model.test.ts
  • pnpm build

Fixes #55461

Thanks to Rudi Cilibrasi and Metagood.com for building and funding this fix.

@openclaw-barnacle openclaw-barnacle Bot added the agents (Agent runtime and tooling) and size: S labels Apr 7, 2026
@greptile-apps
Contributor

greptile-apps Bot commented Apr 7, 2026

Greptile Summary

This PR makes resolveModelWithRegistry and resolveModelAsync prefer the plugin-dynamic openai-codex/gpt-5.4 model over the registry-resolved one when it reports a larger context window, and adds focused sync and async regression tests for the short-circuit path. All findings are P2.

Confidence Score: 5/5

Safe to merge — both findings are non-blocking style suggestions with no present runtime defect.

Both findings are P2: one is an architectural style concern about hardcoded provider/model names in core, and the other is a minor performance note about an unconditional hook call whose result is discarded for most callers. Neither affects correctness.

src/agents/pi-embedded-runner/model.ts — the unconditional plugin-dynamic-model resolution (line 570) and the hardcoded provider/model check (lines 529–534).

Prompt To Fix All With AI
This is a comment left during a code review.
Path: src/agents/pi-embedded-runner/model.ts
Line: 529-534

Comment:
**Hardcoded provider/model special case in core**

This function names `openai-codex` and `gpt-5.4` directly in core model resolution. The project's architecture rules prohibit hardcoded provider/model special cases in core when a manifest, capability, or plugin-owned contract can express the same behavior. A manifest-level flag such as `preferRuntimeContextWindow: true` or a plugin-owned hook would let the openai-codex plugin own this preference rather than requiring a named core carve-out for each model variant.

**Context Used:** CLAUDE.md ([source](https://app.greptile.com/review/custom-context?memory=fd949e91-5c3a-4ab5-90a1-cbe184fd6ce8))

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: src/agents/pi-embedded-runner/model.ts
Line: 570-573

Comment:
**Plugin dynamic hook fires for every explicitly-resolved model**

`resolvePluginDynamicModelWithRegistry` is called unconditionally before the `if (explicitModel?.kind === "resolved")` guard at line 571. For every provider whose model has an explicit registry entry but is not `openai-codex/gpt-5.4`, the hook fires and its result is immediately discarded. Guarding the call avoids the wasted `runProviderDynamicModel` invocation for all other providers.

```suggestion
  const pluginDynamicModel = shouldCompareOpenAICodexRuntimeResolvedModel(normalizedParams)
    ? resolvePluginDynamicModelWithRegistry(normalizedParams)
    : explicitModel?.kind === "resolved"
      ? undefined
      : resolvePluginDynamicModelWithRegistry(normalizedParams);
```

How can I resolve this? If you propose a fix, please make it concise.

Reviews (1): Last reviewed commit: "Agents: prefer runtime codex gpt-5.4 met..."

Comment thread src/agents/pi-embedded-runner/model.ts Outdated
Comment thread src/agents/pi-embedded-runner/model.ts Outdated

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 2084593780

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread src/agents/pi-embedded-runner/model.ts Outdated

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 794570ffb9


Comment thread src/agents/pi-embedded-runner/model.ts

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 49ef268816


Comment thread src/agents/pi-embedded-runner/model.ts Outdated
@ruclaw7
Contributor Author

ruclaw7 commented Apr 8, 2026

Follow-up pass complete: I addressed the later Codex review note in 467fc20a28, resolved the remaining review thread, and pushed the update to the PR branch.

@ruclaw7 ruclaw7 changed the title from "Agents: prefer runtime metadata for codex gpt-5.4" to "Agents: prefer runtime-resolved metadata for explicit codex gpt-5.4" Apr 8, 2026
@obviyus obviyus self-assigned this Apr 8, 2026
@obviyus obviyus force-pushed the fix/codex-gpt54-resolution branch from e44f4e2 to 78dfce3 on April 8, 2026 at 02:44
Contributor

@obviyus obviyus left a comment


Reviewed latest changes; landing now.

@obviyus obviyus merged commit 81969c7 into openclaw:main Apr 8, 2026
8 checks passed
@obviyus
Contributor

obviyus commented Apr 8, 2026

Landed on main.

Thanks @ruclaw7.


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 78dfce34d7


Code context (src/agents/pi-embedded-runner/model.ts):

```ts
  ) {
    return explicitModel.model;
  }
  const pluginDynamicModel = resolvePluginDynamicModelWithRegistry(normalizedParams);
```


P2: Pass workspaceDir into runtime model lookup

The new explicit-model runtime-comparison path calls resolvePluginDynamicModelWithRegistry(...) without propagating the workspace that was just used by shouldCompareProviderRuntimeResolvedModel(...). In this flow, shouldPreferProviderRuntimeResolvedModel is evaluated against cfg.agents.defaults.workspace, but when workspaceDir is omitted, runProviderDynamicModel still resolves provider plugins from the active global registry workspace. A workspace-installed provider can therefore opt in to comparison and then have dynamic resolution run in a different workspace, or not run at all. This mismatch is newly reachable from the explicit branch in resolveModelWithRegistry, so explicit model comparisons can silently fall back to stale metadata in multi-workspace setups.

Useful? React with 👍 / 👎.
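The fix the reviewer asks for amounts to threading the same workspace through both the opt-in check and the dynamic lookup. A hypothetical TypeScript sketch, with all names invented for illustration (they are not the actual openclaw API):

```typescript
// Illustrative sketch of the workspaceDir mismatch described above.
// All names here are hypothetical stand-ins, not openclaw's real API.

interface RuntimeLookupParams {
  provider: string;
  model: string;
  workspaceDir?: string; // omitted => falls back to the global registry workspace
}

// Stand-in for the real lookup: reports which workspace the provider
// plugin would be resolved from.
function runProviderDynamicModel(params: RuntimeLookupParams): string {
  return params.workspaceDir ?? "/global/registry/workspace";
}

// Propagate the workspace the opt-in check was evaluated against,
// instead of letting the lookup default to the global workspace.
function resolvePluginDynamicModel(
  params: RuntimeLookupParams,
  configuredWorkspaceDir: string,
): string {
  return runProviderDynamicModel({
    ...params,
    workspaceDir: params.workspaceDir ?? configuredWorkspaceDir,
  });
}

console.log(
  resolvePluginDynamicModel(
    { provider: "openai-codex", model: "gpt-5.4" },
    "/home/user/project",
  ),
); // "/home/user/project"
```

Without the explicit pass-through, the call would resolve against "/global/registry/workspace" even though the opt-in decision was made for the configured workspace, which is the silent-stale-metadata scenario the comment describes.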

eleqtrizit pushed a commit that referenced this pull request Apr 8, 2026
* Agents: prefer runtime codex gpt-5.4 metadata

* Agents: move codex gpt-5.4 override into provider hook

* fix: repair codex runtime preference hooks

* fix: use workspace dir for codex runtime preference

* test: cover codex workspace dir hook

* fix: prefer codex gpt-5.4 runtime metadata (#62694) (thanks @ruclaw7)

---------

Co-authored-by: Rudi Cilibrasi <cilibrar@gmail.com>
Co-authored-by: Rudi Cilibrasi <rudi@metagood.com>
Co-authored-by: Ayaan Zaidi <hi@obviy.us>
lovewanwan pushed a commit to lovewanwan/openclaw that referenced this pull request Apr 28, 2026
ogt-redknie pushed a commit to ogt-redknie/OPENX that referenced this pull request May 2, 2026
github-actions Bot pushed a commit to Desicool/openclaw that referenced this pull request May 9, 2026


Development

Successfully merging this pull request may close these issues.

Installed 2026.3.24 still resolves openai-codex/gpt-5.4 to ~266k unless models.providers override is added

3 participants