Users who manually configured models.providers.openai-codex before the recent Codex OAuth fixes can remain broken even after upgrading to a version that includes those fixes.
In our case, Codex OAuth continued failing until we manually removed the legacy openai-codex provider override from openclaw.json.
Once that override was removed, openai-codex/gpt-5.4 worked immediately.
This appears to be a stale manual-config shadowing problem, not a fresh OAuth bug.
Why this matters
Many users likely set up Codex manually weeks ago, during the period when people were experimenting with:
ChatGPT/Codex OAuth
custom models.providers.openai-codex
explicit baseUrl
explicit api
manually defined gpt-5.3-codex / related model entries
Those manual overrides can survive upgrades and silently block the newer built-in Codex OAuth provider behavior.
So the upstream fixes may be present, but affected users still stay broken.
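For illustration, a stale override from that era might look like the following. The field names are reconstructed from the settings listed above, and the values are placeholders, not the real endpoint or schema:

```json
{
  "models": {
    "providers": {
      "openai-codex": {
        "baseUrl": "https://example.invalid/codex",
        "api": "responses",
        "models": ["gpt-5.3-codex"]
      }
    }
  }
}
```

Any block of this shape under models.providers.openai-codex is enough to shadow the built-in provider, regardless of the exact values.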
Environment
macOS
local gateway
openai-codex:default auth profile with mode: "oauth"
primary model set to openai-codex/gpt-5.4
upgraded from an older manual-Codex-config era setup
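A minimal sketch of the relevant settings, based on the auth profile and model named above (the exact key layout is an assumption, not the verified schema):

```json
{
  "auth": {
    "profiles": {
      "openai-codex:default": { "mode": "oauth" }
    }
  },
  "model": "openai-codex/gpt-5.4"
}
```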
Symptoms
Before removing the override:
Codex appeared configured correctly
requests failed with 401s or scope-related errors
model often fell through to fallback providers
behavior looked like “Codex OAuth still broken”
After removing the override:
same auth profile
same account
same target model
openai-codex/gpt-5.4 worked immediately
Root cause
We had a legacy manual config block under:
models.providers.openai-codex
That override was still forcing the old explicit provider shape, instead of letting the built-in Codex OAuth provider synthesize the correct behavior after the recent fixes.
Repro sketch
On an older OpenClaw version, manually configure models.providers.openai-codex
Add explicit base URL / API / model entries for Codex
Upgrade to a version that includes the recent Codex OAuth fixes
Attempt to use openai-codex/gpt-5.4 and observe it still failing
Remove the manual models.providers.openai-codex override
Retry
Observe success
Expected behavior
OpenClaw should detect that a legacy manual openai-codex provider override may be shadowing the built-in OAuth provider and do one of:
Warn loudly
“Legacy manual models.providers.openai-codex override detected. This may break built-in Codex OAuth behavior after recent fixes.”
Offer an automatic migration
remove or rewrite stale openai-codex provider config to the supported modern form
Teach openclaw doctor to catch this
detect manual models.providers.openai-codex
check whether an OAuth profile exists for openai-codex
warn if the manual override is likely shadowing the built-in provider
suggest or optionally apply a fix
Strong suggestion: update openclaw doctor
This feels like exactly the kind of thing doctor should catch.
Suggested doctor check:
if auth.profiles contains an OAuth profile for openai-codex
and config also contains a manual models.providers.openai-codex
then warn that the manual override may be stale and may block the built-in Codex OAuth provider
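The check above could be sketched roughly as follows. This is a hypothetical implementation in Python; the actual language of openclaw doctor and the config schema (auth.profiles, models.providers) are assumptions based on this report:

```python
def codex_shadowing_warnings(config: dict) -> list[str]:
    """Warn when a manual openai-codex provider override coexists with an
    openai-codex OAuth auth profile, i.e. likely stale legacy config."""
    warnings = []
    profiles = config.get("auth", {}).get("profiles", {})
    # An OAuth profile for openai-codex, e.g. "openai-codex:default".
    has_oauth_profile = any(
        name.startswith("openai-codex") and profile.get("mode") == "oauth"
        for name, profile in profiles.items()
    )
    # A manual provider override under models.providers.openai-codex.
    has_manual_override = "openai-codex" in config.get("models", {}).get("providers", {})
    if has_oauth_profile and has_manual_override:
        warnings.append(
            "Detected manual models.providers.openai-codex override alongside "
            "a Codex OAuth auth profile; it may shadow the built-in provider."
        )
    return warnings
```

The check is deliberately conservative: it only warns when both conditions hold, so users who intentionally run a manual provider without OAuth are not flagged.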
Suggested output:
Detected manual models.providers.openai-codex override alongside Codex OAuth auth profile.
This may be a legacy configuration that shadows the built-in Codex OAuth provider.
Recent Codex OAuth fixes may not apply until this override is removed or migrated.
Even better: doctor --fix could offer to back up config and remove the stale override automatically.
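A doctor --fix pass might back up the file and strip the override along these lines. Again a sketch only: the config path, backup naming, and schema are assumptions:

```python
import json
import shutil
from pathlib import Path

def remove_stale_codex_override(config_path: Path) -> bool:
    """Back up the config file, then delete any manual
    models.providers.openai-codex override. Returns True if changed."""
    config = json.loads(config_path.read_text())
    providers = config.get("models", {}).get("providers", {})
    if "openai-codex" not in providers:
        return False  # nothing to fix
    # Keep a backup next to the original before mutating it.
    shutil.copy2(config_path, config_path.with_suffix(".json.bak"))
    del providers["openai-codex"]
    config_path.write_text(json.dumps(config, indent=2))
    return True
```

Writing the backup before touching the original keeps the operation safe to offer interactively.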
Related fixes / context
This issue looks like a downstream, legacy-config follow-up to the recent Codex OAuth fixes.
Those fixes appear to solve the runtime/provider logic, but users with older manual Codex config can still be stuck because their config shadows the repaired built-in path.
Workaround
Remove the manual models.providers.openai-codex override from config and restart OpenClaw.
In our case, that was the key step that made openai-codex/gpt-5.4 start working.