Bug type
Functional gap (feature works partially but in a misleading way)
Beta release blocker
No
Summary
After completing openclaw models auth login --provider openai-codex successfully, the new OAuth profile is written to auth-profiles.json under profiles, but the per-agent order field is never populated. As a result, every subsequent agent call fails with FailoverError: No API key found for provider "openai-codex" and the runtime falls back to the next provider in the chain. The user has no way to know this is wrong because openclaw models list reports Auth: yes for the codex model.
The workaround is to manually run openclaw models auth order set --agent <agent> --provider openai-codex <profile-id>, which is undocumented in concepts/oauth.md (the docs say OAuth login requires no further configuration).
Steps to reproduce
- Start with a fresh agent that has no codex profile (auth-profiles.json has profiles: {}, order: {}).
- Run openclaw models auth login --provider openai-codex and complete the browser OAuth flow successfully.
- Verify the profile was written: openclaw models list shows openai-codex/gpt-5.4 ... yes default,configured (the Auth column is yes).
- Configure the agent's primary model to openai-codex/gpt-5.4 (this is the default after a fresh models auth login).
- Send any agent message: openclaw agent --agent main --message 'test'
- Observe in gateway logs:
[diagnostic] lane task error: error="FailoverError: No API key found for provider \"openai-codex\". Auth store: ~/.openclaw/agents/main/agent/auth-profiles.json (agentDir: ~/.openclaw/agents/main/agent). Configure auth for this agent (openclaw agents add <id>) or copy auth-profiles.json from the main agentDir."
[model-fallback] decision=candidate_failed requested=openai-codex/gpt-5.4 candidate=openai-codex/gpt-5.4 reason=auth next=deepseek/deepseek-chat
- Run openclaw models auth order get --agent main --provider openai-codex — the output is Order override: (none), confirming order is empty.
- Run the workaround: openclaw models auth order set --agent main --provider openai-codex openai-codex:<REDACTED-EMAIL>
- Re-send the agent message — codex now responds successfully without falling back, and the gateway log no longer prints No API key found.
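For anyone triaging this, the broken state the steps above produce can be detected mechanically. The sketch below is ours (the function name and default path are not OpenClaw's); it only assumes the auth-profiles.json layout shown in the evidence section, where profile ids are namespaced as "<provider>:<account>":

```python
import json
import os

def find_orphan_profiles(path="~/.openclaw/agents/main/agent/auth-profiles.json"):
    """Return providers that have an entry in `profiles` but no entry in
    `order` -- the inconsistent state left behind by `models auth login`."""
    with open(os.path.expanduser(path)) as f:
        store = json.load(f)
    profiles = store.get("profiles", {})
    order = store.get("order", {})
    # Profile ids are namespaced as "<provider>:<account>".
    providers = {pid.split(":", 1)[0] for pid in profiles}
    return sorted(p for p in providers if not order.get(p))
```

On the state captured below in the evidence section, this returns ['openai-codex'] before the workaround and [] after it.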
Expected behavior
openclaw models auth login --provider openai-codex should leave the agent in a fully working state. The documented contract in docs/concepts/oauth.md says:
- After login, credentials are stored automatically in an auth profile.
- If no profile is specified, the system uses the first available credential according to the configured order.
- A default behavior exists; configuring order is not mandatory.
This implies one of two valid behaviors:
- Option A: models auth login writes both profiles and order (and lastGood) atomically, so the new profile is immediately routable.
- Option B: When order is empty for a provider, the runtime's credential resolver falls back to "use any available profile in profiles matching the provider", consistent with the docs' statement that a default behavior exists.
Either fix would make the OAuth flow work end-to-end as documented.
Actual behavior
models auth login only writes profiles. The runtime's credential resolver (findCredentialFor("openai-codex")) returns missing_credential even though a valid profile exists in profiles. According to docs/auth-credential-semantics.md, missing_credential and excluded_by_auth_order are distinct error codes — we are hitting missing_credential, not excluded_by_auth_order, which suggests the resolver is not even consulting profiles when order["openai-codex"] is unset.
The user-visible result: every codex call falls back to the next provider in the chain (in our case deepseek/deepseek-chat), which then often fails for unrelated reasons (e.g. content moderation 400 Content Exists Risk on Chinese input), masking the root cause and making this hard to diagnose.
OpenClaw version
2026.4.5 (3e72c03)
Operating system
WSL2 Ubuntu 24.04.1 LTS on Windows 11 (host: AMD Ryzen 7 5700X)
Install method
curl install.sh
Model
openai-codex/gpt-5.4
Provider / routing chain
primary=openai-codex/gpt-5.4, fallbacks=[deepseek/deepseek-chat, minimax/MiniMax-M2.7]
Additional provider/model setup details
OAuth flow used: built-in openclaw models auth login --provider openai-codex with browser callback on localhost:1455 (standard PKCE flow). No external @openai/codex CLI involved (the system has the npm package available but it errors with Missing optional dependency @openai/codex-linux-x64 when invoked from WSL — unrelated to this bug).
The same issue reproduces on two independent WSL2 distributions (Ubuntu and Ubuntu-OpenClaw2) with two separate gateway instances on different ports, both authenticated to the same OpenAI account via the same OAuth flow.
Logs, screenshots, and evidence
# === auth-profiles.json state immediately after `models auth login` (before workaround) ===
$ python3 -c "import json; d=json.load(open('~/.openclaw/agents/main/agent/auth-profiles.json'.replace('~','/home/user'))); print('profiles:', list(d.get('profiles',{}).keys())); print('order:', d.get('order')); print('lastGood:', d.get('lastGood'))"
profiles: ['openai-codex:<REDACTED-EMAIL>']
order: {}
lastGood: {}
# === models list says Auth: yes (misleading) ===
$ openclaw models list
Model Input Ctx Local Auth Tags
openai-codex/gpt-5.4 text+image 266k no yes default,configured
deepseek/deepseek-chat text 128k no yes fallback#1
# === models auth order get says (none) ===
$ openclaw models auth order get --agent main --provider openai-codex
Agent: main
Provider: openai-codex
Auth file: ~/.openclaw/agents/main/agent/auth-profiles.json
Order override: (none)
# === gateway log when sending a query ===
[diagnostic] lane task error: lane=main durationMs=1766 error="FailoverError: No API key found for provider \"openai-codex\". Auth store: ~/.openclaw/agents/main/agent/auth-profiles.json (agentDir: ~/.openclaw/agents/main/agent). Configure auth for this agent (openclaw agents add <id>) or copy auth-profiles.json from the main agentDir."
[model-fallback] model fallback decision: decision=candidate_failed requested=openai-codex/gpt-5.4 candidate=openai-codex/gpt-5.4 reason=auth next=deepseek/deepseek-chat
# === Workaround ===
$ openclaw models auth order set --agent main --provider openai-codex openai-codex:<REDACTED-EMAIL>
Agent: main
Provider: openai-codex
Order override: openai-codex:<REDACTED-EMAIL>
# === auth-profiles.json after workaround (CLI auto-fills both order AND lastGood) ===
$ python3 -c "..."
profiles: ['openai-codex:<REDACTED-EMAIL>']
order: {'openai-codex': ['openai-codex:<REDACTED-EMAIL>']}
lastGood: {'openai-codex': 'openai-codex:<REDACTED-EMAIL>'}
# === Re-test: codex now responds successfully ===
$ openclaw agent --agent main --message 'what model id are you?'
openai-codex/gpt-5.4
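For reproducing the checks above, the python3 one-liners in the transcript can be expressed as a small helper (the function name is ours; it reads the same three top-level fields of auth-profiles.json):

```python
import json
import os

def dump_auth_state(path):
    """Equivalent of the python3 one-liners in the transcript: summarize
    the three top-level fields of auth-profiles.json."""
    with open(os.path.expanduser(path)) as f:
        store = json.load(f)
    return {
        "profiles": sorted(store.get("profiles", {})),
        "order": store.get("order", {}),
        "lastGood": store.get("lastGood", {}),
    }
```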
Impact and severity
- Affected: Any user who runs openclaw models auth login --provider openai-codex to set up Codex via ChatGPT subscription OAuth (the path most users will take, given that it's the documented one in docs/concepts/oauth.md and recommended after Anthropic's April 4 subscription policy change).
- Severity: High — codex appears to be configured but is silently never used. Every call falls through to the next provider, which:
- Wastes the user's ChatGPT Plus/Pro subscription quota (it's never consumed)
- Routes traffic to fallback providers the user did not intend to use (often paid API providers, sometimes content-filtered ones that compound the error)
- Hides the root cause behind whatever error the fallback provider produces
- Frequency: 100% reproducible. Confirmed on two independent OpenClaw instances on different WSL2 distributions.
- Discoverability: Very poor. openclaw models list reports Auth: yes, and the gateway error message ("Configure auth for this agent (openclaw agents add <id>) or copy auth-profiles.json from the main agentDir") is misleading because the failing agent IS the main agent. The only reliable diagnostic is openclaw models auth order get, which is not mentioned in the onboarding docs.
Additional information
Related issues (same family of credential-resolution bugs):
Why we think this is a real bug, not a configuration mistake:
- The docs in concepts/oauth.md explicitly say no manual order setup is needed.
- The probe reason code we're hitting is missing_credential, not excluded_by_auth_order (per auth-credential-semantics.md), which means the runtime is not consulting the profiles map at all when order is empty.
- The CLI command openclaw models auth order set exists and is the documented way to fix it — but it's nowhere in the OAuth onboarding docs, and a fresh models auth login should not need a follow-up CLI command to be functional.
- Multiple users have reported the same symptom across different providers (codex, custom OpenAI-compatible, etc.), suggesting a shared root cause in the credential resolver.
Suggested fix directions (not prescriptive):
- In the models auth login handler, after writing the new profile to profiles, also call the equivalent of models auth order set to populate order["<provider>"] and lastGood["<provider>"] with the new profile id (matching what the manual workaround does).
- OR in the credential resolver, when order["<provider>"] is empty/missing, fall back to "use the first matching profile in profiles" instead of returning missing_credential.
Happy to provide more logs or test patches if needed. Thanks for the great project.