Bug type
Behavior bug (background task uses wrong model/provider and creates repeated auth/model errors)
Beta release blocker
Possibly — this can silently create repeated background LLM failures after enabling the commitments feature.
Summary
After enabling the commitments feature on an OpenClaw install whose normal/default model is configured for Codex OAuth (openai-codex/gpt-5.5), the commitments extractor spawned repeated background lanes but attempted to use direct OpenAI provider auth (openai) instead of the configured Codex provider (openai-codex).
Because this install has Codex OAuth but no direct OPENAI_API_KEY, every extractor run failed with:
FailoverError: No API key found for provider "openai". You are authenticated with OpenAI Codex OAuth. Use openai-codex/gpt-5.5, or set OPENAI_API_KEY for direct OpenAI API access.
The feature produced repeated background auth/model failures rather than inheriting the agent/default model/provider or circuit-breaking after the first terminal auth/config error.
Steps to reproduce
- Run OpenClaw with the default/main model configured to a Codex OAuth model, e.g. openai-codex/gpt-5.5.
- Ensure Codex OAuth is authenticated, but no direct OPENAI_API_KEY is configured.
- Enable commitments / inferred commitment extraction.
- Send normal chat messages that trigger commitment extraction.
- Inspect gateway logs.
Expected behavior
Commitment extraction should either:
- Use the configured agent/default model and provider (openai-codex/gpt-5.5 in this case), or
- Have an explicit commitments model setting that supports provider-qualified model IDs, or
- Fail once with a clear configuration error and circuit-break/disable extraction until configuration is fixed.
It should not silently fall back to direct openai when the system is configured for openai-codex OAuth.
Actual behavior
Gateway logs showed repeated commitments lanes failing on direct OpenAI auth:
[diagnostic] lane task error: lane=session:agent:main:commitments:commitments-20254e69-c966-424b-a43a-4042c05f507d durationMs=1317 error="FailoverError: No API key found for provider "openai". You are authenticated with OpenAI Codex OAuth. Use openai-codex/gpt-5.5, or set OPENAI_API_KEY for direct OpenAI API access."
[commitments] commitment extraction failed
[diagnostic] lane task error: lane=session:agent:main:commitments:commitments-07ac6deb-2ec0-4cf6-b3e3-cff0afaf23fb durationMs=1260 error="FailoverError: No API key found for provider "openai". You are authenticated with OpenAI Codex OAuth. Use openai-codex/gpt-5.5, or set OPENAI_API_KEY for direct OpenAI API access."
[commitments] commitment extraction failed
[diagnostic] lane task error: lane=session:agent:main:commitments:commitments-6c3a9bf4-347c-4860-bb1a-f67f0fe428cf durationMs=1236 error="FailoverError: No API key found for provider "openai". You are authenticated with OpenAI Codex OAuth. Use openai-codex/gpt-5.5, or set OPENAI_API_KEY for direct OpenAI API access."
[commitments] commitment extraction failed
This continued across multiple background lanes until commitments were disabled/restarted.
OpenClaw version
2026.4.29
Operating system
macOS arm64
Install method
Global npm / local gateway install
Model
Configured default/main model: openai-codex/gpt-5.5
The commitments extractor appears to use provider openai instead.
Provider / routing chain
Expected:
agent/main -> openai-codex -> gpt-5.5 -> Codex OAuth profile
Observed commitments background lane:
agent/main -> commitments extractor -> openai -> missing OPENAI_API_KEY -> repeated FailoverError
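The divergence between the two chains suggests the extractor resolves its model from a bare model name (or a hard-coded provider) instead of the session's provider-qualified ID. A minimal sketch of the expected resolution, assuming illustrative names (resolveModelRef, ModelRef are not OpenClaw APIs):

```typescript
// Hypothetical sketch: resolve a provider-qualified model ID for a
// background lane the same way as for the owning agent/session.
interface ModelRef {
  provider: string; // e.g. "openai-codex"
  model: string;    // e.g. "gpt-5.5"
}

// Parse "provider/model"; a bare model name inherits the session's
// provider instead of implying a provider family.
function resolveModelRef(
  raw: string | undefined,
  sessionDefault: ModelRef,
): ModelRef {
  if (!raw) return sessionDefault; // no override: inherit the agent's model
  const slash = raw.indexOf("/");
  if (slash === -1) {
    // The bug observed here: "gpt-5.5" must not silently imply "openai"
    // when the session is routed through "openai-codex".
    return { provider: sessionDefault.provider, model: raw };
  }
  return { provider: raw.slice(0, slash), model: raw.slice(slash + 1) };
}
```

Under this scheme the commitments lane would land on openai-codex/gpt-5.5 unless an explicit provider-qualified override is set.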
Additional context
This looks related to the broader background-run fragility cluster, but appears to be a distinct commitments-specific provider/model selection bug.
Impact
- Repeated background LLM/auth failures
- Log noise and potential gateway degradation
- Confusing auth diagnostics because the main model/auth path is healthy
- Users with only Codex OAuth may see commitments fail immediately even though normal chat works
Suggested fix direction
- Ensure commitments extraction resolves model/provider through the same configured model routing path as the owning agent/session, including provider-qualified IDs like openai-codex/gpt-5.5.
- Add a commitments-specific model config only if needed, but make provider family explicit.
- Add a circuit breaker for terminal background auth/config errors (missing_api_key, auth, format) so the extractor does not repeatedly retry an impossible provider path.
- Add regression coverage for Codex OAuth-only installs with no direct OPENAI_API_KEY.
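The circuit-breaker suggestion above could look roughly like the following. The error codes and class name are assumptions for illustration, not OpenClaw internals:

```typescript
// Hypothetical sketch: trip once on a terminal auth/config error so the
// extractor skips subsequent lanes instead of retrying an impossible
// provider path. Codes below mirror the classes named in this report.
const TERMINAL_CODES = new Set(["missing_api_key", "auth", "format"]);

class ExtractorBreaker {
  private tripped = false;
  private reason?: string;

  // false once a terminal error has been seen: skip the lane entirely.
  shouldRun(): boolean {
    return !this.tripped;
  }

  recordFailure(code: string, message: string): void {
    if (TERMINAL_CODES.has(code)) {
      this.tripped = true;
      this.reason = message; // surface once in diagnostics, not per lane
    }
    // Transient errors (timeouts, rate limits) do not trip the breaker.
  }

  // Re-enable when configuration changes, e.g. OPENAI_API_KEY is added
  // or the commitments model setting is corrected.
  reset(): void {
    this.tripped = false;
    this.reason = undefined;
  }
}
```

With this in place, the FailoverError in the logs above would appear once, and subsequent commitments lanes would be skipped until the operator fixes configuration and the breaker is reset.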