[Bug]: commitments extractor uses direct OpenAI instead of configured openai-codex model, causing repeated background auth failures #75334

@sene1337

Description

Bug type

Behavior bug (background task uses wrong model/provider and creates repeated auth/model errors)

Beta release blocker

Possibly — this can silently create repeated background LLM failures after enabling the commitments feature.

Summary

After enabling the commitments feature on an OpenClaw install whose normal/default model is configured for Codex OAuth (openai-codex/gpt-5.5), the commitments extractor spawned repeated background lanes but attempted to use direct OpenAI provider auth (openai) instead of the configured Codex provider (openai-codex).

Because this install has Codex OAuth but no direct OPENAI_API_KEY, every extractor run failed with:

FailoverError: No API key found for provider "openai". You are authenticated with OpenAI Codex OAuth. Use openai-codex/gpt-5.5, or set OPENAI_API_KEY for direct OpenAI API access.

The feature produced repeated background auth/model failures rather than inheriting the agent/default model/provider or circuit-breaking after the first terminal auth/config error.

Steps to reproduce

  1. Run OpenClaw with default/main model configured to a Codex OAuth model, e.g. openai-codex/gpt-5.5.
  2. Ensure Codex OAuth is authenticated, but no direct OPENAI_API_KEY is configured.
  3. Enable commitments / inferred commitment extraction.
  4. Send normal chat messages that trigger commitment extraction.
  5. Inspect gateway logs.

Expected behavior

Commitment extraction should either:

  1. Use the configured agent/default model and provider (openai-codex/gpt-5.5 in this case), or
  2. Have an explicit commitments model setting that supports provider-qualified model IDs, or
  3. Fail once with a clear configuration error and circuit-break/disable extraction until configuration is fixed.

It should not silently fall back to direct openai when the system is configured for openai-codex OAuth.
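Option 1 above (inherit the owning session's model) can be sketched in a few lines. This is a hypothetical illustration, not OpenClaw's actual code: `SessionConfig`, `resolveExtractorModel`, and the `commitmentsModel` override are invented names for the sketch.

```typescript
// Hypothetical sketch: the extractor resolves its model from the owning
// session (or an explicit override) instead of a hard-coded default that
// implies direct provider "openai".
interface SessionConfig {
  model: string; // provider-qualified, e.g. "openai-codex/gpt-5.5"
}

function resolveExtractorModel(
  session: SessionConfig,
  commitmentsModel?: string, // optional explicit commitments setting
): string {
  // Prefer an explicit commitments model if configured; otherwise inherit
  // the session's configured default. Never substitute a bare model ID.
  return commitmentsModel ?? session.model;
}
```

With the configuration from this report, `resolveExtractorModel({ model: "openai-codex/gpt-5.5" })` would keep the extractor on the Codex OAuth path.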

Actual behavior

Gateway logs showed repeated commitments lanes failing on direct OpenAI auth:

[diagnostic] lane task error: lane=session:agent:main:commitments:commitments-20254e69-c966-424b-a43a-4042c05f507d durationMs=1317 error="FailoverError: No API key found for provider "openai". You are authenticated with OpenAI Codex OAuth. Use openai-codex/gpt-5.5, or set OPENAI_API_KEY for direct OpenAI API access."
[commitments] commitment extraction failed

[diagnostic] lane task error: lane=session:agent:main:commitments:commitments-07ac6deb-2ec0-4cf6-b3e3-cff0afaf23fb durationMs=1260 error="FailoverError: No API key found for provider "openai". You are authenticated with OpenAI Codex OAuth. Use openai-codex/gpt-5.5, or set OPENAI_API_KEY for direct OpenAI API access."
[commitments] commitment extraction failed

[diagnostic] lane task error: lane=session:agent:main:commitments:commitments-6c3a9bf4-347c-4860-bb1a-f67f0fe428cf durationMs=1236 error="FailoverError: No API key found for provider "openai". You are authenticated with OpenAI Codex OAuth. Use openai-codex/gpt-5.5, or set OPENAI_API_KEY for direct OpenAI API access."
[commitments] commitment extraction failed

This continued across multiple background lanes until commitments were disabled/restarted.

OpenClaw version

2026.4.29

Operating system

macOS arm64

Install method

Global npm / local gateway install

Model

Configured default/main model: openai-codex/gpt-5.5

The commitments extractor appears to use provider openai instead.

Provider / routing chain

Expected:

agent/main -> openai-codex -> gpt-5.5 -> Codex OAuth profile

Observed commitments background lane:

agent/main -> commitments extractor -> openai -> missing OPENAI_API_KEY -> repeated FailoverError
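The observed chain suggests the extractor loses the provider prefix of the configured model ID. A minimal, hypothetical sketch of provider-qualified ID handling (the `splitModelId` helper is invented for illustration) shows why the prefix matters:

```typescript
// Hypothetical sketch: split a provider-qualified model ID so a background
// lane routes through the same provider as the main agent.
function splitModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf("/");
  if (slash === -1) {
    // A bare model ID is ambiguous; silently assuming provider "openai"
    // is exactly the failure mode described in this report.
    throw new Error(`Model ID "${id}" is not provider-qualified`);
  }
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}
```

Here `splitModelId("openai-codex/gpt-5.5")` yields provider `openai-codex` and model `gpt-5.5`, matching the expected routing chain above.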

Additional context

This looks related to the broader background-run fragility cluster, but appears to be a distinct commitments-specific provider/model selection bug.

Impact

  • Repeated background LLM/auth failures
  • Log noise and potential gateway degradation
  • Confusing auth diagnostics because the main model/auth path is healthy
  • Users with only Codex OAuth may see commitments fail immediately even though normal chat works

Suggested fix direction

  • Ensure commitments extraction resolves model/provider through the same configured model routing path as the owning agent/session, including provider-qualified IDs like openai-codex/gpt-5.5.
  • Add a commitments-specific model config only if needed; if one is added, require provider-qualified IDs so the provider family is explicit.
  • Add a circuit breaker for terminal background auth/config errors (missing_api_key, auth, format) so the extractor does not repeatedly retry an impossible provider path.
  • Add regression coverage for Codex OAuth-only installs with no direct OPENAI_API_KEY.
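The circuit-breaker suggestion could look roughly like the sketch below. This is a hypothetical illustration of the idea, not a proposed patch: `ExtractorBreaker` and the error-kind strings are taken from the categories named above (missing_api_key, auth, format), but the class and method names are invented.

```typescript
// Hypothetical sketch: trip permanently on terminal auth/config errors so
// the extractor stops retrying a provider path that cannot succeed.
const TERMINAL_KINDS = new Set(["missing_api_key", "auth", "format"]);

class ExtractorBreaker {
  private tripped = false;

  shouldRun(): boolean {
    return !this.tripped;
  }

  recordFailure(kind: string): void {
    // Transient errors (timeouts, rate limits) leave the breaker closed;
    // terminal errors trip it until configuration is fixed.
    if (TERMINAL_KINDS.has(kind)) this.tripped = true;
  }
}
```

Under this scheme, the first `FailoverError` for a missing API key would trip the breaker and suppress the repeated lane failures shown in the logs above.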
