[codex] Fix commitments extractor model selection #75347
clawsweeper[bot] merged 1 commit into main from
Conversation
Codex review: passed for ClawSweeper automerge.

What this changes: The PR updates hidden commitments extraction to resolve and pass the configured agent/default provider and model, adds an agent-scoped cooldown for terminal auth/model failures, adds runtime regression tests, and records the fix in the changelog.

Automerge follow-up: No repair job is needed: this automerge-opted PR has no review findings, is mergeable, and should be handled by exact-head CI plus the configured automerge path.

Security review: Security review cleared: the diff only changes model routing, local cooldown state, tests, and changelog text; it does not add dependencies, workflows, secret access, or new external code execution.

Review details

Best possible solution: Land this focused commitments-runtime fix after exact-head checks and mergeability are accepted, then let it close the linked bug; keep broader background retry or circuit-breaker policy work separate.

Do we have a high-confidence way to reproduce the issue? Yes. The linked bug provides concrete config and log steps, and current main gives a static reproduction: enable commitments with

Is this the best way to solve the issue? Yes. Reusing

What I checked:
Likely related people:
Remaining risk / open question:
Codex review notes: model gpt-5.5, reasoning high; reviewed against bd20f8e07e92.
@clawsweeper automerge
🦞🦞 I added Draft PRs stay fix-only until GitHub marks them ready for review. A maintainer can pause this with
🦞🦞 Source: The automerge loop is complete.
Summary
Fixes #75334.
The commitments hidden extractor now resolves the owning agent/default model with the same configured model resolver used by normal agent runs before it calls the embedded PI runner. That preserves `openai-codex/gpt-5.5` and other configured provider/model refs instead of letting the embedded runner fall back to direct `openai/gpt-5.5`. The extractor also opens a short agent-scoped cooldown after terminal model/auth failures, so a bad config does not repeatedly spend hidden extraction attempts or spam the same background error. The cooldown only suppresses the affected agent; other agents can still enqueue extraction.
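The shape of the fix described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: all names here (`resolveConfiguredModel`, `ExtractionCooldowns`, `extractBatch`, the `runEmbeddedPiAgent` parameter) are hypothetical stand-ins, and the terminal-error detection is simplified to a message pattern.

```typescript
type ModelRef = { provider: string; model: string };

interface AgentConfig {
  agentModel?: ModelRef;   // hypothetical per-agent override
  defaultModel: ModelRef;  // configured default, e.g. openai-codex/gpt-5.5
}

// Resolve the model the same way a normal agent run would, instead of
// letting the embedded runner fall back to its static default.
function resolveConfiguredModel(cfg: AgentConfig): ModelRef {
  return cfg.agentModel ?? cfg.defaultModel;
}

// Agent-scoped cooldown after terminal auth/model failures, so one
// misconfigured agent does not repeatedly burn hidden extraction attempts.
class ExtractionCooldowns {
  private until = new Map<string, number>();
  constructor(private readonly ms: number) {}
  open(agentId: string, now: number = Date.now()): void {
    this.until.set(agentId, now + this.ms);
  }
  isActive(agentId: string, now: number = Date.now()): boolean {
    return (this.until.get(agentId) ?? 0) > now;
  }
}

const cooldowns = new ExtractionCooldowns(5 * 60_000);

async function extractBatch(
  agentId: string,
  cfg: AgentConfig,
  runEmbeddedPiAgent: (opts: ModelRef & { agentId: string }) => Promise<void>,
): Promise<"ran" | "skipped"> {
  // Only the affected agent is suppressed; other agents still enqueue.
  if (cooldowns.isActive(agentId)) return "skipped";
  const { provider, model } = resolveConfiguredModel(cfg);
  try {
    await runEmbeddedPiAgent({ agentId, provider, model });
    return "ran";
  } catch (err) {
    // Treat auth/model errors as terminal and back off for this agent only.
    if (err instanceof Error && /missing api key|model/i.test(err.message)) {
      cooldowns.open(agentId);
    }
    throw err;
  }
}
```

The key design point from the PR is that both halves are scoped: model resolution reuses the configured resolver rather than re-deriving defaults, and the cooldown is keyed per agent rather than globally pausing background extraction.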
Root Cause
`defaultExtractBatch` called `runEmbeddedPiAgent` without `provider` or `model`, so the embedded runner used its static defaults. On OAuth-only OpenAI Codex setups, that became direct OpenAI and produced repeated missing API key errors.
Validation
pnpm test src/commitments/runtime.test.ts src/commitments/extraction.test.ts src/commitments/commitments-full-chain.integration.test.ts
pnpm test src/auto-reply/reply/agent-runner.runreplyagent.e2e.test.ts
pnpm exec oxfmt --check --threads=1 src/commitments/runtime.ts src/commitments/runtime.test.ts
pnpm check:changed