fix(agents): deduplicate user messages from model-fallback retries #52903
tyeth-ai-assisted wants to merge 3 commits into openclaw:main
Conversation
When the primary model fails (429/rate limit) and OpenClaw falls back through the candidate chain, each runEmbeddedAttempt call writes a new user message to the session JSONL via activeSession.prompt(). The error assistant messages between them get stripped during API input construction (empty content), leaving N consecutive copies of the same user message — one per provider in the fallback chain.

Two-layer fix:

1. session-manager-init: Strip trailing orphaned user messages before retry, preventing the duplicate from being persisted.
2. openai-ws-stream: Collapse consecutive user messages in convertMessagesToInputItems as a safety net for historical dupes.

Fixes openclaw#31101, openclaw#46005
Related: openclaw#39536
Previous version collapsed ALL consecutive user messages. Now only deduplicates when content is identical (fingerprint match), preserving distinct multi-message sends.
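The fingerprint-matched collapse described above can be sketched as follows. This is an illustrative sketch, not the actual `convertMessagesToInputItems` code; the `Msg` type and function name are hypothetical:

```typescript
type Msg = { role: "user" | "assistant"; content: string };

// Collapse runs of consecutive user messages, but only when their text
// content is identical (fingerprint match). Distinct multi-message sends
// are preserved because their contents differ.
function dedupeConsecutiveUsers(messages: Msg[]): Msg[] {
  const out: Msg[] = [];
  for (const msg of messages) {
    const prev = out[out.length - 1];
    if (
      prev !== undefined &&
      prev.role === "user" &&
      msg.role === "user" &&
      prev.content === msg.content
    ) {
      continue; // identical duplicate left by a fallback retry — drop it
    }
    out.push(msg);
  }
  return out;
}
```

Keying the comparison on content rather than role alone is what distinguishes this version from the earlier one that collapsed all consecutive user messages.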
Update: Companion PR for the HTTP path fix: earendil-works/pi#2547. Also updated the dedup logic to only collapse consecutive user messages with identical text content (fingerprint match), preserving distinct multi-message sends.
Closing this as duplicate or superseded after Codex automated review. PR #52903 should close as superseded by #63696. The newer PR is the focused, non-draft vehicle for the same fallback-retry duplicate-user-persistence bug, while #52903 is an older draft that also carries an unrelated fork release workflow. Current main still lacks the run-scoped suppression, so this is not an implemented-on-main close.

Best possible solution: Close #52903 and keep the remaining implementation/review on #63696. If maintainers want the historical duplicate-input safety net from #52903, fold it into the current

What I checked:

So I'm closing this here and keeping the remaining discussion on the canonical linked item.

Codex Review notes: model gpt-5.5, reasoning high; reviewed against 6cd047e7c270.
Summary
Two-layer fix
- session-manager-init.ts: Strip trailing orphaned user messages (after last assistant) before retry — prevents duplicates from being persisted
- openai-ws-stream.ts: Collapse consecutive user messages in convertMessagesToInputItems — safety net for historical duplicates already in JSONL

Test plan
Root cause trace
```
runWithModelFallback (for each candidate)
→ runFallbackAttempt → runEmbeddedAttempt
→ activeSession.prompt(effectivePrompt) // writes user msg to session
→ model returns error → assistant(error) written
→ next candidate → same prompt() call → another user msg written
```
The orphan check at attempt.ts:2773 only catches consecutive user messages, but the error assistant entries between fallback attempts defeat it.
Fixes #31101, #46005
Related: #39536
/cc @tyeth
🤖 Generated with Claude Code