fix: Keep incomplete-thinking messages instead of dropping #60667
progamman wants to merge 3 commits into openclaw:main
Conversation
Root cause: the UI used lifetime accumulated tokens (totalTokens) for the context percentage, which stays high after compaction. It should use the current window tokens (post-compaction) instead.

Fix:
1. Add a currentWindowTokens field to the GatewaySessionRow type.
2. The gateway computes it: when totalTokensFresh=false (post-compaction), use transcriptUsage.totalTokens (the current window); otherwise use totalTokens.
3. The UI uses currentWindowTokens for the context % calculation, with a fallback to totalTokens for backwards compatibility.
4. Remove the early return when totalTokensFresh=false - the correct post-compaction percentage is now shown via currentWindowTokens.

Fixes openclaw#48252
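A minimal sketch of the selection logic this commit describes. The row and usage shapes here are assumptions for illustration; only the field names come from the commit message.

```typescript
// Assumed shape of the per-session usage data (illustrative, not the real type).
interface UsageSnapshot {
  totalTokens: number;        // lifetime accumulated tokens
  totalTokensFresh: boolean;  // false once compaction has made the lifetime count stale
  transcriptUsage: { totalTokens: number }; // tokens in the current (post-compaction) window
}

function currentWindowTokens(u: UsageSnapshot): number {
  // After compaction the lifetime counter overstates the live window,
  // so fall back to the transcript's own count.
  return u.totalTokensFresh ? u.totalTokens : u.transcriptUsage.totalTokens;
}

function contextPercent(u: UsageSnapshot, contextWindow: number): number {
  return Math.min(100, Math.round((currentWindowTokens(u) / contextWindow) * 100));
}
```

With a 200k-token window, a session compacted down to a 30k-token transcript would now report 15% instead of the stale lifetime figure.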
…g of newText

Root cause: the recovery logic used simple 'includes' checks, which failed when:
1. newText pre-existed in the file (false positive success)
2. oldText was a substring of newText (e.g. 'foo' -> 'foobar')

Fix: use before/after occurrence counts for robust detection:
1. Count newText occurrences before and after the edit.
2. Verify the newText count increased (not just that it is present).
3. Strip newText when checking oldText, to handle the substring case.
4. Verify the oldText count decreased when the original is available.

Fixes openclaw#54455, fixes openclaw#49363
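The four steps above can be sketched as follows; function and parameter names are illustrative, not the repo's actual API.

```typescript
// Count non-overlapping occurrences of needle in haystack.
function countOccurrences(haystack: string, needle: string): number {
  if (needle.length === 0) return 0;
  let count = 0;
  for (let i = haystack.indexOf(needle); i !== -1; i = haystack.indexOf(needle, i + needle.length)) {
    count++;
  }
  return count;
}

function didEditLikelyApply(before: string, after: string, oldText: string, newText: string): boolean {
  // Steps 1-2: newText must have *increased*, not merely be present,
  // so pre-existing newText no longer reads as success.
  const newAdded = countOccurrences(after, newText) > countOccurrences(before, newText);
  // Step 3: strip newText before counting oldText, so oldText being a
  // substring of newText ('foo' -> 'foobar') is not miscounted as a leftover.
  const strip = (s: string) => (newText.length > 0 ? s.split(newText).join("") : s);
  const oldBefore = countOccurrences(strip(before), oldText);
  const oldAfter = countOccurrences(strip(after), oldText);
  // Step 4: oldText must have decreased, or already be absent.
  const oldGone = oldAfter < oldBefore || oldAfter === 0;
  return newAdded && oldGone;
}
```

For example, the substring case 'foo' -> 'foobar' now passes, while an unchanged file that already contained newText no longer counts as a successful edit.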
Problem: when a thinking block was incomplete (e.g. on timeout), the message was dropped entirely. This caused sent PR messages to disappear without any user notification.

Fix: treat incomplete-thinking like incomplete-text - keep the message and set prefill=true so the model can resume from where it left off.

This is safer because:
1. The user gets a response instead of silence.
2. The model can potentially continue with prefill.
3. No information is lost.
Greptile Summary
This PR bundles three separate fixes: (1) keep incomplete-thinking assistant messages with prefill=true instead of dropping them, (2) occurrence-count-based detection in edit recovery, and (3) a currentWindowTokens field for post-compaction context percentages.
Confidence Score: 2/5
Not safe to merge: the core thinking change breaks two existing tests and may cause API errors when unsigned thinking blocks are sent back to the model. A P1 finding in the primary feature being changed - the incomplete-thinking recovery path - both breaks existing test assertions and risks provider-side validation errors for unsigned thinking blocks. The fix intent is correct (avoid silent message loss), but the implementation needs to either strip unsigned blocks before keeping the turn or update the tests to document that the function's contract has changed. src/agents/pi-embedded-runner/thinking.ts and its companion test file src/agents/pi-embedded-runner/thinking.test.ts need the most attention.

Prompt To Fix All With AI
This is a comment left during a code review.
Path: src/agents/pi-embedded-runner/thinking.ts
Line: 207-208
Comment:
**Existing tests will fail; unsigned thinking blocks are not safe to keep**
The two existing tests in `thinking.test.ts` explicitly verify the opposite behavior:
- `"drops the latest assistant message when the thinking block is unsigned"` (line 109) asserts `result.messages` drops the assistant turn and `result.prefill === false`.
- `"preserves later turns when dropping an incomplete assistant message"` (line 123) does the same with a trailing user turn.
Both will now fail because `incomplete-thinking` (unsigned blocks) is redirected into the `prefill: true` branch instead of the drop branch.
Beyond the test failures, passing unsigned thinking blocks (`thinkingSignature`/`signature` absent) back to the Anthropic API in conversation history will cause a provider-side validation error — the API requires thinking blocks to be signed before they can appear in prior turns. The `prefill: true` path keeps the messages as-is (`return { messages, prefill: true }`), so those unsigned blocks would reach the wire untransformed. The tests and the API contract both point to dropping being the correct recovery action here.
How can I resolve this? If you propose a fix, please make it concise.

Reviews (1): Last reviewed commit: "fix: Keep incomplete-thinking messages i..."
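The contract the reviewer defends can be sketched roughly as follows. The types and helper name here are hypothetical stand-ins for the real function in src/agents/pi-embedded-runner/thinking.ts:

```typescript
type Assessment = "complete" | "incomplete-text" | "incomplete-thinking";
type Msg = { role: "user" | "assistant"; content: string };

// Hypothetical sketch: incomplete text can be resumed via prefill, but an
// unsigned thinking block must be dropped before the history is replayed,
// since the provider rejects unsigned thinking blocks in prior turns.
function recoverTurn(messages: Msg[], assessment: Assessment): { messages: Msg[]; prefill: boolean } {
  if (assessment === "incomplete-text") {
    return { messages, prefill: true };
  }
  if (assessment === "incomplete-thinking") {
    // Drop the trailing assistant turn rather than send it back unsigned.
    return { messages: messages.slice(0, -1), prefill: false };
  }
  return { messages, prefill: false };
}
```

Under this reading, only the incomplete-text branch keeps the turn; redirecting incomplete-thinking into the prefill branch is what the review flags.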
if (assessment === "incomplete-text" || assessment === "incomplete-thinking") {
  return { messages, prefill: true };
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c60635da5b
// Recover only when: newText added AND oldText decreased/absent
const recovered = editAddedNewText && oldTextDecreasedOrAbsent;
if (!recovered) {
Preserve recovery for deletion edits
`didEditLikelyApply` now gates recovery on `editAddedNewText`, but for deletion edits (`newText === ""`) `countOccurrences` returns 0 both before and after, so `editAddedNewText` is always false. That makes recovery impossible for successful delete operations that throw after writing, causing false failures instead of the previous recovered-success path.
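The flagged failure mode is easy to reproduce in isolation. Helper names mirror the review, but this is a standalone sketch, not the repo's code:

```typescript
// Count non-overlapping occurrences of needle in haystack.
function countOccurrences(haystack: string, needle: string): number {
  if (needle.length === 0) return 0; // an empty needle counts as zero matches
  let count = 0;
  for (let i = haystack.indexOf(needle); i !== -1; i = haystack.indexOf(needle, i + needle.length)) {
    count++;
  }
  return count;
}

// For a deletion edit, newText is "", so the count is 0 both before and
// after the write, and the "newText was added" gate can never pass.
function editAddedNewText(before: string, after: string, newText: string): boolean {
  return countOccurrences(after, newText) > countOccurrences(before, newText);
}
```

A successful deletion such as `editAddedNewText("keep drop keep", "keep  keep", "")` yields false, so recovery would be skipped even though the edit applied.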
1. openclaw#60667 - properly drop incomplete-thinking (unsigned) messages to avoid API errors
2. openclaw#60664 - handle deletion edits (empty newText) in edit recovery
3. openclaw#60649 - fix currentWindowTokens to use transcriptUsage?.totalTokensFresh

All fixes address Greptile review feedback.
Update: v2 branch with fix
Created v2 branch: progamman/fix/incomplete-thinking-drop-v2

Fixed: incomplete-thinking (unsigned) messages are now properly dropped instead of kept with prefill=true. This fixes the test failures and avoids API validation errors from unsigned thinking blocks. The new branch is ready for review.
👋 Gentle reminder - this PR is awaiting review. Please let us know if any changes are needed. Thanks!
Closing this as not reproducible on current main

Current main no longer matches the PR's reported symptom: incomplete/empty assistant turns are surfaced to the user as explicit error payloads instead of disappearing silently. The PR's titled change also conflicts with the maintained unsigned-thinking recovery contract, and the author already indicated the real follow-up lived on a separate v2 branch.

What I checked:
Review notes: reviewed against 6bc0dc8fb671.
Related: Messages about PRs were disappearing due to MiniMax timeouts causing incomplete thinking blocks.