Description
Version: 2026.2.22-2 (the bug is not present in 2026.2.21-2)
Channel: WhatsApp
Config: thinking=low, model: claude-opus-4-6 (Anthropic)
Issue 1: Thinking blocks not stripped from delivery
When extended thinking is set to low, thinking blocks produced by the model are not stripped before delivery to WhatsApp. The full reasoning content appears as visible message text, prefixed with "Reasoning:" in italics, before the actual response.
The session history confirms the response contains a "type": "thinking" block alongside the "type": "text" block. The thinking content should be suppressed in channel delivery but is being concatenated into the outbound WhatsApp message.
Steps to reproduce:
- Configure thinking=low
- Use the WhatsApp channel
- Send a message that triggers the model to produce a thinking block
- Observe: thinking content is visible in the delivered WhatsApp message
Expected: Thinking blocks are stripped/hidden before delivery. Only the text content block should be sent.
Actual:
Reasoning:
David is saying he's seeing my thinking/reasoning text...
[full internal reasoning visible]
[actual response follows]
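For illustration, the expected stripping step can be sketched as below. This is a minimal sketch assuming Anthropic-style content blocks; the function names and delivery flow are hypothetical, not OpenClaw's actual internals:

```python
# Hypothetical sketch of the expected channel-delivery behavior.
# Content-block shape follows the Anthropic Messages API; the helper
# names here are assumptions for illustration only.

def strip_thinking_blocks(content_blocks):
    """Keep only "text" blocks, dropping "thinking" (and any
    "redacted_thinking") blocks before channel delivery."""
    return [b for b in content_blocks if b.get("type") == "text"]

def render_outbound_text(content_blocks):
    """Concatenate the surviving text blocks into the outbound body."""
    return "".join(b["text"] for b in strip_thinking_blocks(content_blocks))

# Example response containing both block types:
response = [
    {"type": "thinking", "thinking": "internal reasoning..."},
    {"type": "text", "text": "Here is the actual reply."},
]
print(render_outbound_text(response))  # only the text block is delivered
```

On 2026.2.22-2 the observed behavior is equivalent to concatenating both block types into the WhatsApp message; the sketch above shows the filtering that 2026.2.21-2 appears to perform.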
Issue 2: Stale text fragments appended to replies
After gateway restarts or in long sessions, stale text fragments from earlier in the conversation (or possibly from other sessions) are appended to replies.
This seems to happen when the context grows long or after a gateway restart: fragments from earlier conversation turns bleed into new replies.
Note: It's unclear whether Issue 2 is an OpenClaw delivery/session issue or a model-side generation artifact. Including it here for visibility so maintainers can investigate which layer is responsible.
Regression
Neither issue occurs on 2026.2.21-2. Downgrading resolves both.
Workaround: Downgrade to 2026.2.21-2, or set thinking=off (mitigates Issue 1 only).