Summary
When Telegram streaming is enabled together with visible reasoning (`reasoning=true` / thinking on), intermediate `toolUse` turns are emitted to the user chat as `message_end` block replies.
This leaks internal progress text such as "让我检查状态…" ("Let me check the status…") / "让我等待完成…" ("Let me wait for completion…") into the visible conversation, so the user-visible final answer is interrupted or overwritten in practice.
Why this is a regression-like behavior
This reproduces on openclaw 2026.2.22-2, whose npm `gitHead` is `45febecf2a2d91fd1a378bb2cae38ec21e71857e`.
And this gitHead already contains the prior Telegram fixes:
- #17973 (dddb1bc...) ✅ ancestor
- #20774 (ab256b8...) ✅ ancestor
- #22613 (8b1fe0d...) ✅ ancestor
- #23202 (63b4c50...) ✅ ancestor
So this is not an “old version missing fix” report.
Steps to reproduce
- Configure a Telegram channel with streaming enabled (block/partial behavior is enough to hit this path).
- Enable reasoning/thinking visibility.
- Trigger a run in which the assistant performs tool polling/execution before the final answer (e.g. `exec` + repeated `process:poll`).
- Observe visible Telegram output sequence.
Expected behavior
- Intermediate `toolUse` turns should stay internal.
- The user should only see the intended answer/reasoning messages, without tool-step status chatter being emitted as final chat messages.
Actual behavior
- Intermediate `toolUse` text is emitted to the channel as `message_end` block replies.
- The final answer appears interleaved with these status lines, creating the user perception that "the final output disappears / is replaced by repeated progress lines / then reappears".
OpenClaw version
2026.2.22-2
- npm metadata: `{ "version": "2026.2.22-2", "gitHead": "45febecf2a2d91fd1a378bb2cae38ec21e71857e" }`
Operating system
Install method
- npm global package (`~/.local/lib/node_modules/openclaw`)
Logs, screenshots, and evidence
1) Session evidence showing intermediate `toolUse` status chatter
From the local session transcript (`~/.openclaw/agents/main/sessions/0e9e7c38-d6d5-4993-95c3-04e08d35a897.jsonl`):
- `stopReason:"toolUse"` with text "让我检查状态:1" ("Let me check the status: 1")
- Later merged text: "让我检查状态:2 ... 让我检查状态:3 ... 最终结论..." ("Let me check the status: 2 ... Let me check the status: 3 ... Final conclusion...")
From another long-running transcript (`~/.openclaw/agents/coder/sessions/b75fc08b-0210-45bf-b67c-5303bc48a230.jsonl`):
- repeated assistant `stopReason:"toolUse"` texts:
  - "任务仍在生成。让我检查进程状态:" ("The task is still generating. Let me check the process status:")
  - "任务正在读取所有文档。让我继续等待:" ("The task is reading all the documents. Let me keep waiting:")
  - "让我直接检查文件:" ("Let me check the files directly:")
  - "任务仍在运行。让我检查最终状态:" ("The task is still running. Let me check the final status:")
2) Code-path evidence
`src/agents/pi-embedded-subscribe.handlers.messages.ts` (`handleMessageEnd`) currently emits `onBlockReply` for `message_end` text without excluding intermediate `toolUse` assistant turns that include tool-call blocks.
That allows `toolUse` progress text to be delivered as user-visible message blocks.
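A minimal sketch of the kind of guard that could address this. The type shapes and the name `shouldEmitBlockReply` are assumptions for illustration only, not the actual OpenClaw types or API:

```typescript
// Hypothetical shapes, for illustration; the real types live in the
// pi-embedded-subscribe module and may differ.
type Block = { type: "text" | "toolCall"; text?: string };
type AssistantMessage = { stopReason: string; blocks: Block[] };

// Sketch of a guard handleMessageEnd could apply before calling
// onBlockReply: skip intermediate toolUse turns that carry tool-call
// blocks, so their status text stays internal.
function shouldEmitBlockReply(msg: AssistantMessage): boolean {
  const isToolUseTurn =
    msg.stopReason === "toolUse" &&
    msg.blocks.some((b) => b.type === "toolCall");
  return !isToolUseTurn;
}

// An intermediate polling turn would be suppressed...
const pollingTurn: AssistantMessage = {
  stopReason: "toolUse",
  blocks: [{ type: "text", text: "让我检查状态:1" }, { type: "toolCall" }],
};
// ...while a final answer still goes through.
const finalTurn: AssistantMessage = {
  stopReason: "endTurn",
  blocks: [{ type: "text", text: "最终结论" }],
};

console.log(shouldEmitBlockReply(pollingTurn)); // false
console.log(shouldEmitBlockReply(finalTurn)); // true
```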
3) Reproducible failing test before fix
Added a targeted test (locally) to encode the expected behavior:
`src/agents/pi-embedded-subscribe.subscribe-embedded-pi-session.suppresses-message-end-block-replies-message-tool.test.ts`
Before the fix, it fails with:
- expected: `onBlockReply` not called
- actual: called with text "让我检查状态:" ("Let me check the status:")
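In spirit, the behavior the test encodes can be sketched as follows. This is a standalone illustration with placeholder names (`subscribeSketch`, the event shape); the real test drives the project's own subscribe helpers:

```typescript
// Placeholder event shape for illustration only.
type EndEvent = { kind: string; stopReason?: string; text?: string };

// Sketch of the expected post-fix behavior: message_end text from
// toolUse turns never reaches onBlockReply; final-answer text does.
function subscribeSketch(
  events: EndEvent[],
  onBlockReply: (text: string) => void,
): void {
  for (const ev of events) {
    if (ev.kind === "message_end" && ev.stopReason !== "toolUse") {
      onBlockReply(ev.text ?? "");
    }
  }
}

const calls: string[] = [];
subscribeSketch(
  [
    // Intermediate polling chatter: should be suppressed.
    { kind: "message_end", stopReason: "toolUse", text: "让我检查状态:" },
    // Final answer: should still be delivered.
    { kind: "message_end", stopReason: "endTurn", text: "最终结论" },
  ],
  (t) => calls.push(t),
);
console.log(calls); // only the final answer, not the toolUse chatter
```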
Impact and severity
- Affected: Telegram users with stream + reasoning flows involving tool calls.
- Severity: Medium-High (user-visible response corruption/noise and confusing output order).
- Frequency: High for workflows that poll tools before final answer.
- Consequence: perceived output overwrite/regression and reduced trust/readability.
Additional information
Related historical and active items:
#17935, #17973, #20774, #22613, #23202
#19180, #19193, #20568
This report is specifically about intermediate `toolUse` `message_end` emission into Telegram-visible replies.