Summary
When the OpenClaw TUI streams a response that involves tool calls, already-streamed text is retroactively erased from the terminal at the moment a tool is invoked.
This is distinct from a simple streaming pause — content that was already visibly rendered disappears, leaving the user with no record of what the agent said before making
the tool call.
This is related to #20453 and #15452 but has a more specific root cause: it is not just truncation at the end of a stream, but erasure of previously rendered content
when the assistant transitions from generating text to executing a tool.
Reproducible on 1.2.24 and 1.2.25. Does not reproduce when no tool calls are involved (pure reasoning/knowledge responses stream and persist correctly). Similar
conversations via Telegram do not exhibit this problem.
Steps to reproduce
- Open a session with the OpenClaw TUI
- Ask something that requires both reasoning and tool use (e.g. "help me configure headless Chrome", "analyze my training data")
- Watch the terminal as the response begins streaming
The issue does not reproduce for knowledge-only questions where no tools are called.
Expected behavior
- Pre-tool text streams in and remains visible in the terminal
- When a tool call executes, it appears below or after the already-streamed text
- After the tool returns, the response continues appending below everything already shown
- The user can scroll up and see the full context: pre-tool reasoning → tool call → post-tool response
Actual behavior
- Pre-tool text streams in and is readable mid-generation
- At the moment a tool call is triggered, the already-rendered text is wiped from the terminal
- The screen clears (or the partial message is replaced), the tool executes silently, then the post-tool response appears
- The user loses all visibility into what the agent said or reasoned before the tool call
The GIF below clearly shows the text appearing, then being erased at the tool call boundary — this is not a latency gap, it is content deletion.
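The erasure behavior is consistent with a full-screen repaint at the tool-call boundary. A minimal sketch of that failure mode (hypothetical code, not OpenClaw internals; `repaintFromScratch` is an illustrative name) shows how repainting the whole message with ANSI cursor-up and erase-line sequences destroys text that was already streamed:

```typescript
// Hypothetical sketch (not OpenClaw's actual code) of how a TUI
// "re-render from scratch" can erase already-streamed text.
// Assumes a renderer that repaints the whole message on every update.

function repaintFromScratch(prevLineCount: number, message: string): string {
  // Move the cursor up over each previously painted line and clear it
  // (ESC [1A = cursor up, ESC [2K = erase entire line), then paint the
  // new message. If `message` no longer contains the streamed text
  // (e.g. it was replaced by a tool-call frame), that text is gone.
  const eraseAbove = "\x1b[1A\x1b[2K".repeat(prevLineCount);
  return eraseAbove + message + "\n";
}

// Streamed text is painted...
let out = "The agent begins explaining...\n";
// ...then the structured message (now a tool-call frame) replaces it:
out += repaintFromScratch(1, "[tool: exec] running...");
// The explanation line has been erased before the tool output appears.
```

If the TUI's renderer works anything like this, any update whose new message body omits the streamed prefix will visibly delete it, which matches the GIF.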
OpenClaw version
1.2.25
Operating system
Ubuntu 24.04
Install method
npm global
Logs, screenshots, and evidence
Impact and severity
- Affected: TUI only (Telegram and other channel integrations do not exhibit this)
- Severity: Moderate — reasoning and pre-tool context is permanently lost from the session view
- Frequency: Consistent, triggers on every response that includes at least one tool call
- Consequence: Users cannot see the agent's reasoning or pre-tool narration; makes it harder to understand what the agent is doing and why
Additional information
Root Cause Hypothesis
I had Claude Sonnet 4.6 analyze the GIF, and it came up with the following (it also helped me update the issue body):
The partial assistant message is likely being treated as a speculative/draft render that gets discarded when the final structured message (with embedded tool calls)
arrives from the API. When the tool call is appended to the message object, the TUI re-renders the full message from scratch, clearing what was already painted to the
terminal.
The fix should preserve already-rendered tokens in place, treating the tool call as an append rather than a replacement of the message content.
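As a sketch of that fix direction (hypothetical names and structure, not OpenClaw's actual renderer), an append-only renderer can track how much assistant text has already been flushed and only ever emit the suffix, so a tool call is appended below the streamed text instead of triggering a repaint:

```typescript
// Hypothetical append-only renderer sketch; `AppendOnlyRenderer`,
// `Part`, and `update` are illustrative names, not OpenClaw APIs.

type Part = { kind: "text"; text: string } | { kind: "tool"; name: string };

class AppendOnlyRenderer {
  private flushed = 0; // characters of assistant text already on screen
  public output = "";  // stand-in for the terminal stream

  update(text: string, tools: Part[] = []): void {
    // Emit only the not-yet-flushed suffix of the assistant text, so
    // re-delivering the full message never repaints what is on screen.
    if (text.length > this.flushed) {
      this.output += text.slice(this.flushed);
      this.flushed = text.length;
    }
    // Tool calls are appended below the streamed text, never replacing it.
    for (const part of tools) {
      if (part.kind === "tool") {
        this.output += `\n[tool: ${part.name}]\n`;
      }
    }
  }
}

const r = new AppendOnlyRenderer();
r.update("Let me check your Chrome config");  // partial text streams in
r.update("Let me check your Chrome config", [
  { kind: "tool", name: "exec" },             // structured message arrives
]);
// r.output keeps the streamed text and appends the tool-call line below it
```

With this approach, the arrival of the final structured message (with embedded tool calls) is a no-op for text already painted, which matches the expected behavior described above.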
