Bug type
Regression (worked before, now fails)
Summary
When thinking is enabled, replies generated by the model are sometimes not displayed in openclaw tui, even though the model produces them successfully.
Environment
- OpenClaw Version: 2026.3.2
- Model: minimax-cn/MiniMax-M2.5
- Deployment: Docker (openclaw-docker-cn-im)
- Client: openclaw-tui
Description
When using openclaw tui to interact with the agent, responses generated by the model are sometimes not displayed in the TUI interface, even though the model successfully generates a reply.
Steps to Reproduce
- Start OpenClaw container: docker run -d ... openclaw-gateway
- Connect via TUI: openclaw tui
- Send a message that triggers thinking (e.g., a complex prompt)
- Observe that the response is not displayed in TUI
Root Cause
The Gateway relies on the presence of specific tags in the model's output to decide whether to send stream=assistant events to the TUI client.
When the model outputs text without the expected tag (which happens when thinking is enabled and the model generates thinking content), the Gateway skips sending the stream=assistant event entirely, so the reply never reaches the TUI.
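The gating described above can be illustrated with a minimal sketch. The function name, event shape, and the `</think>` tag are all hypothetical here (the report does not name the actual tag), but the failure mode is the same: text that is not preceded by the expected tag is never forwarded.

```python
import re

def emit_assistant_events_buggy(model_output: str) -> list[str]:
    """Hypothetical sketch of the buggy Gateway logic (names are
    illustrative, not the actual OpenClaw source). A stream=assistant
    event is only emitted when the expected closing tag is found."""
    events = []
    # Assumed behavior: only text after a closing </think> tag is forwarded.
    match = re.search(r"</think>(.*)", model_output, re.DOTALL)
    if match:  # no tag -> no event -> nothing reaches the TUI
        events.append(match.group(1).strip())
    return events

print(emit_assistant_events_buggy("<think>plan</think>Hello!"))  # tagged: forwarded
print(emit_assistant_events_buggy("Hello!"))                     # untagged: silently dropped
```

Under this assumption, any reply the model emits without the tag is dropped before it can be streamed, which matches the "NO stream=assistant event" observation in the evidence below.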
Evidence
From session log (8a5909b9-8993-486a-b92c-3b659775e1af.jsonl):
Working case (first message "你好" / "Hello"):
{"type":"text","text":"你好!有什么我可以帮你的吗?"} (translation: "Hello! Is there anything I can help you with?")
- Gateway sent: seq=9 stream=assistant text=你好!有什么我可以帮你的吗?
- TUI: ✅ Displayed correctly
Broken case (second message with a roleplay prompt):
{"type":"text","text":"记住了~不过我要诚实告诉你:我是个AI助手..."} (translation: "Got it~ But I have to be honest with you: I'm an AI assistant...")
- Gateway sent: NO stream=assistant event
- TUI: ❌ Not displayed
Suggested Fix
The Gateway should handle responses without tags by:
- Treating all text content, regardless of tag presence, as valid assistant output; or
- Ensuring the model always emits the expected tags
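The first option could look like the following sketch (again with hypothetical names and a hypothetical `<think>` tag, since the report does not show the real ones): rather than requiring a tag to be present, the Gateway strips any thinking markup and forwards whatever text remains.

```python
import re

def emit_assistant_events_fixed(model_output: str) -> list[str]:
    """Hypothetical sketch of the suggested fix: strip thinking blocks
    if present, then forward the remaining text unconditionally, so
    untagged output still reaches the TUI."""
    text = re.sub(r"<think>.*?</think>", "", model_output, flags=re.DOTALL).strip()
    return [text] if text else []

print(emit_assistant_events_fixed("<think>plan</think>Hello!"))  # tagged: forwarded
print(emit_assistant_events_fixed("Hello!"))                     # untagged: still forwarded
```

With this shape, the presence of tags only affects what is removed from the stream, never whether an event is sent at all.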
Steps to reproduce
Talk to an agent through the TUI; the response does not show.
Expected behavior
Every reply generated by the model is displayed in the TUI.
Actual behavior
Replies generated alongside thinking content are sometimes not displayed in the TUI.
OpenClaw version
2026.3.2
Operating system
macOS 26.3
Install method
docker
Logs, screenshots, and evidence
See the session log excerpts under Evidence above (8a5909b9-8993-486a-b92c-3b659775e1af.jsonl).
Impact and severity
No response
Additional information
No response