Summary
When using Gemini models with Telegram DM draft streaming, raw reasoning text can leak into user-visible answer output.
Steps to Reproduce
- Configure OpenClaw with a Gemini model (e.g. gemini-3-flash-preview) and a Telegram DM channel
- Enable streaming with reasoning level display
- Send a prompt that triggers reasoning output
- During streaming, a partial chunk containing just "Reasoning:\n" (before any actual reasoning content arrives) passes through splitTelegramReasoningText()
Expected
The bare "Reasoning:" prefix should be suppressed (returning {}) so it never reaches the user.
Actual
trim() collapses "Reasoning:\n" to "Reasoning:", which doesn't match REASONING_MESSAGE_PREFIX ("Reasoning:\n"), falls through all checks, and is emitted as { answerText: "Reasoning:" } — leaking the reasoning prefix into the visible answer.
Environment
- OpenClaw 2026.3.3
- Gemini 3 Flash / Gemini 3.1 Pro
- Telegram DM with draft streaming