Gemini raw reasoning text leaks into answer during Telegram DM streaming #37890

@okuyam2y

Description

@okuyam2y

Summary

When using Gemini models with Telegram DM draft streaming, raw reasoning text can leak into user-visible answer output.

Steps to Reproduce

  1. Configure OpenClaw with a Gemini model (e.g. gemini-3-flash-preview) and Telegram DM channel
  2. Enable streaming with reasoning level display
  3. Send a prompt that triggers reasoning output
  4. Observe that during streaming, a partial chunk containing only "Reasoning:\n" (before any actual reasoning content arrives) passes through splitTelegramReasoningText() and appears in the visible answer

Expected

A bare "Reasoning:" prefix chunk should be suppressed (return {}) so it never reaches the user.

Actual

trim() collapses "Reasoning:\n" to "Reasoning:", which no longer matches REASONING_MESSAGE_PREFIX ("Reasoning:\n"). The chunk falls through all checks and is emitted as { answerText: "Reasoning:" }, leaking the reasoning prefix into the visible answer.
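A minimal sketch of the failure mode and one possible fix. The function body, names, and return shape below are assumptions reconstructed from this report, not OpenClaw's actual implementation:

```typescript
// Hypothetical reconstruction of the split logic described in this issue.
const REASONING_MESSAGE_PREFIX = "Reasoning:\n";

interface SplitResult {
  reasoningText?: string;
  answerText?: string;
}

// Buggy sketch: trim() strips the trailing newline, so a bare
// "Reasoning:\n" chunk no longer matches the prefix and falls through
// to the answerText branch.
function splitTelegramReasoningTextBuggy(chunk: string): SplitResult {
  const trimmed = chunk.trim(); // "Reasoning:\n" -> "Reasoning:"
  if (trimmed.length === 0) return {};
  if (trimmed.startsWith(REASONING_MESSAGE_PREFIX)) {
    return { reasoningText: trimmed.slice(REASONING_MESSAGE_PREFIX.length) };
  }
  return { answerText: trimmed }; // leaks "Reasoning:" to the user
}

// Possible fix: treat a bare prefix (newline already trimmed away) as
// an empty chunk, and match on the newline-free prefix.
function splitTelegramReasoningTextFixed(chunk: string): SplitResult {
  const trimmed = chunk.trim();
  if (trimmed.length === 0 || trimmed === "Reasoning:") return {};
  if (trimmed.startsWith("Reasoning:")) {
    return { reasoningText: trimmed.slice("Reasoning:".length).trim() };
  }
  return { answerText: trimmed };
}
```

With this sketch, splitTelegramReasoningTextBuggy("Reasoning:\n") returns { answerText: "Reasoning:" } (the leak), while the fixed variant returns {} for the bare prefix and still routes "Reasoning:\n..." chunks with real content into reasoningText.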

Environment

  • OpenClaw 2026.3.3
  • Gemini 3 Flash / Gemini 3.1 Pro
  • Telegram DM with draft streaming
