After upgrading to 2026.2.22-2, the @openclaw/msteams plugin delivers raw reasoning/thinking tokens to the Teams channel instead of (or mixed with) the final answer when reasoning is enabled on the model. The WebChat UI shows the correct final response, but Teams receives the internal thinking stream.
This was triggered by PR #22513, which changed the default reasoningLevel to "on" for models with reasoning: true. The MS Teams plugin does not separate reasoning stream lanes from the answer stream, unlike the fixes applied to Telegram in 2026.2.21 (#20774). A parallel bug exists for Telegram (#24376) and a feature request for Slack (#19411), but no equivalent issue exists for the @openclaw/msteams plugin.
Workaround: `/reasoning off` restores correct behavior, but users should be able to use reasoning without it leaking into the channel output.
Steps to reproduce
1. Install the @openclaw/msteams plugin and configure the MS Teams channel.
2. Use a model with reasoning: true (e.g., Codex 5.3 via OpenAI OAuth).
3. Send any message to the bot from Teams.
4. Observe that Teams receives the reasoning/thinking tokens (e.g., internal chain-of-thought text) instead of the final answer.
5. Compare with the WebChat UI — WebChat shows the correct final response while Teams gets the thinking stream.
Expected behavior
When reasoning is enabled, the MS Teams plugin should:
Process reasoning/thinking tokens internally (or optionally display them in a collapsed/separate format).
Deliver only the final answer text to the Teams channel — the same content visible in the WebChat UI.
This is consistent with the Telegram streaming fix in 2026.2.21 (#20774), which split reasoning and answer draft preview lanes to prevent cross-lane overwrites.
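As an illustration, the lane split described above might look like the following sketch. The `StreamEvent` shape, lane names, and `splitLanes` helper are hypothetical, not the actual OpenClaw types:

```typescript
// Hypothetical stream event shape; real OpenClaw plugin types may differ.
type StreamEvent = { lane: "reasoning" | "answer"; delta: string };

// Accumulate deltas into a separate buffer per lane, so reasoning tokens
// can never overwrite or interleave with the answer draft.
function splitLanes(events: StreamEvent[]): { reasoning: string; answer: string } {
  const buffers = { reasoning: "", answer: "" };
  for (const ev of events) {
    buffers[ev.lane] += ev.delta;
  }
  return buffers;
}

// Only the answer lane should ever be emitted to the Teams channel;
// the reasoning lane stays internal (or is shown collapsed, if supported).
const { answer } = splitLanes([
  { lane: "reasoning", delta: "User is asking about X... " },
  { lane: "answer", delta: "Here is the final reply." },
]);
```

Keeping one buffer per lane is the property the Telegram fix established: each lane renders independently, so a reasoning delta arriving late can no longer clobber the answer preview.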
Actual behavior
Teams receives the raw reasoning/thinking tokens (internal chain-of-thought) instead of the final answer. The response appears as the model's internal reasoning process (e.g., "User is asking about X, I should respond with...") rather than the actual intended reply.
The WebChat UI correctly shows only the final answer, confirming the issue is in the MS Teams plugin's outbound message rendering — not in the agent/model layer.
`/reasoning off` restores normal behavior, confirming the reasoning stream is being routed to the channel output instead of being filtered.
OpenClaw version
2026.2.22-2
Operating system
macOS 15.7.3 (arm64), Node 22.22
Install method
npm global
Logs, screenshots, and evidence
No logs captured yet — will add gateway log snippets and Teams message screenshots if needed.
The behavior is easily reproducible: any message sent via Teams with reasoning enabled returns thinking tokens. `/reasoning off` immediately fixes the output.
Impact and severity
Affected: All MS Teams users with reasoning-capable models on any version that includes #22513.
Severity: High — effectively breaks the Teams channel for reasoning-enabled models (responses are unusable).
Frequency: 100% repro when reasoning is enabled.
Consequence: Teams channel delivers internal thinking tokens instead of answers, making the bot unusable without the /reasoning off workaround.
Additional information
Model/provider: Codex 5.3 via OpenAI OAuth (ChatGPT subscription)
Integration/channel: Microsoft Teams (@openclaw/msteams plugin)
Suggested fix direction:
The @openclaw/msteams plugin likely needs the same reasoning/answer stream lane separation that was applied to Telegram in #20774. The `handleMessageEnd` path (or the equivalent in the Teams plugin) should filter out reasoning blocks before emitting to the Teams channel, similar to how WebChat already handles this correctly.
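A minimal sketch of that filtering step, assuming a hypothetical content-block shape (`ContentBlock` and `filterForChannel` are illustrative names, not the real Teams plugin API):

```typescript
// Hypothetical message shape: an ordered list of typed content blocks,
// where "thinking" blocks carry reasoning and "text" blocks carry the answer.
type ContentBlock = { type: "thinking" | "text"; text: string };

// Drop reasoning/thinking blocks before the outbound send, mirroring
// what WebChat already does when it renders only the final answer.
function filterForChannel(blocks: ContentBlock[]): string {
  return blocks
    .filter((b) => b.type === "text")
    .map((b) => b.text)
    .join("");
}

const outbound = filterForChannel([
  { type: "thinking", text: "User is asking about X, I should respond with..." },
  { type: "text", text: "Here is how to do X." },
]);
// outbound contains only the final answer text: "Here is how to do X."
```

Filtering at the single outbound emit point (rather than in each renderer) would also keep the behavior consistent if Teams later gains an opt-in collapsed-reasoning display.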
Temporary workaround: `/reasoning off` in the Teams chat.