Bug type
Regression / delivery-state bug on macOS (not Windows-only)
Summary
On OpenClaw 2026.4.29 (a448042) running on macOS, Telegram can enter a polling/transport stall while agent runs continue to complete and write assistant final replies to the local session transcript. The Web UI shows the replies, but Telegram does not receive the corresponding message. With channels.telegram.streaming.mode = "partial", the streaming/preview path appears to treat some ambiguous preview/send states as delivered/retained, so no fallback sendMessage is emitted even though no final message lands in Telegram.
In the same incident window, model fallback / Web UI sessions.patch also left persistent session-level model overrides, so temporary fallback models became visible as changed session models.
This happened on macOS with multiple Telegram bot accounts, so it is not only the Windows polling-stall reports.
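The delivery-state expectation above can be sketched as follows. All names here (PreviewOutcome, Transport, finalizeReply) are illustrative assumptions, not OpenClaw's actual API; the point is that an ambiguous preview/edit outcome should trigger a plain sendMessage fallback instead of being recorded as delivered.

```typescript
// Hypothetical sketch: only a Telegram-confirmed outcome counts as
// delivered; ambiguous or failed preview edits fall back to sendMessage.
type PreviewOutcome = "confirmed" | "ambiguous" | "failed";

interface Transport {
  editPreview(chatId: string, text: string): Promise<PreviewOutcome>;
  sendMessage(chatId: string, text: string): Promise<{ messageId: number }>;
}

async function finalizeReply(
  t: Transport,
  chatId: string,
  text: string
): Promise<"delivered" | "undelivered"> {
  const outcome = await t.editPreview(chatId, text);
  if (outcome === "confirmed") return "delivered";
  // Ambiguous or failed preview state: do NOT assume delivery.
  try {
    await t.sendMessage(chatId, text);
    return "delivered";
  } catch {
    return "undelivered"; // surface as a delivery failure, not silence
  }
}
```

Under this sketch, the group-bot turn in the incident would have produced either a fallback sendMessage or an explicit "undelivered" record, never a silent loss.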
Environment
- OpenClaw: 2026.4.29 (a448042)
- Install: global npm package
- OS: macOS 26.4.1 (25E253)
- Kernel: Darwin 25.4.0 arm64
- Machine: Mac Studio Mac16,9, Apple M4 Max, 128 GB RAM
- Node.js: v22.22.0
- Active gateway plugins included Telegram; multiple Telegram accounts were configured
- Telegram delivery mode before mitigation: channels.telegram.streaming.mode = "partial"
- Every configured Telegram account also had streaming.mode = "partial"
- Affected surfaces:
- Telegram DM account
- Telegram group bot account
- Web UI session viewer
Related issues
Possibly related, but none cover the full combined failure observed here:
- message_sending hooks
- streaming.mode="partial" interaction with replyToMode
Timeline / evidence
Times below are local Asia/Hong_Kong time from gateway logs.
1. Telegram polling transport stalled
2026-05-01T14:14:14.789+08:00 [telegram] Polling stall detected (no completed getUpdates for 170.09s); forcing restart. [diag inFlight=0 outcome=ok startedAt=1777615858399 finishedAt=1777615884694 durationMs=26295 offset=618569405]
2026-05-01T14:14:17.685+08:00 [telegram] polling runner stopped (polling stall detected); restarting in 2.22s.
2026-05-01T14:14:17.686+08:00 [telegram] answerCallbackQuery failed: Network request for 'answerCallbackQuery' failed!
Telegram providers only show startup again later:
2026-05-01T14:17:57.876+08:00 [telegram] [uubot] starting provider
2. Group bot generated a final assistant reply, Web UI showed it, but Telegram had no sendMessage
A Telegram group bot run completed successfully and the local session transcript contains the assistant final reply:
Received. The approval flow only counts button clicks; text messages do not replace button actions. Please have a member other than the initiator tap the ✅ Approve button on approval request #103 in the group to complete the approval.
The trajectory for that run was successful:
2026-05-01T06:15:10.895Z session.started trigger=user model=qwen3.6:35b
2026-05-01T06:15:23.278Z model.completed timedOut=false
2026-05-01T06:15:23.280Z trace.artifacts finalStatus=success timedOut=false
2026-05-01T06:15:23.280Z session.ended status=success timedOut=false
However, gateway logs around 14:15 contain no corresponding:
[telegram] sendMessage ok chat=<group> ...
The final reply was visible in Web UI because it was written to the local transcript, but it was not delivered to Telegram.
3. DM run had a successful Telegram send shortly after, showing Telegram delivery can recover independently
2026-05-01T14:22:47.151+08:00 [telegram] sendMessage ok chat=<redacted-dm> message=9488
This supports the narrower failure shape: Web UI transcript writes can succeed while Telegram outbound delivery for specific turns does not land during/near transport stall recovery.
4. Silent reply fallback produced confusing visible chatter in DM
The DM also showed synthetic no-op text such as:
Nothing to add right now.
This matches the behavior described in #70628: Telegram DM turns with no visible final response synthesize NO_REPLY, then direct silent-reply defaults rewrite it into visible filler text.
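The expected handling can be sketched as follows; the constant and function names are assumptions, not OpenClaw's real identifiers. A synthesized NO_REPLY for a DM turn should result in silence rather than visible filler.

```typescript
// Sketch: a NO_REPLY sentinel (or empty final text) yields null, meaning
// "send nothing", instead of being rewritten into filler like
// "Nothing to add right now."
const NO_REPLY = "NO_REPLY";

function renderDirectReply(finalText: string | undefined): string | null {
  if (!finalText || finalText.trim() === NO_REPLY) {
    return null; // stay silent instead of sending filler text
  }
  return finalText;
}
```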
5. Temporary model fallback became persistent session state
During the same window:
2026-05-01T14:22:10.462+08:00 [agent/embedded] embedded run failover decision: runId=74e2d196-3a4b-4f6d-a833-a9ea70329012 stage=assistant decision=fallback_model reason=timeout from=openai-codex/gpt-5.5 profile=sha256:9c018ec112cf
2026-05-01T14:23:16.417+08:00 [ws] ⇄ res ✓ sessions.patch 113ms conn=6b463e19…5bdf id=6b8d399f…6fd9
2026-05-01T14:24:07.998+08:00 [plugins] active-memory: agent=main session=agent:main:telegram:direct:<redacted> activeProvider=openai-codex activeModel=gpt-5.3-codex start timeoutMs=15000 queryChars=1568
2026-05-01T14:25:13.840+08:00 [ws] ⇄ res ✓ sessions.patch 177ms conn=6b463e19…5bdf id=a0a98941…3bdd
Before mitigation, session store contained persistent overrides that did not belong in long-lived state:
{
  "agent:main:main": {
    "model": "gpt-5.3-codex",
    "modelOverride": "gpt-5.3-codex"
  },
  "agent:uubot:telegram:group:<redacted>": {
    "modelOverride": "gemma4:31b"
  }
}
The gateway config default model was still openai-codex/gpt-5.5; only session state was polluted.
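One way to keep failover out of long-lived state is to resolve the model per run, so a fallback decision never writes to the session store. This is a hedged sketch with hypothetical names, not OpenClaw's session API.

```typescript
// Sketch: fallback wins for the in-flight run only; session.modelOverride
// changes only on an explicit user action, so temporary failover (as in
// the logs above) never pollutes persistent session state.
interface Session {
  model: string;
  modelOverride?: string;
}

function resolveRunModel(session: Session, fallback?: string): string {
  // Per-run precedence: fallback > explicit override > session default.
  return fallback ?? session.modelOverride ?? session.model;
}

function applyUserModelChange(session: Session, model: string): Session {
  // Only explicit user changes touch persistent session state.
  return { ...session, modelOverride: model };
}
```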
Expected behavior
- If the Telegram transport stalls, agent final replies should not be silently marked delivered unless a final Telegram sendMessage or successful preview edit actually landed.
- Web UI transcript success should not be treated as Telegram delivery success.
- With streaming.mode="partial", the preview/edit path should either:
  - reliably finalize a visible Telegram message, or
  - fall back to standard sendMessage, or
  - surface a delivery failure visibly and log it as undelivered.
- Temporary model fallback should not persist into session-level modelOverride unless the user explicitly changes the model.
- NO_REPLY should not be rewritten into visible filler text in Telegram DMs when the real intent is silence.
Actual behavior
- Agent runs completed and final assistant messages appeared in Web UI.
- Telegram did not receive the corresponding final reply for at least one group bot turn.
- No sendMessage ok line exists for the missing final group reply.
- DM silent/no-response behavior produced visible filler text like "Nothing to add right now."
- Session model overrides were polluted by fallback / Web UI session patch state.
Workaround applied locally
The following mitigations were applied and validated locally:
{
  "channels.telegram.streaming.mode": "off",
  "channels.telegram.accounts.*.streaming.mode": "off",
  "agents.defaults.silentReplyRewrite.direct": false
}
Also cleared polluted session fields:
- removed model=gpt-5.3-codex / modelOverride=gpt-5.3-codex from the Web UI main session
- removed modelOverride=gemma4:31b from the affected Telegram group bot session
openclaw config validate passed after the workaround.
Why this looks like a combined bug
This incident appears to combine three failure modes:
- Telegram long-polling/transport stall during normal operation on macOS.
- Telegram partial streaming / preview delivery state can produce a false delivered/retained state when transport status is ambiguous, leaving Web UI with the final transcript but Telegram without the final visible message.
- Fallback/model-selection state can leak into persistent session overrides.
The first and third symptoms amplify the second: the user sees the Web UI answer and a changed session model, but Telegram never receives the expected reply.