Environment
- OpenClaw versions tested (all in the same VM): 2026.5.3-1, 2026.5.4, 2026.5.6
- Node v22.22.2
- Ubuntu 24.04 LTS, e2-medium GCP VM (2 vCPU, 4GB), us-central1
- Channel: @openclaw/msteams (Bot Framework push, Tailscale Funnel)
- Provider/model: google/gemini-3-flash-preview
- Workspace size: trivial (~5KB SOUL.md, 3 chunks in the active-memory store)
- Plugin loadout (after plugins.allow whitelist): google, msteams, active-memory, browser, web-readability, tavily
Symptom
On 2026.5.4 and 2026.5.6, every msteams DM goes through this path:
17:40:28 received message
17:40:29 dispatching to agent
17:42:28 stalled session: ... state=processing age=120s ... lastProgress=model_call:started
17:42:35 [llm-idle-timeout] google/gemini-3-flash-preview produced no reply before the idle watchdog; retrying same model
17:44:35 idle-timeout fires again → user sees "The model did not produce a response before the model idle timeout"
model.completed event recorded for the first attempt:
{
  "type": "model.completed",
  "data": {
    "aborted": true,
    "timedOut": true,
    "idleTimedOut": true,
    "promptError": "LLM idle timeout (240s): no response from model | LLM idle timeout (240s): no response from model",
    "promptErrorSource": "prompt",
    "assistantTexts": []
  }
}
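For triaging log dumps, a filter like the sketch below picks out turns that died on this path. It assumes JSONL gateway event dumps with the exact field names shown in the event above; `findIdleTimeouts` is a hypothetical helper, not an OpenClaw API.

```javascript
// Sketch: scan a JSONL event dump and flag turns that ended in the
// idle-timeout path (shape taken from the model.completed event above).
function findIdleTimeouts(jsonlText) {
  return jsonlText
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line))
    .filter(
      (ev) =>
        ev.type === "model.completed" &&
        ev.data?.idleTimedOut === true &&
        (ev.data.assistantTexts ?? []).length === 0
    );
}

// Example with the event from this report:
const sample = JSON.stringify({
  type: "model.completed",
  data: {
    aborted: true,
    timedOut: true,
    idleTimedOut: true,
    promptError: "LLM idle timeout (240s): no response from model",
    promptErrorSource: "prompt",
    assistantTexts: [],
  },
});
console.log(findIdleTimeouts(sample).length); // → 1
```

Every affected turn in our dumps matched this predicate: aborted, idle-timed-out, and zero assistant texts.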
The model never emits a single token from the gateway's perspective.
Direct curl to https://generativelanguage.googleapis.com/v1beta/models/gemini-3-flash-preview:generateContent from the same VM, same API key, same prompt: HTTP 200 in 1–3.5s. Network and API are fine.
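The curl check above can be scripted as a small latency probe. This is a sketch, not part of OpenClaw: the endpoint and request body follow the public generateContent REST shape, while `GEMINI_API_KEY` and the injectable `fetchImpl` are illustrative choices.

```javascript
// Sketch: time a direct generateContent call from the same VM, bypassing the
// gateway entirely. fetchImpl is injectable so the probe can be stubbed.
async function probeGemini(
  prompt,
  { fetchImpl = fetch, apiKey = process.env.GEMINI_API_KEY } = {}
) {
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-3-flash-preview:generateContent";
  const started = Date.now();
  const res = await fetchImpl(url, {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-goog-api-key": apiKey },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
  });
  return { status: res.status, elapsedMs: Date.now() - started };
}

// On the VM (requires GEMINI_API_KEY): expect status 200 within a few seconds.
// probeGemini("hello").then(console.log);
```

If this returns 200 in single-digit seconds while the gateway still idle-times-out, the fault is inside the gateway process, not the network or the API.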
What we tried (none fixed it on 5.4/5.6)
- thinkingDefault: low → unchanged
- models.providers.google.timeoutSeconds: 240 (with the required baseUrl + models[]) → idle-timeout simply fires later, at 240s
- plugins.allow whitelist (69 plugins → 7) → unchanged
- openclaw doctor --fix (24 unused skills disabled) → unchanged
- NODE_COMPILE_CACHE + OPENCLAW_NO_RESPAWN=1 env → unchanged
- Hard /reset of the affected user's session (trajectory archived, sessionKey cleared from sessions.json) → fresh sessionId STILL stalls on first turn
- Stop+start gateway (full, not graceful) several times → unchanged
- Confirmed the Gemini API direct call works in 1–3.5s from the same VM/key
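For concreteness, the timeoutSeconds attempt above was shaped roughly like this. The nesting is inferred from the dotted key in this report and the surrounding schema is assumed; baseUrl and models[] had to be supplied alongside the timeout for the config to validate.

```json
{
  "models": {
    "providers": {
      "google": {
        "baseUrl": "https://generativelanguage.googleapis.com/v1beta",
        "timeoutSeconds": 240,
        "models": ["gemini-3-flash-preview"]
      }
    }
  }
}
```

This only moved the failure point: the watchdog waited 240s instead of 120s before giving up with the same empty result.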
What fixed it
Downgrade to 2026.5.3-1. Same config, same workspace, same plugins. Turn time dropped from 240s+ ending in idle-timeout to ~39s end-to-end, with dispatch completing normally.
Notes
- A second deployment (googlechat channel, OpenClaw 2026.4.23) is unaffected; the issue appears specific to the msteams + 5.4+ combination.
Repro
- Provision a Linux VM with the msteams channel + Bot Framework + Funnel.
- Install openclaw@2026.5.4 or later, configure channels.msteams.enabled: true, set MSTEAMS_APP_ID/PASSWORD/TENANT_ID.
- Send a DM from a paired Teams user.
- Observe: model.completed with idleTimedOut: true, assistantTexts: [] after ~120s/240s; the user sees the standard idle-timeout error.
- npm install -g openclaw@2026.5.3-1 and restart the gateway → next DM responds in ~30–40s end-to-end.
Happy to share full trajectory dumps and gateway logs (sanitized) on request.