Description
When the primary LLM API hits a rate limit, OpenClaw retries with fallback models. However, each retry attempt sends a new partial/streaming message to Telegram, so the user is spammed with multiple duplicate or partial messages.
Steps to Reproduce
- Configure OpenClaw with Telegram channel enabled
- Configure multiple models with fallback chain (e.g., Gemini 3.0 Flash as primary, trinity-large-preview as fallback)
- Trigger an API rate limit on the primary model (e.g., by sending multiple requests quickly)
- Observe Telegram receiving multiple partial/duplicate messages
Expected Behavior
- On API rate limit, one of the following should happen:
  - Do not send any message until a complete response is available
  - Send a single error message explaining the rate limit
  - Retry silently without sending partial messages to Telegram
Actual Behavior
- Each retry/fallback attempt sends a new partial message to Telegram
- User receives multiple broken/incomplete messages
- Session can get stuck with abortedLastRun: true and pending messages
Environment
- OpenClaw version: 2026.2.26
- Node version: 22
- OS: Debian 12 (bookworm)
- Channel: Telegram
Relevant Logs
[telegram] sendMessage ok chat=101331937 message=1581
[telegram] sendMessage ok chat=101331937 message=1583
[diagnostic] lane task error: lane=session:agent:main:main durationMs=21963 error="FailoverError: ⚠️ API rate limit reached. Please try again later."
Configuration (redacted)
{
  "messaging": {
    "telegram": {
      "enabled": true,
      "streaming": "partial"
    }
  },
  "models": {
    "fallback": {
      "chain": ["google/gemini-3-flash-preview", "openrouter/arcee-ai/trinity-large-preview:free"]
    }
  }
}
Additional Context
Suggested Fix
- Buffer partial messages until response is complete before sending to Telegram
- Implement debounce/throttle for Telegram messages during fallback chain
- Add a flag to disable streaming output specifically for messaging channels
- Clear pending messages on session abort/reset
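The first and last points above could be sketched roughly as follows. This is a minimal illustration, not OpenClaw's actual API: the StreamBuffer class and its method names are hypothetical, standing in for wherever OpenClaw assembles streaming chunks before handing them to the Telegram channel.

```typescript
// Hypothetical sketch of a buffer-until-complete strategy.
// "StreamBuffer", "push", "complete", and "abort" are illustrative names,
// not actual OpenClaw identifiers.
type SendFn = (text: string) => void;

class StreamBuffer {
  private chunks: string[] = [];

  constructor(private send: SendFn) {}

  // Accumulate streaming chunks instead of forwarding each one to Telegram.
  push(chunk: string): void {
    this.chunks.push(chunk);
  }

  // Send exactly one message, only once the response is complete.
  complete(): void {
    if (this.chunks.length > 0) {
      this.send(this.chunks.join(""));
      this.chunks = [];
    }
  }

  // On rate limit, fallback retry, or session abort: drop buffered
  // partials silently so the next attempt starts clean.
  abort(): void {
    this.chunks = [];
  }
}
```

With this shape, a rate-limited primary attempt calls abort() before the fallback model starts, so the user only ever sees the single complete message from whichever attempt succeeds.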