Closed
Labels
bug (Something isn't working), bug:behavior (Incorrect behavior without a crash)
Description
Bug type
Behavior bug (incorrect output/state without crash)
Summary
When an "LLM Request timed out" error occurs while using Model A, switching to Model B does not update the model used by the heartbeat task. The heartbeat continues to use Model A instead of the newly selected Model B.
Steps to reproduce
- Set the model to "gpt-5.2": openclaw models set openai/gpt-5.2.
- Perform tasks using gpt-5.2 until an "LLM Request timed out" error occurs.
- Switch the model to "minimax-portal/MiniMax-M2.5": openclaw models set minimax-portal/MiniMax-M2.5.
- Continue working; the system correctly uses MiniMax and functions normally.
- Wait for the automatic Heartbeat to trigger, then check the logs.
Expected behavior
The Heartbeat task should use "MiniMax-M2.5" to complete its work.
Actual behavior
The Heartbeat task still uses "gpt-5.2," resulting in another "LLM Request timed out" error.
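The symptom matches a heartbeat task that snapshots the model name once (at startup or when the task is scheduled) instead of reading the live configuration on every tick. A minimal sketch of that failure mode, using hypothetical names (`Config`, `Heartbeat`, `tickStale`, `tickLive` are illustrative, not OpenClaw internals):

```typescript
// Hypothetical illustration of the suspected bug: the heartbeat captures the
// model at construction time, so a later model switch never reaches it.

type Config = { model: string };

class Heartbeat {
  private readonly snapshot: string;

  constructor(private readonly config: Config) {
    // Captured once -- goes stale after the user switches models.
    this.snapshot = config.model;
  }

  // Buggy behavior: uses the value captured at construction.
  tickStale(): string {
    return this.snapshot;
  }

  // Expected behavior: reads the current config on every tick.
  tickLive(): string {
    return this.config.model;
  }
}

const config: Config = { model: "openai/gpt-5.2" };
const heartbeat = new Heartbeat(config);

// User runs: openclaw models set minimax-portal/MiniMax-M2.5
config.model = "minimax-portal/MiniMax-M2.5";

console.log(heartbeat.tickStale()); // still "openai/gpt-5.2" -- the reported bug
console.log(heartbeat.tickLive());  // "minimax-portal/MiniMax-M2.5" -- expected
```

If this is the cause, the fix would be for the heartbeat to resolve the model from the current configuration at each tick rather than holding a copy taken when the task was created.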
OpenClaw version
2026.2.26
Operating system
macOS 26.3
Install method
npm global
Logs, screenshots, and evidence
Impact and severity
No response
Additional information
No response