Bug
When using Ollama models with native thinking support (e.g. qwen3.5:9b), OpenClaw does not send think: false in the Ollama API request even when thinking: "off" is configured.
Without think: false, the model puts its entire response in the thinking field instead of content. OpenClaw drops thinking blocks, so the user sees an empty response.
Version
- OpenClaw: 2026.3.13
- Ollama: latest (March 2026)
- Model: qwen3.5:9b (any thinking-capable model)
Repro
- Set thinking: "off" in model params
- Send message that triggers tool call
- Tool returns result
- Model response has content: "" and thinking: "actual answer"
- User sees no response
Verified
# Without think:false → empty content
curl http://localhost:11434/api/chat -d '{"model":"qwen3.5:9b","messages":[{"role":"user","content":"hi"}],"stream":true}'
# → chunks arrive with content:"" and thinking:"answer"

# With think:false → correct
curl http://localhost:11434/api/chat -d '{"model":"qwen3.5:9b","messages":[{"role":"user","content":"hi"}],"stream":true,"think":false}'
# → chunks arrive with content:"answer" and thinking:null
Fix
In createOllamaStreamFn(), set think on the request body based on thinkingLevel, so that "off" sends an explicit think: false instead of omitting the field:
const ollamaThink = !!(options?.thinkingLevel && options.thinkingLevel !== "off");
body.think = ollamaThink;
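A minimal sketch of the mapping, with the option/body shapes assumed for illustration (withThink and OllamaChatBody are hypothetical names, not OpenClaw internals): the key point is that think is always set, so thinking: "off" reaches Ollama as think: false rather than being dropped.

```typescript
// Hypothetical request-body shape for Ollama's /api/chat endpoint.
interface OllamaChatBody {
  model: string;
  messages: { role: string; content: string }[];
  stream: boolean;
  think?: boolean;
}

// Attach the think flag: any level other than "off" enables thinking,
// "off" (or no level at all) explicitly disables it with think: false.
function withThink(body: OllamaChatBody, thinkingLevel?: string): OllamaChatBody {
  return { ...body, think: !!(thinkingLevel && thinkingLevel !== "off") };
}

const body = withThink(
  { model: "qwen3.5:9b", messages: [{ role: "user", content: "hi" }], stream: true },
  "off",
);
// body.think === false → the answer lands in content, as verified above
```

Always emitting the flag (instead of only when thinking is enabled) is what matters here: leaving think undefined lets a thinking-capable model default to thinking, which is exactly the reported failure mode.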