Description
Bug type
Behavior bug (incorrect output/state without crash)
Summary
Setup:
Mac with Docker
OpenClaw running in Docker container
Ollama running on Mac via brew services
Model: llama3.2:3b
Problem:
When a message is sent via Telegram, the embedded agent fails immediately (in under 10 ms) with `error=fetch failed`.
What works:
- `curl http://host.docker.internal:11434/api/tags` from inside the container ✅
- `node -e "fetch('http://192.168.65.254:11434/api/chat'...)"` from inside the container ✅
- Ollama responds correctly to all direct requests ✅
What doesn't work:
OpenClaw embedded agent always returns fetch failed immediately
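For comparison, this is the kind of standalone probe the node check above performs. It is a sketch: the exact request shape OpenClaw's agent sends is an assumption; the `baseUrl` and model come from the config in this report, and the `/api/chat` endpoint is Ollama's standard chat API.

```javascript
// Probe the same Ollama endpoint the agent is configured to use.
// Assumptions (not from OpenClaw source): the agent POSTs to /api/chat;
// baseUrl and model are taken from the config in this report.
const baseUrl = "http://192.168.65.254:11434";
const url = new URL("/api/chat", baseUrl).href;

const body = {
  model: "llama3.2:3b",
  messages: [{ role: "user", content: "ping" }],
  stream: false,
};

async function probe() {
  try {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    console.log("status:", res.status);
    console.log(await res.json());
  } catch (err) {
    // Node's fetch (undici) reports network failures as "fetch failed";
    // err.cause carries the underlying reason (ECONNREFUSED, EHOSTUNREACH, ...)
    console.error(err.message, err.cause);
  }
}

// Set RUN_PROBE=1 to actually hit the network from inside the container.
if (process.env.RUN_PROBE) probe();
```

Running this with `RUN_PROBE=1 node probe.js` inside the container succeeds, while the agent's internal fetch to the same address fails, which suggests the failure is in how the agent constructs or dispatches its request rather than in container networking.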
Config:
```json
"models": {
  "providers": {
    "ollama": {
      "baseUrl": "http://192.168.65.254:11434",
      "apiKey": "ollama-local",
      "api": "ollama"
    }
  }
}
```
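Since `curl` to `host.docker.internal` succeeds from inside the container, a diagnostic variant of the provider config (same keys as above, only the host changed — whether OpenClaw resolves this alias differently is an open question) may help isolate whether the raw gateway IP is the problem:

```json
"models": {
  "providers": {
    "ollama": {
      "baseUrl": "http://host.docker.internal:11434",
      "apiKey": "ollama-local",
      "api": "ollama"
    }
  }
}
```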
Steps to reproduce
1. On a Mac (M4), install and start Ollama via `brew services` and pull `llama3.2:3b`.
2. Run OpenClaw in a Docker container with the Ollama provider config shown above.
3. Send a message to the bot via Telegram.
Expected behavior
The agent responds to messages sent via Telegram using the local Ollama model.
Actual behavior
The embedded agent fails immediately (in under 10 ms) with a "fetch failed" error, and no response is sent.
OpenClaw version
local (built from source via Docker)
Operating system
macOS, Apple M4, Docker Desktop
Install method
No response
Logs, screenshots, and evidence
Impact and severity
No response
Additional information
No response