# Feature request: Add Ollama as a local model provider in QuickStart
## Context
I’m trying to set up OpenClaw locally on a Linux notebook.
The current QuickStart flow only allows selecting remote providers (OpenAI, Anthropic, etc.), which blocks a local-first setup.
## Problem
- No option to select a local model provider
- No Ollama integration in the onboarding wizard
- Difficult or unclear local setup on Linux
As a result, I haven’t been able to complete a fully local installation using the wizard.
## Requested feature
Add Ollama as a first-class model provider in the QuickStart flow.
## Suggested behavior
- Allow selecting `ollama` as a provider during onboarding
- Automatically detect a running Ollama instance at `localhost:11434` (if available), as sketched below
- Provide basic local model selection (e.g. `llama3`, `mistral`, `codellama`)
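For what it's worth, detection could piggyback on Ollama's standard HTTP API: `GET /api/tags` lists the models pulled into a local instance, so a single request doubles as both a liveness probe and a model listing. Here is a minimal TypeScript sketch of what the wizard's probe might look like; `detectOllama`, `main`, and the surrounding names are illustrative, not existing OpenClaw code:

```ts
// Minimal sketch (not OpenClaw's actual code): probe a local Ollama
// instance and list its pulled models via Ollama's /api/tags endpoint.

interface OllamaTagsResponse {
  models: { name: string }[];
}

// Returns the local model names if Ollama responds, or null otherwise.
async function detectOllama(
  baseUrl = "http://localhost:11434",
): Promise<string[] | null> {
  try {
    // GET /api/tags lists the models available in the local instance.
    const res = await fetch(`${baseUrl}/api/tags`, {
      signal: AbortSignal.timeout(1000),
    });
    if (!res.ok) return null;
    const data = (await res.json()) as OllamaTagsResponse;
    return data.models.map((m) => m.name);
  } catch {
    // Connection refused or timeout: no Ollama running at this address.
    return null;
  }
}

// Wizard usage: only offer "ollama" as a provider when detection succeeds.
async function main() {
  const models = await detectOllama();
  if (models) {
    console.log("Ollama detected. Local models:", models.join(", "));
  } else {
    console.log("No local Ollama instance found; hide the ollama option.");
  }
}
main();
```

The short timeout matters: onboarding shouldn't stall for users who don't have Ollama installed, so the probe fails fast and the wizard simply omits the option.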
## Expected benefit
- Improved Linux support
- Fully local, offline-capable setup
- Better alignment with a local-first workflow
Thanks for the great project 🚀