feat(ollama): optimize local LLM support with auto-discovery and timeouts #7278
alltomatos wants to merge 3 commits into openclaw:main
Conversation
Additional Comments (2)
Prompt To Fix With AI — This is a comment left during a code review.
Path: src/agents/models-config.providers.ts
Line: 107:110
Comment:
[P1] Auto-discovery is intended to be silent unless explicitly configured, but the “no models found” path always logs `console.warn("No Ollama models found on local instance")` (`src/agents/models-config.providers.ts:107-109`). This still produces console noise for users without Ollama running/loaded (and contradicts the silent-failure behavior in the catch block). Consider making this warning conditional on explicit configuration too.
How can I resolve this? If you propose a fix, please make it concise.
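One concise way to resolve this is to gate the warning on the same "explicitly configured" flag that governs the rest of the discovery path. The sketch below is illustrative only: `explicitlyConfigured` and `maybeWarnNoModels` are hypothetical names, not identifiers from the PR.

```typescript
// Hypothetical helper: warn about an empty model list only when the user
// explicitly configured the Ollama provider. During implicit auto-discovery
// an empty result is a normal state and should stay silent, matching the
// silent-failure behavior of the catch block.
function maybeWarnNoModels(explicitlyConfigured: boolean): string | null {
  if (!explicitlyConfigured) {
    return null; // implicit discovery: no console noise
  }
  const msg = "No Ollama models found on local instance";
  console.warn(msg);
  return msg;
}
```

The discovery code would then call this helper instead of calling `console.warn` unconditionally on the empty-models path.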
Prompt To Fix With AI — This is a comment left during a code review.
Path: docs/providers/ollama.md
Line: 176:195
Comment:
[P1] Docs are now inconsistent with the implementation: troubleshooting still says you must set `OLLAMA_API_KEY` and that only tool-capable models are auto-discovered (`docs/providers/ollama.md:178-195`), but the code now auto-discovers without a key and does not filter by tool support. This will send users down the wrong path when Ollama isn’t detected or when models don’t show up.
How can I resolve this? If you propose a fix, please make it concise.
Force-pushed b652f30 to 5d1456c
Force-pushed bfc1ccb to f92900f
Superseded by #29201 for the minimal mergeable Ollama autodiscovery fix path. Thanks for the groundwork in this PR.
Summary
Optimizes the Ollama provider integration to better support local LLM workflows without requiring manual configuration or API keys.
Changes
- Auto-discovery against the default local instance (127.0.0.1:11434) without requiring OLLAMA_API_KEY.
- OLLAMA_HOST and OLLAMA_BASE_URL environment variables to override defaults.

Testing
- Verified OLLAMA_HOST environment variable overrides.

Checklist
Greptile Overview
Greptile Summary
This PR improves the Ollama provider experience by enabling implicit provider auto-discovery against a local Ollama instance (defaulting to 127.0.0.1:11434), adding env overrides (OLLAMA_HOST, OLLAMA_BASE_URL), increasing discovery timeouts, and avoiding duplicate discovery calls when building the provider config. It also adds unit tests for the new discovery behavior and updates docs and docker examples to better support local-model workflows.

Key interactions:
src/agents/models-config.providers.ts now probes Ollama during resolveImplicitProviders() and conditionally adds the ollama provider when either an API key is present or models are discovered; docs and docker config changes aim to make local Ollama use easier out of the box.

Confidence Score: 3/5
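The probe-and-override flow described above can be sketched as follows. This is a minimal illustration, not the PR's actual implementation: `resolveOllamaBaseUrl` and `discoverOllamaModels` are hypothetical names, and the 2-second timeout is an assumed value. It does rely on Ollama's real `GET /api/tags` endpoint, which lists locally available models.

```typescript
// Hypothetical sketch of Ollama auto-discovery with env overrides.
const DEFAULT_OLLAMA_BASE_URL = "http://127.0.0.1:11434";

function resolveOllamaBaseUrl(
  env: Record<string, string | undefined> = process.env,
): string {
  // Explicit full URL wins, then host override, then the local default.
  if (env.OLLAMA_BASE_URL) return env.OLLAMA_BASE_URL;
  if (env.OLLAMA_HOST) {
    // OLLAMA_HOST may be "host:port" or already a full URL.
    return env.OLLAMA_HOST.startsWith("http")
      ? env.OLLAMA_HOST
      : `http://${env.OLLAMA_HOST}`;
  }
  return DEFAULT_OLLAMA_BASE_URL;
}

// Probe the local instance once; any failure means "no Ollama here",
// not an error, so discovery stays silent when nothing is running.
async function discoverOllamaModels(
  baseUrl: string = resolveOllamaBaseUrl(),
): Promise<string[]> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`, {
      signal: AbortSignal.timeout(2000), // assumed discovery timeout
    });
    if (!res.ok) return [];
    const body = (await res.json()) as { models?: { name: string }[] };
    return (body.models ?? []).map((m) => m.name);
  } catch {
    return []; // Ollama not reachable: a normal state for most users
  }
}
```

Under this shape, resolveImplicitProviders() would call the discovery function once, reuse the result when building the provider config (avoiding the duplicate call the PR removes), and add the ollama provider only when an API key is set or the returned list is non-empty.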