
feat(ollama): optimize local LLM support with auto-discovery and timeouts#7278

Closed
alltomatos wants to merge 3 commits into openclaw:main from alltomatos:feat/ollama-local-support

Conversation


@alltomatos commented Feb 2, 2026

Summary

Optimizes the Ollama provider integration to better support local LLM workflows without requiring manual configuration or API keys.

Changes

  • Auto-discovery: Automatically detects a local Ollama instance running on the default port (11434) without requiring OLLAMA_API_KEY.
  • Performance: Eliminated double network calls during model discovery.
  • Reliability: Increased discovery timeout from 5s to 10s to accommodate slower local model loading.
  • Configuration: Added support for OLLAMA_HOST and OLLAMA_BASE_URL environment variables to override defaults.
  • UX: Added silent failure mode for unconfigured instances to reduce console noise.
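
Taken together, the flow described above might look like the following TypeScript sketch. It is illustrative only: `resolveOllamaBaseUrl` and `discoverOllamaModels` are hypothetical names, not the actual openclaw implementation; the `/api/tags` endpoint comes from Ollama's public API, and the env variable names and 10s timeout come from the PR description.

```typescript
// Hypothetical sketch of the auto-discovery flow; not the actual openclaw code.
const DEFAULT_OLLAMA_URL = "http://127.0.0.1:11434";

// Precedence assumed here: OLLAMA_BASE_URL, then OLLAMA_HOST, then the default port.
function resolveOllamaBaseUrl(env: Record<string, string | undefined>): string {
  if (env.OLLAMA_BASE_URL) return env.OLLAMA_BASE_URL.replace(/\/$/, "");
  if (env.OLLAMA_HOST) {
    const host = env.OLLAMA_HOST.replace(/\/$/, "");
    return host.startsWith("http") ? host : `http://${host}`;
  }
  return DEFAULT_OLLAMA_URL;
}

// Probe /api/tags exactly once (avoiding the double network call) with a
// 10-second timeout; an unreachable instance yields an empty model list.
async function discoverOllamaModels(baseUrl: string): Promise<string[]> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`, {
      signal: AbortSignal.timeout(10_000),
    });
    if (!res.ok) return [];
    const body = (await res.json()) as { models?: { name: string }[] };
    return (body.models ?? []).map((m) => m.name);
  } catch {
    return []; // silent failure for unconfigured instances
  }
}
```

No API key is involved: an empty result simply means the provider is skipped, matching the silent-failure behavior described above.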

Testing

  • Added unit tests for auto-discovery logic (with and without API keys).
  • Validated OLLAMA_HOST environment variable overrides.
  • Verified that existing tests pass.
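
The branching behavior those tests exercise reduces to a small predicate. The sketch below is a hypothetical stand-in, not the real resolveImplicitProviders() logic; it only encodes the rule stated later in the review summary (the provider is added when an API key is present or models were discovered):

```typescript
// Hypothetical predicate: the ollama provider is registered when either an
// API key is explicitly configured or auto-discovery found local models.
function shouldAddOllamaProvider(opts: {
  apiKey?: string;
  discoveredModels: string[];
}): boolean {
  return Boolean(opts.apiKey) || opts.discoveredModels.length > 0;
}

// The "with and without API keys" cases from the test list above:
shouldAddOllamaProvider({ discoveredModels: ["llama3.2"] }); // true: no key needed
shouldAddOllamaProvider({ discoveredModels: [] });           // false: skipped silently
```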

Checklist

  • Code matches the existing style.
  • Tests added/updated.
  • Documentation updated.

Greptile Overview

Greptile Summary

This PR improves the Ollama provider experience by enabling implicit provider auto-discovery against a local Ollama instance (defaulting to 127.0.0.1:11434), adding env overrides (OLLAMA_HOST, OLLAMA_BASE_URL), increasing discovery timeouts, and avoiding duplicate discovery calls when building the provider config. It also adds unit tests for the new discovery behavior and updates docs and docker examples to better support local-model workflows.

Key interactions: src/agents/models-config.providers.ts now probes Ollama during resolveImplicitProviders() and conditionally adds the ollama provider when either an API key is present or models are discovered; docs and docker config changes aim to make local Ollama use easier out of the box.

Confidence Score: 3/5

  • This PR is likely safe to merge, but it has a few correctness/documentation mismatches and a misleading runtime version message that should be fixed first.
  • Core Ollama changes are straightforward and covered by new tests, but there are a couple of user-facing correctness issues (Node version guard message mismatch, docs contradicting new behavior) and some brittleness/noise in auto-discovery and tests that could lead to confusing behavior or flaky CI.
  • Files flagged: src/infra/runtime-guard.ts, src/agents/models-config.providers.ts, docs/providers/ollama.md, src/agents/models-config.providers.ollama.test.ts


@openclaw-barnacle bot added the docs (Improvements or additions to documentation), docker (Docker and sandbox tooling), and agents (Agent runtime and tooling) labels on Feb 2, 2026
@greptile-apps bot left a comment


5 files reviewed, 7 comments



greptile-apps bot commented Feb 2, 2026

Additional Comments (2)

src/agents/models-config.providers.ts
[P1] Auto-discovery is intended to be silent unless explicitly configured, but the “no models found” path always logs console.warn("No Ollama models found on local instance") (src/agents/models-config.providers.ts:107-109). This still produces console noise for users without Ollama running/loaded (and contradicts the silent-failure behavior in the catch block). Consider making this warning conditional on explicit configuration too.

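
One way to apply the suggested fix is to gate the warning on an explicit-configuration flag. This is a minimal sketch under that assumption; `maybeWarnNoModels` and `explicitlyConfigured` are hypothetical names, not the actual code at models-config.providers.ts:107-109:

```typescript
// Hypothetical helper: warn about an empty model list only when the user
// explicitly configured Ollama (e.g. set OLLAMA_HOST or OLLAMA_API_KEY),
// keeping implicit auto-discovery silent. Returns true if it warned.
function maybeWarnNoModels(opts: {
  explicitlyConfigured: boolean;
  models: string[];
  warn?: (msg: string) => void;
}): boolean {
  const warn = opts.warn ?? console.warn;
  if (opts.models.length === 0 && opts.explicitlyConfigured) {
    warn("No Ollama models found on local instance");
    return true;
  }
  return false;
}
```

This mirrors the silent-failure behavior already present in the catch block, so the unconfigured path produces no console noise.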

docs/providers/ollama.md
[P1] Docs are now inconsistent with the implementation: troubleshooting still says you must set OLLAMA_API_KEY and that only tool-capable models are auto-discovered (docs/providers/ollama.md:178-195), but the code now auto-discovers without a key and does not filter by tool support. This will send users down the wrong path when Ollama isn’t detected or when models don’t show up.


@alltomatos force-pushed the feat/ollama-local-support branch from b652f30 to 5d1456c on February 2, 2026 at 17:17
@vincentkoc (Member) commented:

Superseded by #29201 for the minimal mergeable Ollama autodiscovery fix path. Thanks for the groundwork in this PR.

@vincentkoc closed this Feb 27, 2026
