fix(agents): skip Ollama discovery when explicit models configured#28793
lailoo wants to merge 2 commits into openclaw:main
Conversation
Greptile Summary: Correctly skips Ollama model discovery when explicit models are configured in `models.providers.ollama`.
Key Changes:
Impact:
Confidence Score: 5/5
Last reviewed commit: 0e4a046 |
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 8d9fc29f1f
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```ts
if (hasExplicitModels) {
  // Use explicit models as-is, skip discovery
  providers.ollama = { ...explicitOllama, apiKey: ollamaKey };
```
Preserve Ollama API default when skipping discovery
When explicitProviders.ollama.models is non-empty, this branch copies the explicit provider as-is and no longer injects api: "ollama". Because api is optional in config, valid explicit configs can omit it; with this change those configs end up with no provider/model API, so downstream logic like run/attempt.ts (which only enables native Ollama transport when model.api === "ollama") stops using the Ollama-native path. Before this commit, buildOllamaProvider() supplied the default API even for explicit-model merges.
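One way to keep that default while still honoring explicit overrides is spread ordering: place the default before the spread so a config that omits `api` still gets `"ollama"`, while an explicit `api` continues to win. A minimal sketch (the `OllamaProvider` type here is a simplified assumption, not the real config schema):

```typescript
// Simplified stand-in for the provider config shape (assumption).
type OllamaProvider = { api?: string; baseUrl?: string; models?: string[] };

// The default is listed first, so `...explicit` overrides it only
// when the explicit config actually sets `api`.
function withApiDefault(explicit: OllamaProvider, apiKey: string) {
  return { api: "ollama", ...explicit, apiKey };
}

// Config omitting `api` keeps the native-transport default:
const a = withApiDefault({ models: ["llama3"] }, "key");
// a.api === "ollama"

// Explicitly set `api` is preserved:
const b = withApiDefault({ api: "openai-compat", models: ["m"] }, "key");
// b.api === "openai-compat"
```

One caveat with this pattern: a config that sets `api: undefined` as a literal property would still clobber the default, since spread copies present-but-undefined keys.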
I reproduced the failing check lane and prepared a minimal fix for the TypeScript typing error in the new Ollama test (input: string[] vs …).
Correction / clean summary: I reproduced the failing check. Support PR: … If helpful, I can also fold this into your branch directly.
@Kansodata Thanks for catching the typing issue and preparing the fix! That would be great — please go ahead and fold it into my branch directly. Appreciate the help 🙏
nikolasdehor left a comment
Correct optimization — skipping unnecessary Ollama discovery when explicit models are configured eliminates the 5s timeout. Type fix from #28827 is needed before merge.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: fc859b223c
```ts
if (hasExplicitModels) {
  // Use explicit models as-is, skip discovery
  providers.ollama = { api: "ollama", ...explicitOllama, apiKey: ollamaKey };
} else {
```
Preserve default Ollama base URL when skipping discovery
When explicit Ollama models are present, this branch now copies explicitOllama directly and skips buildOllamaProvider(), so configs that intentionally omit models.providers.ollama.baseUrl no longer inherit the previous default http://127.0.0.1:11434. That changes runtime metadata for those models (their baseUrl becomes unset), and downstream local-model logic like isLocalBaseUrl() in src/commands/models/shared.ts will classify them as non-local, causing openclaw models list --local filters to drop Ollama entries that previously appeared.
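To see why the unset `baseUrl` matters downstream, here is a hypothetical re-creation of an `isLocalBaseUrl()`-style check (the real helper lives in `src/commands/models/shared.ts`; this sketch only illustrates the classification behavior the comment describes):

```typescript
// Hypothetical sketch of a local-host check like isLocalBaseUrl().
// An unset baseUrl is classified as non-local, which is exactly why
// dropping the http://127.0.0.1:11434 default changes `--local` filtering.
function isLocalBaseUrl(baseUrl?: string): boolean {
  if (!baseUrl) return false; // unset => treated as non-local
  try {
    const host = new URL(baseUrl).hostname;
    return host === "localhost" || host === "127.0.0.1";
  } catch {
    return false; // malformed URL => non-local
  }
}

// With the default applied, Ollama entries stay "local":
// isLocalBaseUrl("http://127.0.0.1:11434") === true
// With baseUrl left unset (the regression described above):
// isLocalBaseUrl(undefined) === false
```

Under this reading, preserving the default (e.g. `{ api: "ollama", baseUrl: DEFAULT_OLLAMA_BASE_URL, ...explicitOllama, apiKey: ollamaKey }`, with the explicit spread last so user values still win) would keep `openclaw models list --local` behavior unchanged.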
Thanks for the implementation here. I consolidated the active merge path into #28827 (rebased on current main). I'm closing this PR as superseded by #28827. If there is any behavior gap, say the word and we can reopen/re-check quickly.
Summary

Skip Ollama model discovery when explicit models are already configured in `models.providers.ollama`, preventing unnecessary 5s timeout delays at startup for remote Ollama hosts.

Problem

When users configure explicit models in `models.providers.ollama` (e.g. for a remote Ollama host), `resolveImplicitProviders` still calls `discoverOllamaModels()`, which makes an HTTP request to the Ollama API with a 5s timeout. For remote hosts that may be unreachable at startup, this adds a guaranteed 5s delay. The docs state that explicit config should skip auto-discovery, but the code didn't honor this.

Changes

- `src/agents/models-config.providers.ts`: Check if `explicitProviders.ollama.models` is non-empty before running discovery. When explicit models exist, use them directly instead of calling `discoverOllamaModels()`.
- `src/agents/models-config.providers.ollama.test.ts`: Add a regression test verifying that explicit models are preserved and discovery is skipped.

Test plan

- Added a unit test that sets `explicitProviders.ollama.models` and asserts the models are returned as-is without discovery.
- `pnpm format:check` and `pnpm lint` pass.

Effect on User Experience

Users with explicit Ollama model configs (especially remote hosts) no longer experience a 5s startup delay from unnecessary model discovery. No behavior change for users relying on auto-discovery (no explicit models configured).
Closes #28762
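The skip-discovery resolution described in the PR body can be sketched roughly as follows. This is a minimal illustration, not the real `resolveImplicitProviders` code: the type, function name, and the injected `discover` callback (standing in for `discoverOllamaModels()` and its 5s timeout) are all assumptions.

```typescript
// Simplified provider shape (assumption, not the real schema).
type OllamaProvider = { api?: string; models?: string[]; apiKey?: string };

// Resolve the Ollama provider: use explicit models when present,
// otherwise fall back to (potentially slow) network discovery.
async function resolveOllama(
  explicit: OllamaProvider | undefined,
  ollamaKey: string,
  discover: () => Promise<string[]>, // stand-in for discoverOllamaModels()
): Promise<OllamaProvider> {
  const hasExplicitModels = (explicit?.models?.length ?? 0) > 0;
  if (hasExplicitModels) {
    // Explicit models configured: use them as-is, no network round-trip.
    return { api: "ollama", ...explicit, apiKey: ollamaKey };
  }
  // No explicit models: run discovery (may take up to the timeout).
  return { api: "ollama", models: await discover(), apiKey: ollamaKey };
}
```

The key property, and what the regression test asserts, is that `discover` is never invoked when explicit models exist, so an unreachable remote host no longer costs 5s at startup.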