fix(agents): skip Ollama discovery when explicit models configured #28793

Closed

lailoo wants to merge 2 commits into openclaw:main from lailoo:fix/ollama-discovery-timeout-28762

Conversation

@lailoo
Contributor

lailoo commented Feb 27, 2026

Summary

Skip Ollama model discovery when explicit models are already configured in models.providers.ollama, preventing unnecessary 5s timeout delays at startup for remote Ollama hosts.

Problem

When users configure explicit models in models.providers.ollama (e.g. for a remote Ollama host), resolveImplicitProviders still calls discoverOllamaModels(), which makes an HTTP request to the Ollama API with a 5s timeout. When the remote host is unreachable at startup, that request hangs until the timeout fires, delaying startup by the full 5 seconds. The docs state that explicit config should skip auto-discovery, but the code didn't honor this.
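For illustration, a configuration roughly in this shape triggers the problem. The field names follow the models.providers.ollama path described above, but the exact config schema is assumed here rather than copied from the docs, and it is written as a TypeScript literal purely for readability:

// Hypothetical user config (shape assumed, not the repo's actual schema).
const userConfig = {
  models: {
    providers: {
      ollama: {
        baseUrl: "http://ollama.internal:11434", // remote host, may be unreachable at startup
        models: [{ id: "llama3.1:8b" }],         // explicit models: discovery should be skipped
      },
    },
  },
};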

Changes

  • src/agents/models-config.providers.ts: Check if explicitProviders.ollama.models is non-empty before running discovery. When explicit models exist, use them directly instead of calling discoverOllamaModels() (see the sketch after this list).
  • src/agents/models-config.providers.ollama.test.ts: Add regression test verifying that explicit models are preserved and discovery is skipped.
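A minimal sketch of that control flow follows. The helper names (discoverOllamaModels, buildOllamaProvider, resolveImplicitProviders) come from this PR discussion; the surrounding variables and signatures are assumptions, not code copied from the repository:

// Inside the Ollama branch of provider resolution (surrounding shape assumed).
const explicitOllama = explicitProviders?.ollama;
const hasExplicitModels =
  Array.isArray(explicitOllama?.models) && explicitOllama.models.length > 0;

if (hasExplicitModels) {
  // Explicit models configured: use them directly, no HTTP discovery request.
  providers.ollama = { api: "ollama", ...explicitOllama, apiKey: ollamaKey };
} else {
  // No explicit models: fall back to discovery (the 5s timeout applies only here).
  providers.ollama = await buildOllamaProvider({ apiKey: ollamaKey });
}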

Test plan

  • Added test: "should skip discovery when explicit models are configured" — passes explicit models via explicitProviders.ollama.models and asserts they are returned as-is without discovery (see the sketch after this list).
  • All existing Ollama provider tests continue to pass.
  • pnpm format:check and pnpm lint pass.
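For context, a sketch of what such a regression test could look like, assuming a vitest-style runner and an injectable discovery hook — the real test in src/agents/models-config.providers.ollama.test.ts almost certainly wires things differently:

import { describe, expect, it, vi } from "vitest";
import { resolveImplicitProviders } from "./models-config.providers"; // path assumed

describe("ollama provider resolution", () => {
  it("should skip discovery when explicit models are configured", async () => {
    const discover = vi.fn(); // hypothetical stand-in for discoverOllamaModels()

    const resolved = await resolveImplicitProviders({
      explicitProviders: {
        ollama: { models: [{ id: "llama3.1:8b", input: ["text"] as ("text" | "image")[] }] },
      },
      discover, // assumed injection point; the real test may stub the HTTP call instead
    });

    expect(discover).not.toHaveBeenCalled();
    expect(resolved.ollama?.models.map((m) => m.id)).toEqual(["llama3.1:8b"]);
  });
});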

Effect on User Experience

Users with explicit Ollama model configs (especially remote hosts) no longer experience a 5s startup delay from unnecessary model discovery. No behavior change for users relying on auto-discovery (no explicit models configured).

Closes #28762

@greptile-apps
Contributor

greptile-apps Bot commented Feb 27, 2026

Greptile Summary

Correctly skips Ollama model discovery when explicit models are configured in models.providers.ollama, eliminating unnecessary 5-second timeout delays for remote Ollama hosts.

Key Changes:

  • Added conditional check in src/agents/models-config.providers.ts to detect when explicitProviders.ollama.models is a non-empty array
  • When explicit models exist, the provider config is used directly without calling discoverOllamaModels()
  • When no explicit models are configured, discovery runs as before (maintains backward compatibility)
  • Added comprehensive regression test to verify explicit models are preserved and discovery is skipped

Impact:

  • Users with explicit Ollama configurations (especially remote hosts) no longer experience startup delays
  • No behavior change for users relying on auto-discovery
  • Aligns implementation with documented behavior that explicit config should skip auto-discovery

Confidence Score: 5/5

  • This PR is safe to merge with no identified issues
  • The implementation is straightforward and correct: it checks for non-empty explicit models array before running discovery, uses explicit config when available, and falls back to discovery otherwise. The test properly validates the fix by confirming explicit models are preserved and discovery is skipped. The change maintains backward compatibility and only affects the specific scenario it targets (explicit models configured).
  • No files require special attention

Last reviewed commit: 0e4a046

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 8d9fc29f1f

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread: src/agents/models-config.providers.ts (Outdated)
Comment on lines +973 to +975
if (hasExplicitModels) {
  // Use explicit models as-is, skip discovery
  providers.ollama = { ...explicitOllama, apiKey: ollamaKey };

P2: Preserve Ollama API default when skipping discovery

When explicitProviders.ollama.models is non-empty, this branch copies the explicit provider as-is and no longer injects api: "ollama". Because api is optional in config, valid explicit configs can omit it; with this change those configs end up with no provider/model API, so downstream logic like run/attempt.ts (which only enables native Ollama transport when model.api === "ollama") stops using the Ollama-native path. Before this commit, buildOllamaProvider() supplied the default API even for explicit-model merges.

Useful? React with 👍 / 👎.
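For reference, the remedy this comment points at is to keep injecting the default transport tag when skipping discovery. The follow-up commit (quoted in the second Codex review below) takes this shape; the line here is a sketch of it, not the verbatim repo code:

// Spreading after the default lets an explicit `api` in config still win,
// while configs that omit it keep the Ollama-native transport.
providers.ollama = { api: "ollama", ...explicitOllama, apiKey: ollamaKey };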

@Kansodata
Contributor

I reproduced the failing check lane and prepared a minimal fix for the TypeScript typing error in the new Ollama test (input: string[] vs (\

@Kansodata
Contributor

Correction / clean summary:

I reproduced the failing check lane and prepared a minimal fix for the TypeScript typing error in the new Ollama test (input: string[] vs ("text"|"image")[]).

Support PR:

If helpful, I can also fold this into your branch directly.
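As a hedged illustration of the typing error being described (the field name and model shape in the actual test are assumed here, not copied from it):

// `string[]` is wider than the modality union, so this fails to type-check:
//   const input: ("text" | "image")[] = someStrings; // where someStrings: string[]
// Annotating the literal with the union type satisfies it:
const input: ("text" | "image")[] = ["text"];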

@lailoo
Contributor Author

lailoo commented Feb 27, 2026

@Kansodata Thanks for catching the typing issue and preparing the fix! That would be great — please go ahead and fold it into my branch directly. Appreciate the help 🙏

@nikolasdehor nikolasdehor left a comment

Correct optimization — skipping unnecessary Ollama discovery when explicit models are configured eliminates the 5s timeout. Type fix from #28827 is needed before merge.

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fc859b223c


Comment on lines +973 to +976
if (hasExplicitModels) {
  // Use explicit models as-is, skip discovery
  providers.ollama = { api: "ollama", ...explicitOllama, apiKey: ollamaKey };
} else {

P2: Preserve default Ollama base URL when skipping discovery

When explicit Ollama models are present, this branch now copies explicitOllama directly and skips buildOllamaProvider(), so configs that intentionally omit models.providers.ollama.baseUrl no longer inherit the previous default http://127.0.0.1:11434. That changes runtime metadata for those models (their baseUrl becomes unset), and downstream local-model logic like isLocalBaseUrl() in src/commands/models/shared.ts will classify them as non-local, causing openclaw models list --local filters to drop Ollama entries that previously appeared.

Useful? React with 👍 / 👎.
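A sketch of one way this could be addressed (not necessarily what #28827 landed; the default URL and the isLocalBaseUrl() reference come from the comment above, everything else is assumed): fall back to the documented default base URL when the explicit config omits it, so local-model classification keeps working.

providers.ollama = {
  api: "ollama",
  baseUrl: "http://127.0.0.1:11434", // default preserved when config omits baseUrl
  ...explicitOllama,                 // explicit baseUrl, models, etc. still take precedence
  apiKey: ollamaKey,
};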

@vincentkoc
Member

Thanks for the implementation here.

I consolidated the active merge path into #28827 (rebased on current main) so we have a single PR to land for #28762. Your work in this PR is preserved as the foundational implementation direction, and credit is called out explicitly in #28827.

I’m closing this PR as superseded by #28827. If there is any behavior gap, say the word and we can reopen/re-check quickly.

@vincentkoc vincentkoc closed this Feb 28, 2026

Labels

agents (Agent runtime and tooling), size: S


Development

Successfully merging this pull request may close these issues.

[Bug]: Ollama model discovery always runs at startup even when explicit models are configured, causing 5s timeout for remote Ollama hosts

4 participants