
[Bug]: Ollama Discovery Always Uses Hardcoded Localhost URL, Ignoring Config #8663

@manujunk

Description


Summary

The gateway's Ollama model discovery mechanism has a hardcoded localhost URL (http://127.0.0.1:11434) instead of reading from the Ollama provider config. This causes discovery to fail silently for remote Ollama instances, triggering fallback to cloud models (Anthropic/OpenAI) even when the remote Ollama is reachable and working.

Environment

  • OS: macOS (arm64)
  • OpenClaw Version: 2026.2.2-3
  • Node: v25.5.0
  • Channel: stable

Describe the Bug

When a user configures OpenClaw to use a remote Ollama instance, model discovery fails because the code attempts to connect to http://127.0.0.1:11434 (localhost) instead of the configured remote address.

Evidence

Hardcoded URL in source:

const OLLAMA_API_BASE_URL = "http://127.0.0.1:11434";  // ← HARDCODED!

async function discoverOllamaModels() {
  try {
    const response = await fetch(`${OLLAMA_API_BASE_URL}/api/tags`, {
      signal: AbortSignal.timeout(5000),
    });
    // ...parses response.json().models...
  } catch {
    // Failure is swallowed here, so the gateway silently falls back to cloud models.
  }
}

Actual config (ignored by discovery):

{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://[remote-address]:11434"
      }
    }
  }
}

Gateway logs show repeated failures:

Failed to discover Ollama models: TypeError: fetch failed
[openclaw] Suppressed AbortError: This operation was aborted

Expected Behavior

Discovery should read the configured Ollama baseUrl from config, not hardcode localhost. Actual model inference works correctly with remote Ollama—only the discovery mechanism is broken.

Impact

  • Severity: High
  • Root Cause: Discovery hardcodes localhost regardless of config
  • User Impact: Remote Ollama setups silently fall back to cloud APIs even though local models are working
  • Workaround: Manually override model per-session: model=ollama/llama3.1:latest

Steps to Reproduce

  1. Install OpenClaw on machine A
  2. Run Ollama on machine B (separate network address)
  3. Configure OpenClaw to use remote Ollama: baseUrl: http://[remote-address]:11434
  4. Start a chat session
  5. Check model: session_status → shows cloud model (e.g., Claude) instead of Ollama
  6. Check gateway logs: ~/.openclaw/logs/gateway.err.log → shows discovery failures
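Before blaming discovery, it helps to confirm the remote instance is reachable at the same endpoint discovery hits. A minimal probe sketch (the helper names and the example host are illustrative, not part of OpenClaw):

```typescript
// Probe the /api/tags endpoint that discovery queries. buildTagsUrl strips any
// trailing slash so "http://host:11434/" and "http://host:11434" behave alike.
function buildTagsUrl(baseUrl: string): string {
  return `${baseUrl.replace(/\/+$/, "")}/api/tags`;
}

async function probeOllama(baseUrl: string): Promise<boolean> {
  try {
    const res = await fetch(buildTagsUrl(baseUrl), {
      signal: AbortSignal.timeout(5000),
    });
    return res.ok;
  } catch {
    // An unreachable host surfaces as "TypeError: fetch failed",
    // matching the gateway log lines above.
    return false;
  }
}
```

If the probe succeeds against the remote address but the gateway logs still show `fetch failed`, the failure is on the discovery side, not the network.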

Proposed Fix

Pass the configured Ollama baseUrl into discoverOllamaModels() instead of using a hardcoded localhost value. Consider an environment-variable fallback for flexibility.
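One possible shape for the fix (a sketch only: resolveOllamaBaseUrl, the config interface, and the OLLAMA_BASE_URL env var name are assumptions, not OpenClaw's actual API):

```typescript
// The config shape mirrors the JSON shown above; everything else is illustrative.
interface OpenClawConfig {
  models?: { providers?: { ollama?: { baseUrl?: string } } };
}

const DEFAULT_OLLAMA_BASE_URL = "http://127.0.0.1:11434";

function resolveOllamaBaseUrl(
  config: OpenClawConfig,
  env: Record<string, string | undefined> = process.env,
): string {
  // Precedence: explicit config > env var fallback > localhost default.
  return (
    config.models?.providers?.ollama?.baseUrl ??
    env.OLLAMA_BASE_URL ??
    DEFAULT_OLLAMA_BASE_URL
  );
}

async function discoverOllamaModels(baseUrl: string): Promise<string[]> {
  // Same request as today, but against the resolved base URL.
  const response = await fetch(`${baseUrl.replace(/\/+$/, "")}/api/tags`, {
    signal: AbortSignal.timeout(5000),
  });
  const body = (await response.json()) as { models?: { name: string }[] };
  return (body.models ?? []).map((m) => m.name);
}
```

Keeping localhost as the last-resort default preserves current behavior for single-machine setups while letting remote configs take effect.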

Files Affected

  • Source: packages/gateway/src/model-selection.ts (or similar)
  • Compiled: dist/model-selection-qIT4GiGk.js

Metadata

Labels: bug (Something isn't working)