CLI prints Ollama could not be reached at http://127.0.0.1:11434. to stderr on every invocation, even when Ollama is not a configured provider #77942

@jvpflum

Description

Version

  • openclaw 2026.4.20 (115f05d) (CHANGELOG inspection suggests the issue is also present in the latest 2026.5.4)
  • Windows 11, Node.js (via nvm4w), PowerShell 7

Summary

On every CLI invocation, including non-interactive --json reads, the CLI emits

Ollama could not be reached at http://127.0.0.1:11434.

to stderr, regardless of whether Ollama is in agents.defaults.model.primary, agents.defaults.model.fallbacks, the model catalog, or any auth profile. There is no ollama/* model anywhere in my configuration.

This corrupts stderr capture in any programmatic consumer (frontends, CI scripts, GitHub Actions runners, daemon supervisors) and forces every consumer to filter the noise.

Reproduction

With no Ollama configuration anywhere in ~/.openclaw/openclaw.json:

> openclaw agents list --json 2>&1 | Out-String
[
  { "id": "main", "model": "openai/gpt-5.4-mini", ... },
  ...
]
node.exe : Ollama could not be reached at http://127.0.0.1:11434.

> openclaw sessions --json 2>&1 | Out-String
{ "path": "...", "count": 69, "sessions": [...] }
node.exe : Ollama could not be reached at http://127.0.0.1:11434.

> openclaw models list --json 2>&1 | Out-String
node.exe : Ollama could not be reached at http://127.0.0.1:11434.
{ "count": 7, "models": [...] }

models list --json returns 7 models, none of them Ollama: openai/gpt-5.4-mini, openai/gpt-4o, vllm/nvidia/Qwen3-30B-A3B-NVFP4, openai/gpt-5.4, openai/gpt-4o-mini, openai/o4-mini, nvidia/Qwen3-30B-A3B-NVFP4.

Expected behavior

The Ollama provider probe should run only when Ollama is referenced by the active agent's model chain or auth profiles. If a probe must run for catalog refresh, its unreachable-host message should be at debug level, not warn/stderr.

Impact

We ship a desktop frontend (Crystal: https://github.com/jvpflum/Crystal) on top of OpenClaw and currently filter this string in two places to avoid leaking the warning into user-facing chat output and into gateway-log views. Every JSON consumer needs the same workaround. A literal-string search for could not be reached at http://127.0.0.1:11434 will hit other downstream tools too.
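For context, here is a minimal sketch (TypeScript on Node 18+) of the workaround every programmatic consumer currently carries: spawn the CLI and strip the probe line from stderr before treating stderr as meaningful. The runOpenclaw helper and the noise pattern are illustrative assumptions based on the output above, not Crystal's or openclaw's actual code:

import { spawn } from "node:child_process";

// Hypothetical helper: run openclaw and drop the spurious Ollama probe
// warning from stderr so that real errors still surface.
const OLLAMA_NOISE = /Ollama could not be reached at http:\/\/127\.0\.0\.1:11434\./;

function runOpenclaw(args: string[]): Promise<{ stdout: string; stderr: string }> {
  return new Promise((resolve, reject) => {
    const child = spawn("openclaw", args, { shell: process.platform === "win32" });
    child.stdout.setEncoding("utf8");
    child.stderr.setEncoding("utf8");
    let stdout = "";
    let stderr = "";
    child.stdout.on("data", (chunk) => { stdout += chunk; });
    child.stderr.on("data", (chunk) => { stderr += chunk; });
    child.on("error", reject);
    child.on("close", () => {
      // Filter the probe warning line by line; keep everything else.
      const cleaned = stderr
        .split(/\r?\n/)
        .filter((line) => !OLLAMA_NOISE.test(line))
        .join("\n")
        .trim();
      resolve({ stdout, stderr: cleaned });
    });
  });
}

Until one of the fixes below lands, every consumer that parses openclaw output needs some variant of this.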

Suggested fixes (any one is sufficient)

  1. Demote unconfigured probes to debug. If no ollama/* reference is found in the resolved model chain, suppress the warning entirely (see the sketch after this list).
  2. Honor OPENCLAW_LOG_LEVEL. Currently the warning prints regardless of level.
  3. Probe lazily. Only probe Ollama on the first dispatch that targets it, not on every CLI startup.
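A minimal sketch of option 1, with option 2's level check folded in. It assumes a resolved-config shape with flat lists of model and auth references; the ResolvedConfig type, maybeProbeOllama, and the env-var gate are illustrative assumptions, not openclaw internals:

type ResolvedConfig = { modelChain: string[]; authProfiles: string[] };

const OLLAMA_URL = "http://127.0.0.1:11434";

function referencesOllama(config: ResolvedConfig): boolean {
  // Probe only when some model or auth profile actually points at Ollama.
  return [...config.modelChain, ...config.authProfiles]
    .some((id) => id === "ollama" || id.startsWith("ollama/"));
}

function debugLog(message: string): void {
  // Gate the message on the log level instead of printing unconditionally.
  if (process.env.OPENCLAW_LOG_LEVEL === "debug") {
    console.error(message);
  }
}

async function maybeProbeOllama(config: ResolvedConfig): Promise<void> {
  if (!referencesOllama(config)) return; // nothing references Ollama: skip the probe
  try {
    // Node 18+ global fetch; a short timeout keeps CLI startup fast.
    await fetch(OLLAMA_URL, { signal: AbortSignal.timeout(1000) });
  } catch {
    debugLog(`Ollama could not be reached at ${OLLAMA_URL}.`);
  }
}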

Happy to send a PR for option 1 if there's appetite.
