Feature/remote ollama - enable autodiscovery ollama models on hosts other than localhost #10742
hillct wants to merge 6 commits into openclaw:main
Conversation
…ocalhost, while retaining autodiscovery capability. Addressed config file validation and Ollama server timeouts with intelligent retry. Added tests for config validation. Added a --discover option to the `openclaw models list` command to display discoverable models in cases where local discovery previously failed silently due to connection timeouts. The --discover option essentially increases the timeouts, avoiding the risk of silent failure; the previously existing behavior remains unchanged.
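For anyone curious what "intelligent retry" means in practice, here is a minimal sketch of the idea: exponential backoff around a probe of Ollama's standard `/api/tags` model-listing endpoint, with a wider timeout in discover mode. The names (`backoffDelayMs`, `probeModels`) and the specific timeout values are illustrative assumptions, not the identifiers actually used in this PR.

```typescript
// Illustrative sketch only -- not the PR's actual implementation.

// Delay before retry `attempt` (0-based): base * 2^attempt, capped.
export function backoffDelayMs(attempt: number, baseMs = 250, capMs = 4000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Probe an Ollama host's /api/tags endpoint (its standard model-listing
// route), retrying on timeout. Discover mode widens the timeout so slow
// remote hosts don't fail silently.
export async function probeModels(
  baseUrl: string,
  discover = false,
  maxAttempts = 3,
): Promise<string[]> {
  const timeoutMs = discover ? 10_000 : 1_000; // assumed values
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await fetch(`${baseUrl}/api/tags`, {
        signal: AbortSignal.timeout(timeoutMs),
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      const body = (await res.json()) as { models: { name: string }[] };
      return body.models.map((m) => m.name);
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // out of retries
      await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
    }
  }
  return []; // unreachable; loop either returns or throws
}
```

The backoff cap keeps the worst-case wait bounded even when a remote host is unreachable, which is what prevents the long silent hangs described above.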
This might address my issues connecting to my remote Ollama host.
It looks like last night there was a commit upstream that may have prevented merges for the moment. I'll address that in the next day or so, but in the meantime, please do check out my branch and see if it works for you. I was solving for my specific problem of running Ollama on my Docker host hardware rather than within the Docker container itself, which of course precluded use of localhost, but I've also tested it using instances running on separate Nvidia dev boxes across my network, which sounds more like your use case. Try it out, let me know if you run into any issues, and let's see if we can get this thing merged.
2026.2.14 fixed my issues with remote Ollama
Force-pushed from bfc1ccb to f92900f
This pull request has been automatically marked as stale due to inactivity.
Merging recent upstream changes should be simple enough. I'll take a shot at it this weekend.
Superseded by #29201 for the immediately mergeable Ollama autodiscovery hardening path. Thanks for the remote-host exploration and prior work here.
It became apparent that use of Ollama model autodiscovery was being hindered by the limitation that autodiscovery only worked on localhost. I believe this to be a more robust and up-to-date implementation that should deprecate @koushikkethamakka's excellent start on this from a few days ago in PR #8693.
This implementation adds intelligent timeouts and retries, as well as a --discover argument to the `openclaw models list` command, which effectively increases timeouts and implements retries so we don't see the silent failures we were seeing previously. The behavior absent this argument remains unchanged.

More critically, this implementation cleanly addresses the partial definition of an Ollama provider in `~/.openclaw/openclaw.json`, such that users can define the apiHost and still retain robust model autodiscovery, AND it supports definition of the `OLLAMA_API_BASE_URL` environment variable, which, in conjunction with the `OLLAMA_API_KEY` environment variable (which continues to operate as intended), makes the base URL configurable absent any explicit Ollama provider configuration in the main config file. This directly addresses Issue #8663.
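To make the two configuration paths concrete, here is a sketch of what a partial provider definition might look like. The exact key names (`providers`, `ollama`, `apiHost`) are assumptions for illustration based on the description above, not a verified excerpt of the PR's schema; Ollama's default port 11434 and an example LAN address are used as placeholders.

```
// ~/.openclaw/openclaw.json (hypothetical shape)
{
  "providers": {
    "ollama": {
      "apiHost": "http://192.168.1.50:11434"
    }
  }
}
```

Alternatively, per the description, the same remote host can be selected with no provider block at all by exporting `OLLAMA_API_BASE_URL` (and, if needed, `OLLAMA_API_KEY`) before running `openclaw models list --discover`.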