Closed
Labels
bug (Something isn't working), documentation (Improvements or additions to documentation)
Description
I've been waiting for someone to do this. It's fantastic. I tried the following and it didn't work for me with OpenWebUI:
```yaml
services:
  olla:
    image: ghcr.io/thushan/olla:${OLLA_VERSION}
    container_name: olla
    ports:
      - 40114:40114
    volumes:
      - ./olla.yaml:/config.yaml

  ollama:
    image: ollama/ollama:${OLLAMA_VERSION}
    container_name: ollama
    ports:
      - 11434:11434
    volumes:
      - ollama_data:/root/.ollama

volumes:
  ollama_data:
```

With this configuration:
```yaml
server:
  host: 0.0.0.0
  port: 40114

proxy:
  engine: "olla"             # high-performance engine
  load_balancer: "priority"  # or round-robin, least-connections

discovery:
  endpoints:
    - name: "server"
      url: "http://localhost:11434"
      platform: "ollama"
      priority: 100  # higher = preferred
      tags:
      models: "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:IQ2_XXS"
    - name: "desktop"
      url: "http://other:11434"
      platform: "ollama"
      priority: 50   # lower-priority fallback
      tags:
      models: "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:IQ2_XXS"
```

And in the OpenWebUI connection: http://URL:PORT/olla/ollama
And OpenWebUI does not recognize it...
PS: I have Ollama and Olla on the same Docker server.
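For reference, here is a sketch of the first endpoint rewritten to use the Docker Compose service name instead of `localhost` (an assumption on my part, not confirmed against the Olla docs): since olla runs in its own container, `localhost:11434` resolves to the olla container itself rather than to the ollama container, whereas the service name is reachable on the default compose network both services share.

```yaml
# olla.yaml (excerpt) -- assumes both services are in the same compose file
# and therefore on the default compose network
discovery:
  endpoints:
    - name: "server"
      url: "http://ollama:11434"  # compose service name instead of localhost
      platform: "ollama"
      priority: 100
```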