Description
Check Existing Issues
- I have searched for any existing and/or related issues.
- I have searched for any existing and/or related discussions.
- I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
- I am using the latest version of Open WebUI.
Installation Method
Docker
Open WebUI Version
v0.7.2
Ollama Version (if applicable)
v0.14.3
Operating System
Linux Mint 19.1
Browser (if applicable)
No response
Confirmation
- I have read and followed all instructions in README.md.
- I am using the latest version of both Open WebUI and Ollama.
- I have included the browser console logs.
- I have included the Docker container logs.
- I have provided every relevant configuration, setting, and environment variable used in my setup.
- I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
- I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
- Start with the initial platform/version/OS and dependencies used,
- Specify exact install/launch/configure commands,
- List URLs visited, user input (incl. example values/emails/passwords if needed),
- Describe all options and toggles enabled or changed,
- Include any files or environmental changes,
- Identify the expected and actual result at each stage,
- Ensure any reasonably skilled user can follow and hit the same issue.
Expected Behavior
When I ask the model to do something in opencode, it calls a tool and then gives the answer I asked for. This works when I use the model directly through the Ollama API, as shown below:
user@host dir % opencode run "Create the file 'README' with the contents 'hello world', let me know if it succeeded" --model ollama/glm-4.7-flash:bf16-80k
I'll create the README file with the specified content.
| Write Users/user/dir/README
README created successfully with content "hello world".
Actual Behavior
But when I use the same Ollama model routed through the WebUI API, it always stops execution after a tool call, as shown below: it correctly creates the file, but then stops. This happens every time in opencode with this configuration; after each tool call it stops execution and I have to manually type "continue".
user@host dir % opencode run "Create the file 'README' with the contents 'hello world', let me know if it succeeded" --model webui/glm-4.7-flash:bf16-80k
| Write Users/user/dir/README
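For context on where execution halts: after a tool call, an OpenAI-compatible client such as opencode appends the tool result and re-sends the whole conversation, expecting the model to then answer in text. A minimal sketch of that message sequence (the tool name, call ID, and result string here are illustrative, not captured from a real run):

```python
# Sketch of the OpenAI-compatible tool-call round trip a client like
# opencode performs. Names and IDs are illustrative.
messages = [
    {"role": "user",
     "content": "Create the file 'README' with the contents 'hello world'"},
    # 1) The model replies with a tool call instead of text:
    {"role": "assistant",
     "content": None,
     "tool_calls": [{
         "id": "call_0",
         "type": "function",
         "function": {"name": "write",
                      "arguments": '{"path": "README", "content": "hello world"}'},
     }]},
    # 2) The client executes the tool and reports the result:
    {"role": "tool", "tool_call_id": "call_0", "content": "ok"},
    # 3) The full list is re-sent; the model is expected to continue in
    #    text ("README created successfully ..."). The bug reported here
    #    is that this follow-up turn never produces a reply via WebUI.
]

print([m["role"] for m in messages])
```

The direct-Ollama and via-WebUI runs differ only in step 3: through WebUI, the continuation never arrives.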
Steps to Reproduce
- Set up Ollama (v0.14.3) and install the model glm-4.7-flash:bf16 with a context size of 80_000 (it can be smaller).
- Add that Ollama service to WebUI as a connection, so the model is accessible there.
- Enable API access for users.
- Install opencode (1.1.20).
- Configure opencode to use the model through Open WebUI.
Example opencode config:
{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "edit": "allow",
    "bash": "ask",
    "webfetch": "ask",
    "doom_loop": "ask",
    "external_directory": "ask"
  },
  "provider": {
    "webui": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "WebUI",
      "options": {
        "baseURL": "http://{{ip_webui}}:{{port_webui}}/api/v1",
        "apiKey": "{{key}}"
      },
      "models": {
        "glm-4.7-flash:bf16-80k": {
          "name": "GLM 4.7 Flash"
        }
      }
    }
  }
}
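One way to narrow down whether the truncation happens on the Open WebUI side is to inspect the finish_reason of the last streamed chunk from /api/v1/chat/completions: if it comes back as "stop" instead of "tool_calls", the client ends the turn rather than continuing after the tool runs. A sketch of that check against hand-written SSE lines (the payloads are illustrative, not captured from a real run):

```python
import json

# Illustrative SSE lines as an OpenAI-compatible endpoint would stream
# them for a tool-calling turn (not captured from a real run).
sample_stream = [
    'data: {"choices":[{"delta":{"tool_calls":[{"index":0,"function":'
    '{"name":"write","arguments":"{\\"path\\":\\"README\\"}"}}]},'
    '"finish_reason":null}]}',
    'data: {"choices":[{"delta":{},"finish_reason":"tool_calls"}]}',
    "data: [DONE]",
]

def final_finish_reason(lines):
    """Return the finish_reason of the last non-[DONE] chunk."""
    reason = None
    for line in lines:
        payload = line.removeprefix("data: ").strip()
        if payload == "[DONE]":
            continue
        chunk = json.loads(payload)
        reason = chunk["choices"][0].get("finish_reason") or reason
    return reason

# When the model requested a tool, the expected terminal reason is
# "tool_calls"; a "stop" here would explain why opencode ends the turn.
print(final_finish_reason(sample_stream))  # tool_calls
```

Capturing the raw stream from both the direct Ollama endpoint and the WebUI endpoint and comparing this field would show whether the proxying changes it.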
Logs & Screenshots
Nothing unusual in the logs:
2026-01-23 15:20:21.170 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:56959 - "GET /api/v1/groups/ HTTP/1.1" 200
2026-01-23 15:20:21.172 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:56956 - "GET /api/v1/auths/admin/config/ldap HTTP/1.1" 200
2026-01-23 15:20:21.182 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:56955 - "GET /api/version/updates HTTP/1.1" 200
2026-01-23 15:20:21.194 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:56958 - "GET /api/version/updates HTTP/1.1" 200
2026-01-23 15:20:21.284 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:56957 - "GET /api/v1/auths/admin/config/ldap HTTP/1.1" 200
2026-01-23 15:20:31.788 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:56961 - "GET /_app/version.json HTTP/1.1" 304
2026-01-23 15:21:31.788 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:56967 - "GET /_app/version.json HTTP/1.1" 304
2026-01-23 15:22:31.782 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:56969 - "GET /_app/version.json HTTP/1.1" 304
2026-01-23 15:23:31.779 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:56970 - "GET /_app/version.json HTTP/1.1" 304
2026-01-23 15:24:31.781 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:56973 - "GET /_app/version.json HTTP/1.1" 304
2026-01-23 15:25:31.786 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:56974 - "GET /_app/version.json HTTP/1.1" 304
2026-01-23 15:26:31.790 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:56976 - "GET /_app/version.json HTTP/1.1" 304
2026-01-23 15:27:18.064 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:60853 - "POST /api/v1/chat/completions HTTP/1.1" 200
2026-01-23 15:27:31.786 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.55.163.237:56977 - "GET /_app/version.json HTTP/1.1" 304
Additional Information
The config for the model in Open WebUI is all default: glm-4.7-flash_bf16-80k-1769159953751.json