Status: Closed
Labels: bug
Bug Description
Summary
When using OpenViking v0.2.8 in Docker with vlm.provider = "litellm" configured for Z.AI / GLM models, OpenViking appears to rewrite the model name incorrectly.
The LiteLLM docs say Z.AI models should use the zai/ prefix, for example:
- zai/glm-4.7
- zai/glm-4.5
However, in OpenViking v0.2.8, the final model passed to LiteLLM becomes:
zhipu/zai/glm-4.5
which then causes LiteLLM to fail with:
LLM Provider NOT provided
This makes the litellm path for Z.AI / GLM unusable in my deployment.
Environment
- OpenViking version: v0.2.8
- Deployment: Docker
- OpenClaw plugin connected in remote mode
- Server runs normally on 0.0.0.0:1933
My ov.conf VLM config
```json
{
  "vlm": {
    "provider": "litellm",
    "api_key": "<redacted>",
    "model": "zai/glm-4.5",
    "api_base": "https://open.bigmodel.cn/api/coding/paas/v4"
  }
}
```
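To illustrate the suspected cause, here is a minimal Python sketch (the function names and the resolver logic are hypothetical, not OpenViking's actual code): a resolver that unconditionally prepends its internal provider alias to the user-supplied model string would produce exactly the doubled name reported below, while a guarded version would leave an already-prefixed LiteLLM model name alone.

```python
# Hypothetical illustration of the suspected bug. "zhipu" stands in for
# whatever internal alias OpenViking uses for this provider; the real
# code path is unknown to me.

def naive_resolve(provider_alias: str, model: str) -> str:
    # Unconditionally prepend the internal alias -- this reproduces
    # the bad model string seen in the logs.
    return f"{provider_alias}/{model}"

def guarded_resolve(provider_alias: str, model: str) -> str:
    # Only prepend when the model carries no provider prefix yet.
    if "/" in model:
        return model  # already a fully qualified LiteLLM model name
    return f"{provider_alias}/{model}"

print(naive_resolve("zhipu", "zai/glm-4.5"))    # zhipu/zai/glm-4.5 (broken)
print(guarded_resolve("zhipu", "zai/glm-4.5"))  # zai/glm-4.5 (what LiteLLM expects)
```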
### Steps to Reproduce
1. Start OpenViking with Docker using the config above.
2. Confirm the server is healthy:
```bash
curl http://127.0.0.1:1933/health
```
3. Create a tenant user and use its tenant user key for the normal session/memory APIs.
4. Create a session:
```bash
curl -X POST http://127.0.0.1:1933/api/v1/sessions \
  -H "X-API-Key: <tenant-user-key>" \
  -H "X-OpenViking-Agent: wk-openclaw-main" \
  -H "Content-Type: application/json" \
  -d '{}'
```
5. Add one message to that session:
```bash
curl -X POST http://127.0.0.1:1933/api/v1/sessions/<session_id>/messages \
  -H "X-API-Key: <tenant-user-key>" \
  -H "X-OpenViking-Agent: wk-openclaw-main" \
  -H "Content-Type: application/json" \
  -d '{"role":"user","content":"When troubleshooting, the user prefers to first confirm that permissions, authentication, and system access are normal before looking at model- and policy-layer issues."}'
```
6. Trigger extraction:
```bash
curl -X POST http://127.0.0.1:1933/api/v1/sessions/<session_id>/extract \
  -H "X-API-Key: <tenant-user-key>" \
  -H "X-OpenViking-Agent: wk-openclaw-main" \
  -H "Content-Type: application/json" \
  -d '{}'
```
7. Check the Docker logs:
```bash
docker logs --tail 200 openviking
```
Expected Behavior
OpenViking should pass a valid Z.AI LiteLLM model name through unchanged, e.g.:
- zai/glm-4.5
or otherwise use the provider handling expected by current LiteLLM.
Actual Behavior
OpenViking seems to rewrite / resolve the model incorrectly, and LiteLLM reports:
```
litellm.BadRequestError: LLM Provider NOT provided.
You passed model=zhipu/zai/glm-4.5
```
This happens during memory extraction / semantic processing.
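Until this is fixed, one possible client-side workaround (a sketch only, assuming the rewrite simply adds a fixed "zhipu/" prefix, as the error message suggests) would be to strip the duplicated prefix before the request reaches LiteLLM, e.g. in a thin wrapper or proxy:

```python
# Hypothetical workaround sketch. "zhipu/" is the prefix observed in the
# error message above; adjust it if your logs show a different alias.

BAD_PREFIX = "zhipu/"

def strip_duplicate_prefix(model: str) -> str:
    # "zhipu/zai/glm-4.5" -> "zai/glm-4.5"; anything that does not look
    # like a doubled provider prefix passes through untouched.
    rest = model[len(BAD_PREFIX):]
    if model.startswith(BAD_PREFIX) and "/" in rest:
        return rest
    return model

print(strip_duplicate_prefix("zhipu/zai/glm-4.5"))  # zai/glm-4.5
print(strip_duplicate_prefix("zai/glm-4.5"))        # zai/glm-4.5
```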
- OpenViking Version: 0.2.8
- Python Version: 3.13
- Operating System: Linux
- Model Backend: None
- Additional Context: No response