
[Bug]: OpenViking v0.2.8 litellm VLM backend rewrites Z.AI model prefix incorrectly (zai/... → zhipu/zai/...) #784

@wsleepybear

Description

Bug Description

Summary

When using OpenViking v0.2.8 with Docker and configuring vlm.provider = "litellm" for Z.AI / GLM models, OpenViking appears to rewrite the model name incorrectly.

LiteLLM docs say Z.AI models should use the zai/ prefix, for example:

  • zai/glm-4.7
  • zai/glm-4.5

However, in OpenViking v0.2.8, the final model passed to LiteLLM becomes:

  • zhipu/zai/glm-4.5

which then causes LiteLLM to fail with:

  • LLM Provider NOT provided

This makes the litellm path for Z.AI / GLM unusable in my deployment.
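To make the suspected failure mode concrete, here is a hypothetical sketch (the function and prefix names are illustrative, not OpenViking's actual code) of a litellm backend adapter that unconditionally prepends its own provider prefix, plus the guard that would avoid double-prefixing:

```python
# Illustrative subset of provider prefixes LiteLLM understands; not the
# real list.
KNOWN_LITELLM_PREFIXES = {"zai", "openai", "anthropic"}

def resolve_model_buggy(model: str) -> str:
    # Suspected behavior: always prepend "zhipu/", even when the model
    # already carries a LiteLLM provider prefix.
    return "zhipu/" + model

def resolve_model_fixed(model: str) -> str:
    # Possible fix: pass the model through untouched when its first path
    # segment is already a provider LiteLLM recognizes.
    head = model.split("/", 1)[0]
    if head in KNOWN_LITELLM_PREFIXES:
        return model
    return "zhipu/" + model

print(resolve_model_buggy("zai/glm-4.5"))  # zhipu/zai/glm-4.5
print(resolve_model_fixed("zai/glm-4.5"))  # zai/glm-4.5
```

The buggy variant produces exactly the `zhipu/zai/glm-4.5` string seen in my logs.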


Environment

  • OpenViking version: v0.2.8
  • Deployment: Docker
  • OpenClaw plugin connected in remote mode
  • Server runs normally on 0.0.0.0:1933

My ov.conf VLM config

```json
{
  "vlm": {
    "provider": "litellm",
    "api_key": "<redacted>",
    "model": "zai/glm-4.5",
    "api_base": "https://open.bigmodel.cn/api/coding/paas/v4"
  }
}
```
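As a quick sanity check (the key names mirror my ov.conf above; nothing here touches OpenViking internals), the config parses as valid JSON and the model string I hand over still carries the `zai/` prefix:

```python
import json

# Same structure as the ov.conf VLM section above.
OV_CONF = """
{
  "vlm": {
    "provider": "litellm",
    "api_key": "<redacted>",
    "model": "zai/glm-4.5",
    "api_base": "https://open.bigmodel.cn/api/coding/paas/v4"
  }
}
"""

conf = json.loads(OV_CONF)
model = conf["vlm"]["model"]
assert model == "zai/glm-4.5"
print(model)  # zai/glm-4.5
```

So the doubled prefix is introduced somewhere after the config is read, not in the config itself.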

Steps to Reproduce

1. Start OpenViking with Docker using the config above.
2. Confirm the server is healthy:

   ```bash
   curl http://127.0.0.1:1933/health
   ```

3. Create a tenant user and use its key for the normal session/memory APIs.
4. Create a session:

   ```bash
   curl -X POST http://127.0.0.1:1933/api/v1/sessions \
     -H "X-API-Key: <tenant-user-key>" \
     -H "X-OpenViking-Agent: wk-openclaw-main" \
     -H "Content-Type: application/json" \
     -d '{}'
   ```

5. Add one message to that session:

   ```bash
   curl -X POST http://127.0.0.1:1933/api/v1/sessions/<session_id>/messages \
     -H "X-API-Key: <tenant-user-key>" \
     -H "X-OpenViking-Agent: wk-openclaw-main" \
     -H "Content-Type: application/json" \
     -d '{"role":"user","content":"When troubleshooting, the user habitually confirms permissions, authentication, and system access first, and only then looks at model- and policy-layer issues."}'
   ```

6. Trigger extraction:

   ```bash
   curl -X POST http://127.0.0.1:1933/api/v1/sessions/<session_id>/extract \
     -H "X-API-Key: <tenant-user-key>" \
     -H "X-OpenViking-Agent: wk-openclaw-main" \
     -H "Content-Type: application/json" \
     -d '{}'
   ```

7. Check the Docker logs:

   ```bash
   docker logs --tail 200 openviking
   ```

Expected Behavior

OpenViking should pass a valid Z.AI LiteLLM model name through, e.g.:

  • zai/glm-4.5

or otherwise use the correct provider handling expected by current LiteLLM.

Actual Behavior

OpenViking seems to rewrite / resolve the model incorrectly and LiteLLM reports:

```
litellm.BadRequestError: LLM Provider NOT provided.
You passed model=zhipu/zai/glm-4.5
```

This happens during memory extraction / semantic processing.

Minimal Reproducible Example

Error Logs

OpenViking Version

0.2.8

Python Version

3.13

Operating System

Linux

Model Backend

None

Additional Context

No response

Metadata

  • Labels: bug (Something isn't working)
  • Status: Done
  • Assignees: none