
[Bug]: Unable to configure the DeepSeek-R1 model in the Ollama framework #5308

@Vector-Cross

Description


Summary

What went wrong?
I have deployed OpenClaw on Armbian, but it cannot call the DeepSeek-R1 model on the local Ollama framework.

Steps to reproduce

  1. ollama list shows the deepseek-r1 model.
(screenshot of ollama list output)
  2. This direct call to the native Ollama API also works:
curl http://192.168.1.106:11434/api/generate -d '{
  "model": "deepseek-r1:14b",
  "prompt": "hello?"
}'

(screenshot of an equivalent request succeeding)

  3. OpenClaw works fine with other models, but with the config below it gets no response:
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://192.168.1.106:11434/v1",
        "apiKey": "ollama",
        "api": "openai-completions",
        "models": [
          {
            "id": "deepseek-r1:14b",
            "name": "deepseek-r1:14b",
            "reasoning": true,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 32768,
            "maxTokens": 40960
          }
        ]
      }
    }
  },
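Note that the config points OpenClaw at the OpenAI-compatible endpoint (baseUrl ends in /v1), while the successful curl in step 2 hit the native /api/generate endpoint, so the two tests exercise different code paths. A quick way to check the path OpenClaw actually uses is to call /v1/chat/completions directly. This is only a sketch, assuming the host, port, and model name from the report; the /v1 endpoint expects a "messages" array rather than the "prompt" field used by /api/generate:

```shell
# Request body in the OpenAI chat-completions shape that the
# "openai-completions" API setting implies (a "messages" array,
# not the "prompt" field accepted by /api/generate).
body='{
  "model": "deepseek-r1:14b",
  "messages": [{"role": "user", "content": "hello?"}]
}'

# Sanity-check that the JSON is well-formed before sending it.
echo "$body" | python3 -m json.tool

# To send it (assuming the host/port from the report are reachable):
# curl http://192.168.1.106:11434/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -H "Authorization: Bearer ollama" \
#   -d "$body"
```

If this curl succeeds while OpenClaw still hangs, the problem is likely on the OpenClaw side (e.g. how it handles the model's reasoning output) rather than in Ollama itself.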

Expected behavior

What did you expect to happen?
The model should respond normally, as the other configured models do.

Actual behavior

What actually happened?
OpenClaw does not reply; requests to this model produce no response.

Environment

  • Clawdbot version: 2026.1.29
  • OS: Armbian OS 25.11.0 bookworm
  • Install method (pnpm/npx/docker/etc): npm

Logs or screenshots

Paste relevant logs or add screenshots (redact secrets).
(screenshots attached)

Labels: bug (Something isn't working)