/think max rejected for ollama/deepseek-v4-flash:cloud despite provider support #71584

@g0st1n

Description

Summary

/think max fails with "Unsupported thinking level 'max' for this model" even though the underlying provider (Ollama) accepts reasoning_effort: "max" for this model.

Bug Description

Expected Behavior

When using /think max with a model that supports reasoning (e.g., deepseek-v4-flash:cloud via Ollama), OpenClaw should either:

  • Accept "max" as a valid thinking level, or
  • Map it to the highest supported level and send it to the provider

Actual Behavior

OpenClaw rejects "max" with error:

Unsupported thinking level "max" for this model. Valid levels: off, minimal, low, medium, high

This happens before any request is sent to the provider. The validation is entirely local/hardcoded.

Evidence

Direct curl to Ollama works perfectly

curl -X POST "https://ollama.com/api/chat" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OLLAMA_API_KEY" \
  -d '{
    "model": "deepseek-v4-flash:cloud",
    "messages": [{"role": "user", "content": "Say hi briefly"}],
    "options": {"reasoning_effort": "max"}
  }'

Result: ✅ Works. Model accepts "max" and returns thinking content.

OpenClaw rejects it locally

Same model via OpenClaw (ollama/deepseek-v4-flash:cloud):

/think max
→ Unsupported thinking level "max" for this model. Valid levels: off, minimal, low, medium, high

Root Cause Analysis

The thinking level validation in OpenClaw is hardcoded:

// thinking-HCcyCHV2.js
const BASE_THINKING_LEVELS = [
    "off",
    "minimal", 
    "low",
    "medium",
    "high"
];

When resolveThinkingProfile() is called for a generic provider (not OpenAI/Anthropic), it uses buildBaseThinkingProfile() which only includes these 5 levels. "max" and "xhigh" are never added for Ollama models.

Additionally, attempts to work around this via the model config fail:

"compat": {
  "supportsReasoningEffort": true,
  "supportedReasoningEfforts": ["low", "medium", "high", "max"]
}

OpenClaw rewrites the config and removes supportedReasoningEfforts.

Environment

  • OpenClaw version: v2026.4.23
  • Model: deepseek-v4-flash:cloud (via Ollama)
  • Provider: ollama

Suggested Fixes

  1. Short-term: Allow "max" as an alias for "high" when the model has reasoning: true but doesn't have a custom thinking profile
  2. Medium-term: Respect compat.supportedReasoningEfforts from model config instead of hardcoding levels
  3. Long-term: Query the provider for supported reasoning levels rather than maintaining a local allowlist
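Fixes 1 and 2 could be sketched together roughly as follows. This is a hypothetical illustration, not OpenClaw's actual API; only BASE_THINKING_LEVELS, compat.supportedReasoningEfforts, and the reasoning flag appear in this report, and the function names here are invented:

```javascript
const BASE_THINKING_LEVELS = ["off", "minimal", "low", "medium", "high"];

// Fix 2 (sketch): let the model config extend the allowlist instead of
// hardcoding it. Falls back to the base levels when no compat entry exists.
function resolveSupportedLevels(modelConfig) {
  const compat = modelConfig.compat || {};
  if (Array.isArray(compat.supportedReasoningEfforts)) {
    return ["off", ...compat.supportedReasoningEfforts];
  }
  return [...BASE_THINKING_LEVELS];
}

// Fix 1 (sketch): for reasoning-capable models without a custom profile,
// downgrade "max" to "high" rather than rejecting it outright.
function normalizeThinkingLevel(modelConfig, level) {
  const levels = resolveSupportedLevels(modelConfig);
  if (levels.includes(level)) return level;
  if (level === "max" && modelConfig.reasoning && levels.includes("high")) {
    return "high";
  }
  throw new Error(`Unsupported thinking level "${level}" for this model.`);
}

// Example: a generic reasoning-capable model with no compat overrides.
const model = { reasoning: true, compat: {} };
normalizeThinkingLevel(model, "max"); // → "high"
```

With the compat allowlist from the workaround attempt above ({"supportedReasoningEfforts": ["low", "medium", "high", "max"]}), "max" would pass through unchanged instead of being aliased.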

Workaround

Use /think high instead of /think max. The model responds correctly to "high" via Ollama.
