Summary
`/think max` fails with `Unsupported thinking level "max" for this model` even though the underlying provider (Ollama) accepts `reasoning_effort: "max"` without issue.
Bug Description
Expected Behavior
When using `/think max` with a model that supports reasoning (e.g., `deepseek-v4-flash:cloud` via Ollama), OpenClaw should either:
- Accept "max" as a valid thinking level, or
- Map it to the highest supported level and send it to the provider (see the sketch below)
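A minimal sketch of the mapping option; the `clampThinkingLevel` helper and `LEVEL_ORDER` list are hypothetical names introduced here for illustration, not OpenClaw internals:

```js
// Hypothetical illustration, not OpenClaw's actual code.
const LEVEL_ORDER = ["off", "minimal", "low", "medium", "high", "max"];

// Clamp a requested level to the highest level the profile supports,
// instead of rejecting it outright.
function clampThinkingLevel(requested, supportedLevels) {
  if (supportedLevels.includes(requested)) return requested;
  const requestedRank = LEVEL_ORDER.indexOf(requested);
  if (requestedRank === -1) return null; // genuinely unknown level
  // Pick the highest supported level at or below the requested one.
  for (let rank = requestedRank; rank >= 0; rank--) {
    if (supportedLevels.includes(LEVEL_ORDER[rank])) return LEVEL_ORDER[rank];
  }
  return null;
}

// clampThinkingLevel("max", ["off", "minimal", "low", "medium", "high"]) → "high"
```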
Actual Behavior
OpenClaw rejects "max" with error:
Unsupported thinking level "max" for this model. Valid levels: off, minimal, low, medium, high
This happens before any request is sent to the provider. The validation is entirely local/hardcoded.
Evidence
Direct curl to Ollama works perfectly
```bash
curl -X POST "https://ollama.com/api/chat" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OLLAMA_API_KEY" \
  -d '{
    "model": "deepseek-v4-flash:cloud",
    "messages": [{"role": "user", "content": "Say hi briefly"}],
    "options": {"reasoning_effort": "max"}
  }'
```
Result: ✅ Works. The model accepts `"max"` and returns thinking content.
OpenClaw rejects it locally
Same model via OpenClaw (`ollama/deepseek-v4-flash:cloud`):

```
/think max
→ Unsupported thinking level "max" for this model. Valid levels: off, minimal, low, medium, high
```
Root Cause Analysis
The thinking level validation in OpenClaw is hardcoded:
```js
// thinking-HCcyCHV2.js
const BASE_THINKING_LEVELS = [
  "off",
  "minimal",
  "low",
  "medium",
  "high"
];
```
When `resolveThinkingProfile()` is called for a generic provider (i.e., anything other than OpenAI or Anthropic), it falls back to `buildBaseThinkingProfile()`, which only includes these five levels; `"max"` and `"xhigh"` are never added for Ollama models.
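A reconstructed sketch of that branching, based only on the behavior described above; the provider cases and the `"xhigh"` placement are guesses, not the actual bundle source:

```js
// Sketch reconstructed from observed behavior, not from the real bundle.
const BASE_THINKING_LEVELS = ["off", "minimal", "low", "medium", "high"];

function buildBaseThinkingProfile() {
  // Hardcoded list: "max" is unreachable for generic providers.
  return { levels: BASE_THINKING_LEVELS };
}

function resolveThinkingProfile(provider) {
  switch (provider) {
    case "openai":
    case "anthropic":
      // Provider-specific profiles presumably extend the base list
      // (e.g., with "xhigh"); the exact contents are unknown.
      return { levels: [...BASE_THINKING_LEVELS, "xhigh"] };
    default:
      // Ollama and every other generic provider land here.
      return buildBaseThinkingProfile();
  }
}
```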
Additionally, attempting to work around this via the model config fails:

```json
"compat": {
  "supportsReasoningEffort": true,
  "supportedReasoningEfforts": ["low", "medium", "high", "max"]
}
```
OpenClaw rewrites the config and removes the `supportedReasoningEfforts` key.
Environment
- OpenClaw version: v2026.4.23
- Model: `deepseek-v4-flash:cloud` (via Ollama)
- Provider: `ollama`
Suggested Fixes
- Short-term: Allow "max" as an alias for "high" when the model has `reasoning: true` but no custom thinking profile
- Medium-term: Respect `compat.supportedReasoningEfforts` from the model config instead of hardcoding levels (a sketch follows this list)
- Long-term: Query the provider for its supported reasoning levels rather than maintaining a local allowlist
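A minimal sketch of the medium-term fix, assuming the resolved model config is available where the profile is built; the `buildThinkingProfile` signature is hypothetical, and the `compat` shape is taken from the workaround attempt above:

```js
// Sketch only. BASE_THINKING_LEVELS is the hardcoded array from the
// bundle snippet in Root Cause Analysis.
function buildThinkingProfile(modelConfig) {
  const declared = modelConfig?.compat?.supportedReasoningEfforts;
  if (Array.isArray(declared) && declared.length > 0) {
    // Trust the config's declared efforts; keep "off" available and dedupe.
    return { levels: [...new Set(["off", ...declared])] };
  }
  return { levels: BASE_THINKING_LEVELS };
}
```

With the `compat` block above, this yields `off, low, medium, high, max`, so `/think max` would pass local validation and reach the provider.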
Workaround
Use `/think high` instead of `/think max`. The model responds correctly to `"high"` via Ollama.