Closed
Labels
enhancement (New feature or request)
Description
Summary
When using OpenRouter models that support reasoning tokens (e.g. x-ai/grok-4.1-fast, OpenAI o-series, Gemini thinking models), OpenClaw's /think directives (off/low/medium/high) are not mapped to OpenRouter's reasoning.effort API parameter.
Current Behavior
- `/think:high` sets `thinkingLevel` internally, but this doesn't get passed through to the OpenRouter API request body.
- OpenRouter expects a `reasoning` object in the request body:

```json
{
  "reasoning": {
    "effort": "high"
  }
}
```

- Supported models: Grok (effort levels), Anthropic (max_tokens), Gemini (max_tokens), OpenAI o-series (effort levels)
Expected Behavior
When the provider is openrouter, map OpenClaw's thinking levels to OpenRouter's reasoning parameter:
| OpenClaw /think | OpenRouter reasoning.effort |
|---|---|
| off | none |
| minimal | minimal |
| low | low |
| medium | medium |
| high | high |
| xhigh | xhigh |
For Anthropic models via OpenRouter, could also map to reasoning.max_tokens.
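A possible shape for this mapping in the OpenRouter provider layer (a sketch only; `ThinkingLevel`, `withReasoning`, and the body shape are illustrative assumptions, not OpenClaw's actual internals):

```typescript
// Hypothetical mapping of OpenClaw thinking levels to OpenRouter's
// reasoning.effort values, per the table above.
type ThinkingLevel = "off" | "minimal" | "low" | "medium" | "high" | "xhigh";

const EFFORT_MAP: Record<ThinkingLevel, string> = {
  off: "none",
  minimal: "minimal",
  low: "low",
  medium: "medium",
  high: "high",
  xhigh: "xhigh",
};

// Attach the reasoning object to an OpenRouter chat-completions request body.
function withReasoning(
  body: Record<string, unknown>,
  level: ThinkingLevel,
): Record<string, unknown> {
  return { ...body, reasoning: { effort: EFFORT_MAP[level] } };
}
```

For Anthropic models, the same hook could emit `reasoning: { max_tokens: … }` instead of `effort`.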
Reference
- OpenRouter reasoning docs: https://openrouter.ai/docs/guides/best-practices/reasoning-tokens
- OpenRouter normalizes `reasoning.effort` across providers (Grok, OpenAI, Gemini, Anthropic)
- The `reasoning.exclude` option could also be useful (model reasons internally but doesn't return reasoning tokens)
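For the heartbeat use case below, a request body combining effort with `exclude` might look like this (a sketch against OpenRouter's documented `reasoning` shape; `heartbeatBody` is a hypothetical helper, and per-model support for `exclude` is an assumption):

```typescript
// Hypothetical helper: enable reasoning at the API level but drop the
// reasoning tokens from the response (the model still reasons internally).
function heartbeatBody(model: string, prompt: string) {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
    reasoning: {
      effort: "low",
      exclude: true, // reason internally, don't return reasoning tokens
    },
  };
}
```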
Use Case
Using openrouter/x-ai/grok-4.1-fast as heartbeat model — want reasoning enabled at the API level, not just as a prompt hint. Grok 4.1 Fast is extremely cheap ($0.20/$0.50 per M tokens) so reasoning tokens are very cost-effective.
Environment
- OpenClaw version: 2026.2.9
- Provider: openrouter
- Model: openrouter/x-ai/grok-4.1-fast