Map /think directives to OpenRouter reasoning.effort API parameter #14664

@realsamrat

Description

Summary

When using OpenRouter models that support reasoning tokens (e.g. x-ai/grok-4.1-fast, OpenAI o-series, Gemini thinking models), OpenClaw's /think directives (off/minimal/low/medium/high/xhigh) are not mapped to OpenRouter's reasoning.effort API parameter.

Current Behavior

  • /think:high sets thinkingLevel internally, but this value is never passed through to the OpenRouter API request body
  • OpenRouter expects a reasoning object in the request body:
{
  "reasoning": {
    "effort": "high"
  }
}
  • Supported models: Grok (effort levels), Anthropic (max_tokens), Gemini (max_tokens), OpenAI o-series (effort levels)
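A minimal sketch of the request body OpenClaw would need to send. The model name and message are illustrative; the reasoning shape is the one quoted from OpenRouter's docs above:

```typescript
// Illustrative OpenRouter chat-completions request body with the
// `reasoning` object included. Only the `reasoning` shape is taken
// from this issue; everything else is a placeholder.
const body = {
  model: "x-ai/grok-4.1-fast",
  messages: [{ role: "user", content: "ping" }],
  reasoning: { effort: "high" },
};

// OpenClaw would serialize this and POST it to the OpenRouter
// chat completions endpoint.
const json = JSON.stringify(body);
```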

Expected Behavior

When the provider is openrouter, map OpenClaw's thinking levels to OpenRouter's reasoning parameter:

OpenClaw /think    OpenRouter reasoning.effort
off                none
minimal            minimal
low                low
medium             medium
high               high
xhigh              xhigh

For Anthropic models via OpenRouter, could also map to reasoning.max_tokens.
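The mapping above could be sketched as a small helper. This is a hypothetical implementation, not OpenClaw's actual internals: the type and function names are invented, and treating "off" as reasoning.enabled = false (rather than sending an effort of "none") is an assumption about how OpenRouter's reasoning object is best disabled:

```typescript
// Hypothetical helper mapping OpenClaw /think levels to an OpenRouter
// `reasoning` object. Names are illustrative, not OpenClaw internals.
type ThinkLevel = "off" | "minimal" | "low" | "medium" | "high" | "xhigh";

interface ReasoningConfig {
  effort?: string;
  enabled?: boolean;
}

function reasoningForThinkLevel(level: ThinkLevel): ReasoningConfig {
  // Assumption: "off" disables reasoning outright instead of
  // sending an effort value.
  if (level === "off") return { enabled: false };
  // All other levels pass through verbatim per the table above.
  return { effort: level };
}
```

For Anthropic models via OpenRouter, the same helper could instead emit reasoning.max_tokens, since those models take a token budget rather than an effort level.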

Reference

Use Case

Using openrouter/x-ai/grok-4.1-fast as the heartbeat model; reasoning should be enabled at the API level, not merely hinted at in the prompt. Grok 4.1 Fast is extremely cheap ($0.20/$0.50 per M input/output tokens), so reasoning tokens are very cost-effective.

Environment

  • OpenClaw version: 2026.2.9
  • Provider: openrouter
  • Model: openrouter/x-ai/grok-4.1-fast


Labels

enhancement (New feature or request)
