
[Bug]: maxTokens model param is ignored with openrouter as a provider #25794

@Const011

Description


Summary

Neither way of setting maxTokens works with OpenRouter as a provider; the call falls back to the default hard limit of 4096 output tokens.

Steps to reproduce

  1. Use the following settings for the model at OpenRouter:

     ```json
     {
       "id": "openai/gpt-oss-120b:free",
       "name": "openai/gpt-oss-120b:free (OpenRouter)",
       "api": "openai-completions",
       "reasoning": true,
       "input": [
         "text"
       ],
       "contextWindow": 150000,
       "maxTokens": 40000,
       "compat": {
         "maxTokensField": "max_tokens"
       },
       "headers": {
         "max_tokens": "12000",
         "maxTokensField": "max_tokens"
       }
     }
     ```

  2. Select this model as the default on the main agent.
  3. Ask the agent to generate an output of more than 4096 tokens.
  4. Check the OpenRouter log:
| Timestamp | Provider / Model | App | Tokens | Cost | Speed | Finish |
| --- | --- | --- | --- | --- | --- | --- |
| Feb 24, 11:11 PM | gpt-oss-120b (free) | OpenClaw | 31,351 >> 4,096 | $0 | 92.1tps | Length |
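To make the expected mapping concrete: the request builder should write the config's `maxTokens` into the field named by `compat.maxTokensField`, and only fall back to 4096 when nothing is configured. The sketch below is a hypothetical helper illustrating that contract, not OpenClaw's actual implementation.

```typescript
// Hypothetical sketch of the expected behavior -- not OpenClaw's real code.
interface ModelConfig {
  maxTokens?: number;
  compat?: { maxTokensField?: string };
}

// The hard limit the call currently falls back to, per the report.
const DEFAULT_MAX_TOKENS = 4096;

function buildRequestBody(
  cfg: ModelConfig,
  messages: object[],
): Record<string, unknown> {
  // Honor the compat override for the field name; default to "max_tokens".
  const field = cfg.compat?.maxTokensField ?? "max_tokens";
  return {
    messages,
    // Expected: the configured limit wins; 4096 only applies when unset.
    [field]: cfg.maxTokens ?? DEFAULT_MAX_TOKENS,
  };
}

// With the config from this report, the body should carry max_tokens: 40000.
const body = buildRequestBody(
  { maxTokens: 40000, compat: { maxTokensField: "max_tokens" } },
  [{ role: "user", content: "hi" }],
);
console.log(body["max_tokens"]); // 40000
```

With the observed behavior, the request instead behaves as if `cfg.maxTokens` were never read, so the 4096 default always wins.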

Expected behavior

More than 4096 output tokens are generated, up to the configured maxTokens limit of 40000.

Actual behavior

Output is capped at 4096 tokens regardless of the settings.

OpenClaw version

2026.2.23

Operating system

Ubuntu 24.04, node v22

Install method

pnpm build

Logs, screenshots, and evidence

Impact and severity

No response

Additional information

No response

Labels

bug (Something isn't working)