Summary
Neither `maxTokens` setting works with OpenRouter as a provider; the call falls back to the default hard limit of 4096 output tokens.
Steps to reproduce
- Use the following settings for the model on OpenRouter:
```json
{
  "id": "openai/gpt-oss-120b:free",
  "name": "openai/gpt-oss-120b:free (OpenRouter)",
  "api": "openai-completions",
  "reasoning": true,
  "input": ["text"],
  "contextWindow": 150000,
  "maxTokens": 40000,
  "compat": {
    "maxTokensField": "max_tokens"
  },
  "headers": {
    "max_tokens": "12000",
    "maxTokensField": "max_tokens"
  }
}
```
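For context, the intent of `compat.maxTokensField` is presumably to rename the token-limit field in the outgoing request body. A minimal sketch of that mapping, assuming a hypothetical `build_payload` helper and config shape for illustration (this is not OpenClaw's actual code):

```python
# Hypothetical sketch: map a model config's maxTokens (and the
# compat.maxTokensField override) onto an OpenAI-compatible body.
def build_payload(model_cfg: dict, messages: list) -> dict:
    payload = {"model": model_cfg["id"], "messages": messages}
    # Use the compat override if present, else the standard field name.
    field = model_cfg.get("compat", {}).get("maxTokensField", "max_tokens")
    if "maxTokens" in model_cfg:
        payload[field] = model_cfg["maxTokens"]
    return payload

cfg = {
    "id": "openai/gpt-oss-120b:free",
    "maxTokens": 40000,
    "compat": {"maxTokensField": "max_tokens"},
}
payload = build_payload(cfg, [{"role": "user", "content": "hi"}])
print(payload["max_tokens"])  # expected: 40000, not the 4096 seen in the log
```

If the mapping worked as sketched, the OpenRouter log should show a limit of 40000 (or 12000 from the header attempt) rather than the 4096 default.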
- Select this model as the default for the main agent.
- Ask the agent to generate more than 4096 tokens of output.
- Check the OpenRouter activity log:
| Timestamp | Provider / Model | App | Tokens | Cost | Speed | Finish |
|---|---|---|---|---|---|---|
| Feb 24, 11:11 PM | gpt-oss-120b (free) | OpenClaw | 31,351 >> 4,096 | $0 | 92.1tps | Length |
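To isolate whether the cap comes from OpenClaw or from OpenRouter itself, the same request can be sent to OpenRouter directly. A sketch of the request body for OpenRouter's OpenAI-compatible chat completions endpoint (the API key is a placeholder, and the network call is left commented out):

```python
import json

# Direct request body for OpenRouter's OpenAI-compatible endpoint.
# If the response here is also truncated at 4096 tokens, the cap is
# enforced upstream; if not, the bug is in how OpenClaw builds the call.
body = {
    "model": "openai/gpt-oss-120b:free",
    "messages": [{"role": "user", "content": "Write a very long essay."}],
    "max_tokens": 12000,  # the value that appears to be ignored
}
print(json.dumps(body, indent=2))

# import requests
# resp = requests.post(
#     "https://openrouter.ai/api/v1/chat/completions",
#     headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
#     json=body,
# )
```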
Expected behavior
More than 4096 output tokens are generated, up to the configured `maxTokens` limit.
Actual behavior
Output is capped at 4096 tokens regardless of the settings.
OpenClaw version
2026.2.23
Operating system
Ubuntu 24.04, node v22
Install method
pnpm build
Logs, screenshots, and evidence
Impact and severity
No response
Additional information
No response