[Bug]: OpenClaw sends deprecated max_tokens parameter to GPT-5.x models, causing persistent 400 errors
Bug type
Behavior bug (incorrect output/state without crash)
Summary
OpenClaw 2026.4.5 sends the deprecated max_tokens parameter when making API calls to OpenAI GPT-5.x models (tested with gpt-5.2). OpenAI's GPT-5.x family requires max_completion_tokens instead. This causes every request to fail with a 400 error:
{
  "error": {
    "message": "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.",
    "type": "invalid_request_error",
    "param": "max_tokens",
    "code": "unsupported_parameter"
  }
}
Steps to reproduce
- Configure the OpenAI provider with a valid API key and the gpt-5.2 model in openclaw.json
- Set gpt-5.2 as the default model, or switch to it via model override
- Send any message; the request fails silently or times out
- A direct curl to the OpenAI API with max_completion_tokens succeeds immediately:
curl -s https://api.openai.com/v1/chat/completions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model":"gpt-5.2","messages":[{"role":"user","content":"say hello"}],"max_completion_tokens":10}'
# Returns 200 OK with valid response
- The same request with max_tokens returns 400:
curl -s https://api.openai.com/v1/chat/completions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model":"gpt-5.2","messages":[{"role":"user","content":"say hello"}],"max_tokens":10}'
# Returns 400: "Unsupported parameter: 'max_tokens' is not supported with this model."
Expected behavior
OpenClaw should detect GPT-5.x models and use max_completion_tokens instead of max_tokens in the API request body. Alternatively, support the compat.maxTokensField model config option (referenced in issue #25794) to allow users to override this per-model.
Actual behavior
All requests to GPT-5.x models fail with 400 errors. OpenClaw surfaces this as a timeout or generic failure — no clear error message is shown to the user. Debugging required direct curl testing to isolate the root cause.
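Part of the fix is error surfacing: the adapter could forward OpenAI's own error message instead of collapsing it into a timeout. A minimal sketch of that idea (the function name and shape are hypothetical, not OpenClaw's actual internals):

```typescript
// Hypothetical sketch, not OpenClaw's real code: turn a non-2xx OpenAI
// response body into a user-visible message instead of a generic failure.
function describeFailure(status: number, body: unknown): string {
  // OpenAI error bodies look like { "error": { "message": "...", ... } }.
  const msg = (body as { error?: { message?: string } } | null)?.error?.message;
  return typeof msg === "string"
    ? `OpenAI request failed (${status}): ${msg}`
    : `OpenAI request failed with status ${status}`;
}
```

With this, the 400 above would surface as "OpenAI request failed (400): Unsupported parameter: 'max_tokens' ...", making the root cause visible without curl debugging.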
Environment
- OpenClaw version: 2026.4.5 (3e72c03)
- OS: macOS 26.3.1 (arm64)
- Node: v22.22.1
- Install method: pnpm (global)
- Gateway: local, LaunchAgent
- OpenAI provider: direct API key auth, baseUrl https://api.openai.com/v1
- Affected models: gpt-5.2; likely gpt-5.4 and the entire GPT-5.x family
- Unaffected models: gpt-4.1, gpt-4.1-mini, all Anthropic models, all Google models, all Ollama models
Workaround attempted (FAILED)
Adding "compat": {"maxTokensField": "max_completion_tokens"} to the model definition in models.providers.openai.models[] (as referenced in #25794) was tested and does not work. OpenClaw 2026.4.5 ignores the compat.maxTokensField config entirely: GPT-5.x requests continue to fail with the same 400 error after applying this config and restarting the gateway.
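For reference, this is the configuration shape that was tested, following the compat.maxTokensField option described in #25794 (the surrounding keys other than the compat block, such as the model's id field, are assumed here for illustration):

```json
{
  "models": {
    "providers": {
      "openai": {
        "models": [
          {
            "id": "gpt-5.2",
            "compat": { "maxTokensField": "max_completion_tokens" }
          }
        ]
      }
    }
  }
}
```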
Suggested fix
OpenClaw's OpenAI provider adapter should automatically use max_completion_tokens for models in the GPT-5.x family, matching OpenAI's current API requirements.
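As a sketch of what that could look like (function names and the model-ID heuristic here are illustrative assumptions, not OpenClaw's actual internals):

```typescript
// Hypothetical sketch of the suggested fix: GPT-5.x models reject
// max_tokens, so select the token-limit field by model family.
function maxTokensField(model: string): "max_tokens" | "max_completion_tokens" {
  // Matches gpt-5, gpt-5.2, gpt-5-anything; not gpt-4.1 or gpt-50.
  return /^gpt-5([.-]|$)/.test(model) ? "max_completion_tokens" : "max_tokens";
}

// Build the token-limit portion of the Chat Completions request body.
function buildTokenLimit(model: string, limit: number): Record<string, number> {
  return { [maxTokensField(model)]: limit };
}
```

A gpt-5.2 request would then carry {"max_completion_tokens": 10} while gpt-4.1 keeps {"max_tokens": 10}; an explicit compat.maxTokensField override, once honored, could take precedence over this heuristic.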