[Bug]: OpenClaw sends deprecated max_tokens parameter to GPT-5.x models, causing persistent 400 errors #62130

@Milo-Hamweystudios

Description


Bug type

Behavior bug (incorrect output/state without crash)

Summary

OpenClaw 2026.4.5 sends the deprecated max_tokens parameter when making API calls to OpenAI GPT-5.x models (tested with gpt-5.2). OpenAI's GPT-5.x family requires max_completion_tokens instead. This causes every request to fail with a 400 error:

{
  "error": {
    "message": "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.",
    "type": "invalid_request_error",
    "param": "max_tokens",
    "code": "unsupported_parameter"
  }
}

Steps to reproduce

  1. Configure OpenAI provider with a valid API key and gpt-5.2 model in openclaw.json
  2. Set gpt-5.2 as default or switch to it via model override
  3. Send any message — request fails silently or times out
  4. Direct curl to OpenAI API with max_completion_tokens succeeds immediately:
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-5.2","messages":[{"role":"user","content":"say hello"}],"max_completion_tokens":10}'
# Returns 200 OK with valid response
  5. Same request with max_tokens returns 400:
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-5.2","messages":[{"role":"user","content":"say hello"}],"max_tokens":10}'
# Returns 400: "Unsupported parameter: 'max_tokens' is not supported with this model."
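For reference, the provider configuration from step 1 looked roughly like the following sketch. The exact openclaw.json schema is assumed here from the models.providers.openai.models[] field path used elsewhere in this report and may differ from the real config format:

```json
{
  "models": {
    "providers": {
      "openai": {
        "baseUrl": "https://api.openai.com/v1",
        "apiKey": "${OPENAI_API_KEY}",
        "models": [
          { "id": "gpt-5.2" }
        ]
      }
    }
  }
}
```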

Expected behavior

OpenClaw should detect GPT-5.x models and use max_completion_tokens instead of max_tokens in the API request body. Alternatively, support the compat.maxTokensField model config option (referenced in issue #25794) to allow users to override this per-model.

Actual behavior

All requests to GPT-5.x models fail with 400 errors. OpenClaw surfaces this as a timeout or generic failure — no clear error message is shown to the user. Debugging required direct curl testing to isolate the root cause.

Environment

  • OpenClaw version: 2026.4.5 (3e72c03)
  • OS: macOS 26.3.1 (arm64)
  • Node: v22.22.1
  • Install method: pnpm (global)
  • Gateway: local, LaunchAgent
  • OpenAI provider: Direct API key auth, baseUrl https://api.openai.com/v1
  • Affected models: gpt-5.2, likely gpt-5.4 and all GPT-5.x family
  • Unaffected models: gpt-4.1, gpt-4.1-mini, all Anthropic models, all Google models, all Ollama models

Workaround attempted (FAILED)

Adding "compat": {"maxTokensField": "max_completion_tokens"} to the model definition in models.providers.openai.models[] (as referenced in #25794) was tested and does not work. OpenClaw 2026.4.5 ignores the compat.maxTokensField config entirely; GPT-5.x requests continue to fail with the same 400 error after applying this config and restarting the gateway.
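For completeness, the failed workaround was applied as roughly the following model entry (the surrounding schema is assumed from the field path above):

```json
{
  "id": "gpt-5.2",
  "compat": { "maxTokensField": "max_completion_tokens" }
}
```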

Suggested fix

OpenClaw's OpenAI provider adapter should automatically use max_completion_tokens for models in the GPT-5.x family, matching OpenAI's current API requirements.
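A minimal sketch of what the adapter-side selection could look like. This is illustrative only: the function name and the model-matching regex are assumptions, not OpenClaw internals, and the real fix would also want to honor a per-model compat override:

```typescript
// Hypothetical helper, not OpenClaw's actual API: choose which token-limit
// field to send based on the model family.
type TokenLimitParam =
  | { max_tokens: number }
  | { max_completion_tokens: number };

function buildTokenLimitParam(model: string, limit: number): TokenLimitParam {
  // GPT-5.x models reject max_tokens and require max_completion_tokens
  // per OpenAI's current Chat Completions API; older models still accept
  // max_tokens. The regex matches "gpt-5", "gpt-5.2", "gpt-5-…", etc.
  const needsCompletionTokens = /^gpt-5(\.|-|$)/.test(model);
  return needsCompletionTokens
    ? { max_completion_tokens: limit }
    : { max_tokens: limit };
}
```

The returned object would be spread into the request body in place of the hard-coded max_tokens field, so gpt-4.1 and other providers keep their current behavior.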
