[Bug] reasoning_effort not sent for gpt-5.4 in isolated cron sessions #63369

@rikimasorati


Environment

  • OC version: 2026.4.5
  • OS: Linux (Ubuntu 24.04)
  • Provider: openai (api: openai-responses)
  • Model: gpt-5.4

Description

Isolated cron sessions running openai/gpt-5.4 produce zero reasoning tokens: OC does not send reasoning.effort to the OpenAI Responses API, even though the session header records thinkingLevel: high and the model params include reasoning_effort: "high".

Direct API call works. Calling the OpenAI Responses API directly (bypassing OC) with reasoning.effort: "high" returns 90 reasoning tokens. The model supports it. OC just isn't sending it.

Mini models work. openai-mini/gpt-5.1-codex-mini in the same session type, same gateway, correctly receives reasoning_effort and returns 15,000+ reasoning tokens.

Reproduction

# 1. Configure model params:
# agents.defaults.models.openai/gpt-5.4 = {"params": {"reasoning_effort": "high"}}
# agents.defaults.thinkingDefault = "medium"

# 2. Schedule isolated cron job:
openclaw cron add --name test-reasoning --at 1m --delete-after-run \
  --session isolated --model openai/gpt-5.4 --thinking high \
  --no-deliver --message "What is 2+2? Show reasoning."

# 3. After job completes, check session JSONL:
# - thinkingLevel: "high" is set correctly in session header
# - Assistant response has ZERO type:"thinking" content blocks
# - Usage shows total_tokens = input_tokens (no reasoning overhead)

# 4. For comparison, same test with mini:
openclaw cron add --name test-mini --at 1m --delete-after-run \
  --session isolated --model openai-mini/gpt-5.1-codex-mini --thinking medium \
  --no-deliver --message "What is 2+2? Show reasoning."
# Result: 15,000+ reasoning tokens, multiple thinking blocks
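The JSONL checks in step 3 can be scripted. A minimal sketch, assuming each transcript line is a JSON object, assistant content blocks carry a `type` field, and usage records expose `input_tokens`/`total_tokens` (the sample transcript below is synthetic, not real OC output):

```python
import json

def audit_session(lines):
    """Count assistant 'thinking' blocks and capture the last usage record."""
    thinking_blocks = 0
    usage = {}
    for line in lines:
        event = json.loads(line)
        for block in event.get("content") or []:
            if isinstance(block, dict) and block.get("type") == "thinking":
                thinking_blocks += 1
        usage = event.get("usage", usage)
    return thinking_blocks, usage

# Synthetic transcript matching the failing case:
sample = [
    '{"type": "thinking_level_change", "thinkingLevel": "high"}',
    '{"role": "assistant", "content": [{"type": "text", "text": "4"}],'
    ' "usage": {"input_tokens": 20, "total_tokens": 20}}',
]
blocks, usage = audit_session(sample)
overhead = usage["total_tokens"] - usage["input_tokens"]
print(blocks, overhead)  # 0 thinking blocks, 0 reasoning overhead
```

A healthy gpt-5.1-codex-mini session should instead show a nonzero block count and total_tokens well above input_tokens.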

Direct API verification

Calling the OpenAI Responses API directly from the same server, same API key:

import os
import requests

payload = {
    "model": "gpt-5.4",
    "input": [{"role": "user", "content": "What is 2+2? Think step by step."}],
    "reasoning": {"effort": "high"},
}
resp = requests.post("https://api.openai.com/v1/responses",
                     headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
                     json=payload)
# Result: 90 reasoning tokens, reasoning output item present

This confirms the OpenAI API supports reasoning for gpt-5.4. The issue is that OC does not forward the reasoning configuration to the API.

Session evidence

Session JSONL shows:

{"type": "thinking_level_change", "thinkingLevel": "high"}
{"type": "custom", "customType": "model-snapshot", "data": {"provider": "openai", "modelApi": "openai-responses", "modelId": "gpt-5.4"}}

But the assistant response contains only type: "text" blocks, no type: "thinking" blocks.

What we tested (all failed for gpt-5.4)

  • Different providers (openai, openai-mini): no effect
  • Removing store:true from params: no effect
  • Setting compat.supportsStore: false on model: no effect
  • Changing thinkingDefault from adaptive to medium: no effect
  • All combinations of the above: no effect

The only model that works is gpt-5.1-codex-mini. All full GPT-5 models (gpt-5.4, gpt-5.3-codex) fail regardless of provider or configuration.

Expected behavior

OC should forward reasoning.effort to the OpenAI Responses API when thinkingLevel is set and the model has reasoning: true.
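For reference, the expected wiring can be sketched as below. The function and parameter names are hypothetical, not OC internals; the only assumption is that thinkingLevel maps one-to-one onto the Responses API's reasoning.effort values:

```python
def build_responses_payload(model_id, messages, thinking_level=None,
                            supports_reasoning=False):
    # Hypothetical sketch of the expected behavior: forward thinkingLevel
    # as reasoning.effort whenever the model is flagged reasoning: true.
    payload = {"model": model_id, "input": messages}
    if supports_reasoning and thinking_level:
        payload["reasoning"] = {"effort": thinking_level}
    return payload

good = build_responses_payload(
    "gpt-5.4",
    [{"role": "user", "content": "What is 2+2?"}],
    thinking_level="high",
    supports_reasoning=True,
)
print(good.get("reasoning"))  # expected: {'effort': 'high'}
```

The observed bug behaves as if the `supports_reasoning and thinking_level` branch is never taken for full GPT-5 models, while it is taken for gpt-5.1-codex-mini.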
