
[Bug]: openai-codex provider (chatgpt.com/backend-api) never emits function_call in main agent sessions — model receives no tool definitions #65152

@bellarivialabs

Description

Bug type

Behavior bug (incorrect output/state without crash)

Environment

  • OpenClaw version: 2026.4.9 (0512059)
  • OS: Ubuntu 24.04 (EC2)
  • Model: openai-codex/gpt-5.3-codex (primary), OAuth mode
  • Channel: Telegram
  • Config: tools.exec.host: gateway

Summary

When using openai-codex/gpt-5.3-codex as the primary model via OAuth, the agent never emits function_call items in main agent sessions (Telegram DM). Every response contains only text content with stopReason: "stop". The agent verbally promises to execute tasks but never calls any tools (exec, process, web_search, etc.).

Evidence from session logs

4 user messages, 4 assistant responses, 0 tool calls:

ASSISTANT api=openai-codex-responses model=gpt-5.3-codex stop=stop
  content_types=["text"]    ← text only, zero function_call
  usage: in=17562 out=26 cache_r=0

ASSISTANT api=openai-codex-responses model=gpt-5.3-codex stop=stop  
  content_types=["text"]
  usage: in=368 out=80 cache_r=17408

Input token count is suspiciously small (~18K total context). If tool definitions were properly injected, tool schemas alone would add several thousand tokens. This strongly suggests tools are not being sent in the API request payload.
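
As a rough sanity check, the prompt overhead of tool schemas can be estimated from their serialized JSON. The `exec`/`web_search` schemas below are illustrative stand-ins, not OpenClaw's actual definitions, and ~4 characters per token is only a heuristic:

```python
import json

# Hypothetical tool definitions in the Responses API flattened "tools"
# format (names match the tools mentioned above; schemas are invented).
TOOLS = [
    {
        "type": "function",
        "name": "exec",
        "description": "Run a shell command on the gateway host and return stdout/stderr.",
        "parameters": {
            "type": "object",
            "properties": {
                "command": {"type": "string", "description": "Shell command to run"},
                "timeout_s": {"type": "integer", "description": "Timeout in seconds"},
            },
            "required": ["command"],
        },
    },
    {
        "type": "function",
        "name": "web_search",
        "description": "Search the web and return the top results.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
]

def estimated_tokens(obj) -> int:
    """Crude token estimate: roughly 4 characters of JSON per token."""
    return len(json.dumps(obj)) // 4

overhead = sum(estimated_tokens(t) for t in TOOLS)
print(f"Estimated prompt overhead for {len(TOOLS)} tools: ~{overhead} tokens")
```

With a full tool set (more tools, longer descriptions), this overhead would be clearly visible in the `in=` usage numbers; its absence is consistent with no tools being sent.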

Root cause analysis

OpenClaw hardcodes OPENAI_CODEX_BASE_URL = "https://chatgpt.com/backend-api" for the openai-codex-responses API path. This endpoint may handle tool definitions differently from the standard api.openai.com/v1/responses.
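
For comparison, a minimal Responses API request body that carries tool definitions looks roughly like the sketch below. Field names follow the public api.openai.com/v1/responses schema; whether chatgpt.com/backend-api accepts the same shape is exactly the open question:

```python
import json

# Sketch of a Responses API request body with a tools array attached.
# The exec schema is illustrative, not OpenClaw's actual definition.
payload = {
    "model": "gpt-5.3-codex",
    "input": [{"role": "user", "content": "run ls /tmp"}],
    "tools": [
        {
            "type": "function",
            "name": "exec",
            "description": "Run a shell command.",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        }
    ],
    "tool_choice": "auto",
}

# A request serialized without a non-empty "tools" array can never yield
# function_call items, regardless of what the model would otherwise do.
assert payload.get("tools"), "tools missing from request body"
print(json.dumps(payload, indent=2)[:120])
```

If the openai-codex-responses code path drops or reshapes this `tools` array before the request reaches chatgpt.com/backend-api, the observed text-only behavior follows directly.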

Key observations:

  1. No auth errors — OAuth token is valid, API calls succeed with usage data returned
  2. No function_call items — model only outputs text, never calls tools
  3. Rollout sessions work differently — sessions under workspace/sessions/ (e.g. auto-upgrade rollouts) DO produce function_call items via the same provider. Tool injection therefore works in at least some code paths, but not in the main agent's Telegram session path
  4. Setting tools.exec.host: gateway does not help — the issue is upstream: the model never issues tool calls in the first place, so exec routing is irrelevant
  5. Fresh session (/new) exhibits the same behavior — not a context pollution issue
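
One way to confirm the hypothesis is to capture the outbound request body (e.g. via a logging proxy) and check for a `tools` array. A hypothetical helper — `diagnose_request` is not an OpenClaw function — could look like:

```python
import json

def diagnose_request(body_json: str) -> str:
    """Classify a captured outbound request body by whether it carries tools."""
    body = json.loads(body_json)
    tools = body.get("tools") or []
    if not tools:
        return "no-tools: model cannot emit function_call"
    return f"ok: {len(tools)} tool(s) attached"

# Simulated capture matching the observed behavior (text-only responses,
# small input token counts): no tools field in the body at all.
captured = json.dumps({"model": "gpt-5.3-codex", "input": []})
print(diagnose_request(captured))
```

If main-agent Telegram requests classify as "no-tools" while rollout-session requests classify as "ok", that would pinpoint the divergence between the two code paths.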

Steps to reproduce

  1. Configure openai-codex/gpt-5.3-codex as primary model with OAuth
  2. Connect via Telegram
  3. Send a message requiring tool use (e.g. "run ls /tmp" or "fetch this URL and summarize")
  4. Observe: agent responds with text promising to execute, but no tool call is made

Expected behavior

Model should emit function_call items for exec, web_search, etc., and the agent loop should execute them on the gateway and return function_call_output.
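
In Responses API item terms, the expected round-trip is a function_call item from the model followed by a function_call_output item from the agent loop. The shapes below follow the public Responses API; the call_id, arguments, and output values are illustrative:

```python
import json

# Item the model is expected to emit when asked to "run ls /tmp"
# (values are illustrative).
function_call_item = {
    "type": "function_call",
    "name": "exec",
    "call_id": "call_abc123",
    "arguments": '{"command": "ls /tmp"}',
}

# After executing the tool on the gateway, the agent loop returns the
# result keyed to the same call_id.
function_call_output_item = {
    "type": "function_call_output",
    "call_id": function_call_item["call_id"],
    "output": "file1.txt\nfile2.txt",
}

assert function_call_output_item["call_id"] == function_call_item["call_id"]
print(json.loads(function_call_item["arguments"])["command"])
```

In the buggy sessions, neither item ever appears: responses contain only text items with stopReason "stop".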

Workaround

Switch primary model to a non-Codex provider (e.g. google-gemini-cli/gemini-3-flash-preview or anthropic/claude-sonnet-4-6). Standard OpenAI models via api.openai.com Chat Completions also work correctly.

Possibly related

The chatgpt.com/backend-api endpoint may require different OAuth scopes (api.responses.write, model.request) or a different tool definition format than what OpenClaw currently sends. OpenAI's Codex CLI itself uses this same backend endpoint but may construct requests differently.
