Bug type
Behavior bug (incorrect output/state without crash)
Environment
- OpenClaw version: 2026.4.9 (0512059)
- OS: Ubuntu 24.04 (EC2)
- Model: `openai-codex/gpt-5.3-codex` (primary), OAuth mode
- Channel: Telegram
- Config: `tools.exec.host: gateway`
Summary
When using `openai-codex/gpt-5.3-codex` as the primary model via OAuth, the agent never emits `function_call` items in main agent sessions (Telegram DM). Every response contains only `text` content with `stopReason: "stop"`. The agent verbally promises to execute tasks but never calls any tools (`exec`, `process`, `web_search`, etc.).
Evidence from session logs
4 user messages, 4 assistant responses, 0 tool calls:
```
ASSISTANT api=openai-codex-responses model=gpt-5.3-codex stop=stop
  content_types=["text"]   ← text only, zero function_call
  usage: in=17562 out=26 cache_r=0
ASSISTANT api=openai-codex-responses model=gpt-5.3-codex stop=stop
  content_types=["text"]
  usage: in=368 out=80 cache_r=17408
```
Input token count is suspiciously small (~18K total context). If tool definitions were properly injected, tool schemas alone would add several thousand tokens. This strongly suggests tools are not being sent in the API request payload.
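The overhead claim can be sanity-checked with a rough heuristic. The sketch below assumes ~4 characters per token and a hypothetical `exec` tool schema; neither the schema nor the ratio comes from OpenClaw itself, this only illustrates the scale of missing schema tokens:

```python
import json

# Hypothetical JSON Schema for an exec-style tool; OpenClaw's real
# definitions are not shown here, this only illustrates typical size.
exec_tool = {
    "type": "function",
    "name": "exec",
    "description": "Run a shell command on the gateway host and return its output.",
    "parameters": {
        "type": "object",
        "properties": {
            "command": {"type": "string", "description": "Shell command to run"},
            "timeout_s": {"type": "integer", "description": "Kill after this many seconds"},
        },
        "required": ["command"],
    },
}

def rough_tokens(obj) -> int:
    # Crude estimate: ~4 characters per token for English/JSON text.
    return len(json.dumps(obj)) // 4

per_tool = rough_tokens(exec_tool)
# A toolset of a dozen tools at ~100 tokens each already adds >1K tokens
# of schema overhead, which should be visible in the input token count.
print(per_tool, 12 * per_tool)
```

Even by this crude measure, a full toolset should inflate every request by thousands of tokens, so the ~18K total across four turns leaves little room for tool schemas.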
Root cause analysis
OpenClaw hardcodes `OPENAI_CODEX_BASE_URL = "https://chatgpt.com/backend-api"` for the `openai-codex-responses` API path. This endpoint may handle tool definitions differently from the standard `api.openai.com/v1/responses`.
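For comparison, a minimal Responses API request that does carry tool definitions looks roughly like this. The tool schema is illustrative, not OpenClaw's actual definition; the flattened `type`/`name`/`parameters` layout is the Responses API shape, which differs from Chat Completions' nested `function` object:

```python
import json

# Sketch of a Responses API payload that includes tools. If the
# chatgpt.com/backend-api path omits the "tools" key (or sends it in the
# Chat Completions shape), the model can never emit function_call items.
payload = {
    "model": "gpt-5.3-codex",
    "input": [{"role": "user", "content": "run ls /tmp"}],
    "tools": [
        {
            "type": "function",
            "name": "exec",  # hypothetical tool name for illustration
            "description": "Run a shell command on the gateway.",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        }
    ],
}
print(json.dumps(payload, indent=2))
```

Capturing the actual request body sent to `chatgpt.com/backend-api` and diffing it against this shape would confirm or rule out the missing-tools hypothesis.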
Key observations:
- No auth errors: the OAuth token is valid, and API calls succeed with usage data returned
- No `function_call` items: the model only outputs text and never calls tools
- Rollout sessions work differently: sessions under `workspace/sessions/` (e.g. auto-upgrade rollouts) DO produce `function_call` items via the same provider, suggesting tool injection works in some code paths but not in the main-agent Telegram session path
- Setting `tools.exec.host: gateway` does not help: the issue is upstream, since the model never issues tool calls in the first place, making exec routing irrelevant
- A fresh session (`/new`) exhibits the same behavior, so this is not a context-pollution issue
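The absence of tool calls can be confirmed mechanically by scanning a session transcript. A minimal sketch, assuming JSONL transcripts where each entry has a `role` and a list of typed content items (the on-disk format here is an assumption, not OpenClaw's documented schema):

```python
import json
from collections import Counter

def content_type_counts(jsonl_lines):
    """Tally content item types across assistant messages in a transcript."""
    counts = Counter()
    for line in jsonl_lines:
        entry = json.loads(line)
        if entry.get("role") != "assistant":
            continue
        for item in entry.get("content", []):
            counts[item.get("type")] += 1
    return counts

# Example shaped like the logs above: a text-only assistant turn.
sample = [
    '{"role": "user", "content": [{"type": "text", "text": "run ls /tmp"}]}',
    '{"role": "assistant", "content": [{"type": "text", "text": "On it."}]}',
]
print(content_type_counts(sample))
```

Running this over a main-agent Telegram session versus a rollout session under `workspace/sessions/` should show zero `function_call` items in the former and nonzero in the latter.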
Steps to reproduce
- Configure `openai-codex/gpt-5.3-codex` as the primary model with OAuth
- Connect via Telegram
- Send a message requiring tool use (e.g. "run `ls /tmp`" or "fetch this URL and summarize")
- Observe: the agent responds with text promising to execute, but no tool call is made
Expected behavior
The model should emit `function_call` items for `exec`, `web_search`, etc., and the agent loop should execute them on the gateway and return `function_call_output`.
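Concretely, a healthy turn should contain a `function_call` / `function_call_output` pair in the Responses API shape, linked by `call_id` (the id and argument values below are illustrative):

```python
import json

# What the assistant output item should look like when the model calls a tool.
function_call = {
    "type": "function_call",
    "call_id": "call_abc123",  # illustrative id
    "name": "exec",
    "arguments": json.dumps({"command": "ls /tmp"}),
}

# What the agent loop should send back after running the tool on the gateway;
# call_id must match the originating call.
function_call_output = {
    "type": "function_call_output",
    "call_id": function_call["call_id"],
    "output": "file1  file2",  # illustrative command output
}
print(function_call["name"], "->", function_call_output["output"])
```

None of the main-agent sessions in the logs above contain either item type.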
Workaround
Switch the primary model to a non-Codex provider (e.g. `google-gemini-cli/gemini-3-flash-preview` or `anthropic/claude-sonnet-4-6`). Standard OpenAI models via `api.openai.com` Chat Completions also work correctly.
Possibly related
The `chatgpt.com/backend-api` endpoint may require different OAuth scopes (`api.responses.write`, `model.request`) or a different tool-definition format than what OpenClaw currently sends. OpenAI's Codex CLI itself uses this same backend endpoint but may construct requests differently.