Bug type
Crash (process/app exits or hangs)
Beta release blocker
No
Summary
With codex/gpt-5.4 on OpenClaw 2026.4.12, the backend can finish and write the assistant reply, but the TUI/webchat can remain in `pondering` for tens of seconds before rendering the final response.
Steps to reproduce
- Start OpenClaw 2026.4.12 with `codex/gpt-5.4` as the default model.
- Open a fresh TUI or webchat session.
- Send a trivial prompt such as `testing`, wait for the first reply, then send a second trivial prompt.
- Observe that the UI can remain in `pondering` for about 39 seconds after the backend has already appended the assistant reply to the session file.
Expected behavior
When the backend finishes a Codex turn, the TUI/webchat should render the final reply immediately instead of continuing to show `pondering`.
Actual behavior
In live PTY testing, the assistant reply was already written to the session file, but the TUI continued showing `pondering` for about 39 seconds before I interrupted it. On the same install, gateway startup also logged `startup model warmup failed for codex/gpt-5.4: Error: Unknown model: codex/gpt-5.4`.
OpenClaw version
2026.4.12
Operating system
Ubuntu 24.04.4 LTS on WSL
Install method
npm global
Model
codex/gpt-5.4
Provider / routing chain
openclaw -> codex app-server -> codex/gpt-5.4
Additional provider/model setup details
No response
Logs, screenshots, and evidence
Observed evidence from local testing:
- Gateway startup log: `startup model warmup failed for codex/gpt-5.4: Error: Unknown model: codex/gpt-5.4`
- In a live Codex TUI test, the backend had already written the assistant reply to the session file, but the frontend still showed `pondering` for about 39 seconds.
Relevant local session file:
- `~/.openclaw/agents/main/sessions/sessionID.jsonl`
Relevant local gateway journal:
- `journalctl --user -u openclaw-gateway`
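To quantify the backend-vs-UI gap from the evidence above, a small script can scan the session JSONL for the last assistant entry and report when the backend actually wrote the reply. This is a minimal sketch, not OpenClaw tooling; the `role` and `ts` field names are assumptions about the session file layout and may need adjusting:

```python
import json

def last_assistant_ts(lines):
    """Return the timestamp of the last assistant entry in a session
    JSONL stream, or None if no assistant entry is present.
    Assumes each line is a JSON object with "role" and "ts" fields."""
    ts = None
    for line in lines:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        if entry.get("role") == "assistant":
            ts = entry.get("ts")
    return ts

# Example with synthetic session data (real data would come from
# ~/.openclaw/agents/main/sessions/<sessionID>.jsonl):
sample = [
    '{"role": "user", "ts": 100.0}',
    '{"role": "assistant", "ts": 101.5}',
]
print(last_assistant_ts(sample))  # 101.5
```

Comparing this timestamp against the wall-clock time when the TUI finally rendered the reply gives the ~39-second delay directly, rather than eyeballing it.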
Impact and severity
Affected: TUI/webchat users running codex/gpt-5.4 on OpenClaw 2026.4.12
Severity: High
Frequency: Reproduced across multiple fresh Codex sessions before local patching
Consequence: Interactive Codex sessions appear hung or much slower than the backend completion time
Additional information
A local runtime patch that registered the started agent run id in `chat.send` removed the delayed terminal-event behavior on my install. A separate local patch that stopped skipping provider runtime hooks removed the `Unknown model: codex/gpt-5.4` warmup log on startup.
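As a hedged illustration of why registering the run id in `chat.send` could matter (this is not OpenClaw's actual code; every name below is invented), the symptom is consistent with a race where the backend's terminal event arrives before the run id is known to the event router, so the event is dropped and the UI keeps showing `pondering` until some fallback fires:

```python
class EventRouter:
    """Toy model of routing terminal events to UI sessions by run id.
    Events for unregistered run ids are silently dropped."""
    def __init__(self):
        self.known_runs = set()
        self.delivered = []
        self.dropped = []

    def register(self, run_id):
        self.known_runs.add(run_id)

    def on_terminal_event(self, run_id, payload):
        if run_id in self.known_runs:
            self.delivered.append((run_id, payload))
        else:
            self.dropped.append((run_id, payload))  # UI never sees it

router = EventRouter()

# Buggy ordering: a fast backend emits its terminal event before the
# caller records the run id, so the event is lost.
router.on_terminal_event("run-1", "final reply")
router.register("run-1")

# Patched ordering: register the started run id first, then deliver.
router.register("run-2")
router.on_terminal_event("run-2", "final reply")

print(len(router.dropped), len(router.delivered))  # 1 1
```

Under this model, moving registration ahead of the await in the send path would explain why the local patch eliminated the delay.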