Bug type
Regression (worked before, now fails)
Beta release blocker
No
Summary
OpenClaw records full session context, but the actual OpenAI-compatible provider request is sent with only the current message.
This is not a model-gateway issue, not Ollama Cloud, and not the model “forgetting”: the prior messages are absent from the API request body before the model ever sees them.
Bug Report Draft
Title: OpenAI-compatible provider requests drop prior session history while trajectory shows full context
Environment:
- OpenClaw: 2026.4.30 (68c0109)
- Source SHA: 68c0109
- Node: v24.15.0
- Channel: Telegram
- Provider API: openai-completions
- Provider: OpenAI-compatible proxy/model-gateway
Impact:
- Agents lose conversational context across Telegram messages.
- Pronoun/coreference follow-ups fail.
- Tool continuation requests can become structurally invalid because tool results are sent without the preceding assistant tool call.
- The trajectory/debug UI is misleading because it shows full context that was not actually sent to the provider.
Relevant code path:
- /root/code/openclaw-main/src/agents/pi-embedded-runner/run/attempt.ts:2863 records prompt.submitted with full activeSession.messages.
- /root/code/openclaw-main/src/agents/pi-embedded-runner/run/attempt.ts:2876 then calls activeSession.prompt(...).
- /root/code/openclaw-main/node_modules/@mariozechner/pi-agent-core/dist/agent.js:215 should snapshot existing agent.state.messages.
- /root/code/openclaw-main/node_modules/@mariozechner/pi-agent-core/dist/agent-loop.js:44 should append the new prompt to that history.
- /root/code/openclaw-main/node_modules/@mariozechner/pi-agent-core/dist/agent-loop.js:142 sends llmContext.messages to the stream function.
- /root/code/openclaw-main/src/agents/openai-transport-stream.ts:1849 converts whatever context.messages it receives and places it in the OpenAI-compatible request.
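The code path above can be distilled into a minimal sketch of the suspected failure mode: the trajectory snapshot captures the full session transcript, but the context handed to the stream function is rebuilt with only the system prompt and the current message. All names here (`Message`, `promptWithHistoryLoss`, `streamFn`) are hypothetical stand-ins, not the real pi-agent-core API.

```typescript
// Hypothetical sketch of the failure mode; not OpenClaw/pi-agent-core code.
type Message = { role: "system" | "user" | "assistant" | "tool"; content: string };

interface Trajectory {
  submittedMessages: Message[];
}

// Simulates a session that records the full transcript for prompt.submitted
// but rebuilds the LLM context from scratch before calling the stream function.
function promptWithHistoryLoss(
  history: Message[],
  newPrompt: Message,
  streamFn: (context: { messages: Message[] }) => void,
): Trajectory {
  // prompt.submitted is recorded with the FULL session transcript...
  const submittedMessages = [...history, newPrompt];
  // ...but the provider request is built with only system + current message,
  // dropping the prior user turn, assistant tool call, and tool result.
  streamFn({ messages: [history[0], newPrompt] });
  return { submittedMessages };
}

const history: Message[] = [
  { role: "system", content: "You are a helpful agent." },
  { role: "user", content: "there's a folder full of my projects at /root/code" },
  { role: "assistant", content: "Noted: /root/code" },
];

let sent: Message[] = [];
const traj = promptWithHistoryLoss(
  history,
  { role: "user", content: "just note where they are - remember that" },
  (ctx) => { sent = ctx.messages; },
);
console.log(traj.submittedMessages.length); // 4 — what the trajectory shows
console.log(sent.length);                   // 2 — what the provider receives
```

This matches the observed symptom: `messagesLen: 15` in the trajectory versus `messages: 2` in the captured request.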
Key conclusion:
The OpenAI transport appears to faithfully send the context.messages it receives. The failure is between the prompt.submitted trajectory snapshot and the actual low-level streamFn call: the full activeSession.messages visible to OpenClaw immediately before prompt submission is not the context reaching the provider.
Suggested fix/test:
Add a regression test around embedded Telegram/session prompt submission that asserts the provider stream function receives the full prior transcript, not just the current prompt. The test should compare the messages recorded in prompt.submitted with the messages passed into the OpenAI-compatible stream function for the same run.
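One possible shape for that assertion, with all names (`CapturedRun`, `assertNoHistoryDropped`) illustrative rather than taken from the OpenClaw test harness; the real test would stub the stream function in openai-transport-stream.ts and capture both sides of the comparison:

```typescript
// Illustrative sketch of the proposed regression check; names are assumptions.
type Message = { role: string; content: string };

// What an instrumented run would capture: the prompt.submitted snapshot
// and the messages actually handed to the provider stream function.
interface CapturedRun {
  promptSubmitted: Message[];
  streamFnReceived: Message[];
}

function assertNoHistoryDropped(run: CapturedRun): void {
  // The stream function must receive at least everything the trajectory
  // recorded, not just the system prompt plus the current user message.
  if (run.streamFnReceived.length < run.promptSubmitted.length) {
    throw new Error(
      `history dropped: trajectory recorded ${run.promptSubmitted.length} ` +
      `messages but streamFn received ${run.streamFnReceived.length}`,
    );
  }
  // Every tool result must be preceded by the assistant message that made
  // the tool call; a bare tool message is structurally invalid.
  run.streamFnReceived.forEach((m, i) => {
    if (m.role === "tool" && run.streamFnReceived[i - 1]?.role !== "assistant") {
      throw new Error(`bare tool message at index ${i}`);
    }
  });
}

// A capture matching the reported bug (15 recorded, 2 sent) should fail.
const buggyRun: CapturedRun = {
  promptSubmitted: new Array(15).fill({ role: "user", content: "..." }),
  streamFnReceived: [
    { role: "system", content: "..." },
    { role: "user", content: "just note where they are - remember that" },
  ],
};
let failed = false;
try { assertNoHistoryDropped(buggyRun); } catch { failed = true; }
console.log(failed); // true — the check catches the dropped history
```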
Steps to reproduce
- Start a fresh OpenClaw agent with Telegram.
- Send: first off - there's a folder full of my projects at /root/code - these all live in github. They're my main focus
- Agent calls a tool and sees /root/code.
- Send: just note where they are - remember that. They'll be needed in future
- Agent replies that it has no context for what “they” means.
Expected behavior
The model request should include prior session messages, especially the earlier /root/code message, assistant tool call, and tool result.
Actual behavior
OpenClaw’s own trajectory for run 2bd93fe8-7c66-485e-9035-2d73fcf520a0 records prompt.submitted with messagesLen: 15, including the prior /root/code user message and tool result.
But the raw OpenAI-compatible request captured by model-gateway correlation id aef8c426 contains only:
- messages: 2
- [0] system
- [1] user: "just note where they are - remember that. They'll be needed in future"
Nearby requests show the same pattern: only system + current item, including bare tool messages without the matching assistant tool call.
OpenClaw version
2026.4.30 (68c0109)
Operating system
Ubuntu
Install method
pnpm
Model
glm-5.1
Provider / routing chain
openclaw -> in-house model-gateway -> glm
Additional provider/model setup details
No response
Logs, screenshots, and evidence
Impact and severity
No response
Additional information
No response