openai-responses adapter is unusable against Azure OpenAI: every turn returns a synthetic refusal (openai-completions works)
Severity: Blocks all Azure OpenAI users on the Responses adapter.
Summary
Pointing OpenClaw at Azure OpenAI's OpenAI-v1 surface with models[].api: "openai-responses" causes every user message to come back as "I'm sorry, but I cannot assist with that request." with usage.totalTokens == 0 (the model never runs — Azure short-circuits the request). Switching the same model to "openai-completions" makes it work immediately. The trigger is OpenClaw's Sender (untrusted metadata): envelope: Azure's prompt-stage shield on the Responses surface classifies it as Indirect Prompt Injection and refuses, even with a custom RAI policy that disables Jailbreak + Indirect Attack.
Why it matters
- Azure OpenAI is a first-class deployment target. Today the Responses adapter is silently broken there — the gateway boots, the model is configured, but every reply is a synthetic refusal. There is no error in logs and no clear failure signal; users assume the model or their prompt is the problem.
- The two adapters (
openai-responses vs openai-completions) are presented as interchangeable in config. They aren't on Azure. This is a footgun.
Repro (5 minutes)
Send hello. The session jsonl shows:
{
"role": "assistant",
"content": [{"type": "text", "text": "I'm sorry, but I cannot assist with that request."}],
"usage": {"input": 0, "output": 0, "totalTokens": 0},
"stopReason": "stop"
}
Flip to "api": "openai-completions". Send hello → normal greeting.
Confirmed not a model / policy issue
A direct call to the same Azure deployment at /openai/v1/responses with the bare envelope returns HTTP 200 + usage.input_tokens=65 + "Hello! How can I help you today?". The shield only fires on the payload OpenClaw assembles for the Responses adapter (system prompt + ## Inbound Context (trusted metadata) + fenced Sender (untrusted metadata): user block + tool catalog + injected workspace files).
Source pointer
Likely culprit: src/auto-reply/reply/inbound-meta.ts — buildInboundUserContextPrefix emits a fenced Sender (untrusted metadata): JSON block on every user turn, paired with a ## Inbound Context (trusted metadata) system block from buildInboundMetaSystemPrompt. On Azure OpenAI's Responses surface, this two-block fingerprint trips an undocumented prompt-stage shield that fires even when the visible Indirect Attack RAI filter is set non-blocking. The same envelope works on openai-completions and on direct AOAI Responses calls without the system-block pairing.
Suggested fixes (any one unblocks Azure users)
- Add
agents.defaults.senderEnvelope: "off" config to opt out of the Sender (untrusted metadata) wrapper for single-tenant trusted-channel deployments (e.g. WebChat-only behind Easy Auth) where there is no untrusted sender.
- On the Responses adapter, move sender metadata out of user-content (e.g. into the Responses API
metadata field or a non-text item) so it doesn't read as injection-shaped.
- Document Azure OpenAI compatibility and recommend
openai-completions until 1 or 2 lands.
Environment
- OpenClaw: latest (
npm install -g openclaw@latest, image built 2026-04-24)
- Azure OpenAI:
gpt-5-mini 2025-08-07 GlobalStandard, OpenAI-v1 surface, managed-identity auth
- Runtime: Node 24 / Linux / Azure Container Apps
openai-responsesadapter is unusable against Azure OpenAI: every turn returns a synthetic refusal (openai-completionsworks)Severity: Blocks all Azure OpenAI users on the Responses adapter.
Summary
Pointing OpenClaw at Azure OpenAI's OpenAI-v1 surface with
models[].api: "openai-responses"causes every user message to come back as"I'm sorry, but I cannot assist with that request."withusage.totalTokens == 0(the model never runs — Azure short-circuits the request). Switching the same model to"openai-completions"makes it work immediately. The trigger is OpenClaw'sSender (untrusted metadata):envelope: Azure's prompt-stage shield on the Responses surface classifies it as Indirect Prompt Injection and refuses, even with a custom RAI policy that disables Jailbreak + Indirect Attack.Why it matters
openai-responsesvsopenai-completions) are presented as interchangeable in config. They aren't on Azure. This is a footgun.Repro (5 minutes)
Send
hello. The session jsonl shows:{ "role": "assistant", "content": [{"type": "text", "text": "I'm sorry, but I cannot assist with that request."}], "usage": {"input": 0, "output": 0, "totalTokens": 0}, "stopReason": "stop" }Flip to
"api": "openai-completions". Sendhello→ normal greeting.Confirmed not a model / policy issue
A direct call to the same Azure deployment at
/openai/v1/responseswith the bare envelope returns HTTP 200 +usage.input_tokens=65+"Hello! How can I help you today?". The shield only fires on the payload OpenClaw assembles for the Responses adapter (system prompt +## Inbound Context (trusted metadata)+ fencedSender (untrusted metadata):user block + tool catalog + injected workspace files).Source pointer
Likely culprit:
src/auto-reply/reply/inbound-meta.ts—buildInboundUserContextPrefixemits a fencedSender (untrusted metadata):JSON block on every user turn, paired with a## Inbound Context (trusted metadata)system block frombuildInboundMetaSystemPrompt. On Azure OpenAI's Responses surface, this two-block fingerprint trips an undocumented prompt-stage shield that fires even when the visibleIndirect AttackRAI filter is set non-blocking. The same envelope works onopenai-completionsand on direct AOAI Responses calls without the system-block pairing.Suggested fixes (any one unblocks Azure users)
agents.defaults.senderEnvelope: "off"config to opt out of theSender (untrusted metadata)wrapper for single-tenant trusted-channel deployments (e.g. WebChat-only behind Easy Auth) where there is no untrusted sender.metadatafield or a non-text item) so it doesn't read as injection-shaped.openai-completionsuntil 1 or 2 lands.Environment
npm install -g openclaw@latest, image built 2026-04-24)gpt-5-mini2025-08-07 GlobalStandard, OpenAI-v1 surface, managed-identity auth