Summary
The built-in microsoft-foundry provider can discover/select Anthropic Claude deployments from Azure AI Foundry, but it still normalizes them onto the OpenAI-compatible runtime path (/openai/v1, chat/completions or responses) instead of the Anthropic Foundry path (/anthropic/v1/messages).
This makes Claude deployments appear selectable in onboarding/config, but real requests fail at runtime (commonly 404).
Why this looks like a bug in the built-in provider
Microsoft's Claude-on-Foundry docs expect:
- base URL: https://<resource>.services.ai.azure.com/anthropic
- request path: /anthropic/v1/messages
- API shape: Anthropic Messages API
- auth:
  - API key: x-api-key
  - Entra ID: Authorization: Bearer <token>
- Anthropic header: anthropic-version: 2023-06-01
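The two documented auth modes can be sketched as a small header builder (an illustrative helper, not OpenClaw code):

```typescript
// Illustrative header builder for the documented Anthropic-on-Foundry
// auth modes; the type and function names are assumptions.
type FoundryAuth = { apiKey: string } | { bearerToken: string };

function anthropicFoundryHeaders(auth: FoundryAuth): Record<string, string> {
  const headers: Record<string, string> = {
    "anthropic-version": "2023-06-01", // required Anthropic header
    "content-type": "application/json",
  };
  if ("apiKey" in auth) {
    headers["x-api-key"] = auth.apiKey; // API-key auth
  } else {
    headers["authorization"] = `Bearer ${auth.bearerToken}`; // Entra ID auth
  }
  return headers;
}
```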
But the built-in microsoft-foundry provider currently behaves like an OpenAI-compatible Foundry adapter:
dist/provider-CbANfJPO.js
- capabilities: { providerFamily: "openai" }

dist/shared-Jirm7-bE.js
- Foundry API resolution only supports openai-completions / openai-responses
- endpoint normalization/building rewrites to .../openai/v1

dist/onboard-BZJYM71H.js
- connection test only calls /chat/completions or /responses

dist/runtime-BEDfp2bA.js
- Entra token refresh is wired, but the rebuilt base URL still comes from the OpenAI-style Foundry base URL builder
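In other words, base-URL resolution never branches on the deployment's API family. A hypothetical fix could look like this (the type and function names are illustrative, not the actual dist internals):

```typescript
// Hypothetical Foundry base-URL resolution keyed on API family.
// Today the built-in provider effectively always takes the OpenAI branch.
type FoundryApi = "openai-completions" | "openai-responses" | "anthropic-messages";

function foundryBaseUrl(resource: string, api: FoundryApi): string {
  if (api === "anthropic-messages") {
    // Anthropic deployments live under /anthropic, not /openai/v1
    return `https://${resource}.services.ai.azure.com/anthropic`;
  }
  return `https://${resource}.services.ai.azure.com/openai/v1`;
}
```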
So the provider can successfully:
- authenticate with Azure CLI / Entra ID
- list Foundry resources
- list deployments
- let the user select Claude deployments
but then still send the wrong protocol/route for Anthropic deployments.
Repro
- Run OpenClaw onboarding/configure.
- Select Microsoft Foundry.
- Choose Azure CLI login (az login already completed).
- Select a Claude deployment from Azure AI Foundry, e.g. claude-opus-4-6.
- Use that model in a real run.
Observed behavior
The configured model is routed through OpenAI-compatible Foundry endpoints instead of Anthropic Foundry endpoints, and requests fail (for example with 404).
Expected behavior
One of these should happen:
Option A: proper Anthropic support in microsoft-foundry
If the selected deployment is Anthropic/Claude, the provider should:
- support api: "anthropic-messages"
- build the base URL as https://<resource>.services.ai.azure.com/anthropic
- call /v1/messages
- send anthropic-version: 2023-06-01
- use x-api-key for API key auth and Authorization: Bearer <token> for Entra ID auth
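Picking the right branch first requires classifying the deployment. A minimal sketch, assuming the model name is the only available signal (a heuristic, not OpenClaw code):

```typescript
// Heuristic API-family classification from the deployment's model name.
// A real fix would preferably key off deployment metadata (publisher or
// model format) from the Foundry API, if that metadata is exposed.
function apiForDeployment(modelName: string): "anthropic-messages" | "openai-responses" {
  return /claude/i.test(modelName) ? "anthropic-messages" : "openai-responses";
}
```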
Option B: guardrail until support exists
If Anthropic deployments are not yet supported by the built-in provider, onboarding should not offer them as selectable deployments, or should clearly warn that only OpenAI-compatible Foundry APIs are currently supported.
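Option B could be as small as a filter in the deployment picker. A sketch with illustrative names (the warning text is a suggestion, not existing output):

```typescript
// Hypothetical onboarding guardrail: split deployments into ones the
// OpenAI-compatible path can serve and ones it cannot (Claude).
interface Deployment {
  name: string;
  model: string;
}

function splitSelectable(all: Deployment[]): {
  selectable: Deployment[];
  unsupported: Deployment[];
} {
  const selectable: Deployment[] = [];
  const unsupported: Deployment[] = [];
  for (const d of all) {
    (/claude/i.test(d.model) ? unsupported : selectable).push(d);
  }
  return { selectable, unsupported };
}

// unsupported entries would drive a warning such as:
// "claude-opus-4-6 requires the Anthropic Foundry API, which this
//  provider does not support yet."
```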
Actual user impact
This is especially confusing because:
- the Azure CLI login flow works
- resource + deployment discovery works
- Claude deployments are shown as selectable
- the runtime failure only appears later when making actual requests
So it looks like configuration succeeded even though the provider/runtime protocol is wrong for that deployment family.
Related but distinct issue
This report is specifically about the microsoft-foundry provider selecting Anthropic deployments but routing them via the wrong API family.
Workaround
Current workaround is to use a custom provider pointed at the Anthropic Foundry endpoint instead of the built-in microsoft-foundry provider, e.g.:
- base URL: https://<resource>.services.ai.azure.com/anthropic
- API: anthropic-messages
API-key auth works most cleanly there today.
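To sanity-check that endpoint before wiring it into a custom provider, a direct Messages request can be built like this (resource, key, and deployment name are placeholders; the helper is illustrative):

```typescript
// Builds a minimal Anthropic Messages probe against the Foundry /anthropic
// path. Send it with fetch: 200 confirms routing, 404 indicates the wrong path.
function buildMessagesProbe(resource: string, apiKey: string, model: string) {
  return {
    url: `https://${resource}.services.ai.azure.com/anthropic/v1/messages`,
    init: {
      method: "POST",
      headers: {
        "x-api-key": apiKey,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
      },
      body: JSON.stringify({
        model,
        max_tokens: 32,
        messages: [{ role: "user", content: "ping" }],
      }),
    },
  };
}

// Usage (Node 18+ for global fetch):
// const { url, init } = buildMessagesProbe("my-res", process.env.FOUNDRY_KEY!, "claude-opus-4-6");
// const res = await fetch(url, init);
```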