Bug type
Regression (worked before, now fails)
Summary
When selecting a non-Anthropic model from the model dropdown in the dashboard,
the gateway receives the wrong provider prefix (anthropic/) instead of the
selected model's actual provider prefix (e.g. google/).
Environment
- OpenClaw version: 2026.3.13 (61d171a)
- OS: macOS (Apple Silicon, M4 Mac mini)
- Install method: npm (sudo npm install -g openclaw)
Steps to Reproduce
- Configure both Anthropic and Google (Gemini) models in openclaw.json
- Open the dashboard (openclaw dashboard)
- Click the model dropdown
- Select google/gemini-2.5-flash
Expected Behavior
Gateway receives: google/gemini-2.5-flash
Actual Behavior
Gateway receives: anthropic/gemini-2.5-flash
Error Message
Failed to set model: GatewayRequestError: model not allowed: anthropic/gemini-2.5-flash
Additional Notes
- Same issue occurs with lmstudio/qwen3.5-4b-mlx → becomes anthropic/qwen3.5-4b-mlx
- The config file (openclaw.json) is correct with proper prefixes
- The models work correctly when called directly via API
- The bug appears to be in the dashboard UI, which hardcodes the anthropic/
  prefix regardless of the selected model's actual provider
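The dashboard source is not shown in this report, but a handler like the sketch below would produce exactly the observed behavior. All names (ModelChoice, buggyGatewayModel, fixedGatewayModel) are illustrative, not actual OpenClaw identifiers:

```typescript
// Hypothetical sketch of the suspected bug. The selected model carries its
// own provider, but the buggy path ignores it and hardcodes "anthropic/".

interface ModelChoice {
  provider: string; // e.g. "google", "lmstudio", "anthropic"
  id: string;       // e.g. "gemini-2.5-flash"
}

// Buggy pattern: provider prefix is hardcoded, so every selection reaches
// the gateway as anthropic/<model-id>.
function buggyGatewayModel(choice: ModelChoice): string {
  return `anthropic/${choice.id}`;
}

// Expected pattern: the prefix comes from the selected model's provider.
function fixedGatewayModel(choice: ModelChoice): string {
  return `${choice.provider}/${choice.id}`;
}

const selection: ModelChoice = { provider: "google", id: "gemini-2.5-flash" };
console.log(buggyGatewayModel(selection)); // "anthropic/gemini-2.5-flash" (observed)
console.log(fixedGatewayModel(selection)); // "google/gemini-2.5-flash" (expected)
```

This matches the lmstudio/qwen3.5-4b-mlx case as well: only the model id survives, and the provider segment is replaced with anthropic/.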
Model
google/gemini-2.5-flash
Provider / routing chain
OpenClaw dashboard UI → OpenClaw gateway (local, port 18789) → Google Generative AI API
Additional provider/model setup details
Google Gemini configured in openclaw.json under models.providers.google with api: "google-generative-ai".
Also reproduced with lmstudio/qwen3.5-4b-mlx (local LM Studio model).
Dashboard sends anthropic/ prefix for all non-Anthropic models.
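For reference, a minimal openclaw.json sketch consistent with the setup described above. Only models.providers.google and api: "google-generative-ai" are taken from this report; all other keys and values are assumptions about the config shape:

```json
{
  "models": {
    "providers": {
      "anthropic": {
        "api": "anthropic"
      },
      "google": {
        "api": "google-generative-ai"
      },
      "lmstudio": {
        "api": "openai-compatible"
      }
    }
  }
}
```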
Logs, screenshots, and evidence
No response
Impact and severity
No response
Additional information
No response