Features
Add Gemini-compatible /v1beta/models endpoints for local gemini-cli usage, including generateContent, streamGenerateContent, and countTokens
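gemini-cli addresses models with paths of the form `/v1beta/models/{model}:{action}`, where the action is one of the three listed above. A minimal sketch of that path shape (the `geminiPath` helper is hypothetical, for illustration only, not the project's code):

```go
package main

import "fmt"

// geminiPath builds a Gemini-style /v1beta route for a model and action,
// mirroring the paths gemini-cli calls: generateContent,
// streamGenerateContent, and countTokens.
func geminiPath(model, action string) string {
	return fmt.Sprintf("/v1beta/models/%s:%s", model, action)
}

func main() {
	fmt.Println(geminiPath("gemini-1.5-pro", "generateContent"))
	// → /v1beta/models/gemini-1.5-pro:generateContent
}
```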
Expose the full upstream model list on the Gemini /v1beta/models surface instead of limiting the listing to a small allowlist
Add smart fallback routing between /v1/chat/completions and /v1/responses, so a request still succeeds when the target model supports only one of the two OpenAI-compatible endpoints
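One way to sketch the selection side of this: consult known model capabilities and pick the endpoint accordingly. The `supportsChat` map below is a hypothetical stand-in for whatever capability data the proxy actually keeps, and the real implementation may instead retry the other endpoint on an upstream error:

```go
package main

import "fmt"

// supportsChat is a hypothetical capability table: true means the model
// works on /v1/chat/completions, false means it is responses-only.
var supportsChat = map[string]bool{
	"gpt-4o":       true,
	"o1-responses": false, // hypothetical responses-only model
}

// chooseEndpoint prefers /v1/chat/completions and falls back to
// /v1/responses only when the model is known not to support chat.
func chooseEndpoint(model string) string {
	if ok, known := supportsChat[model]; known && !ok {
		return "/v1/responses"
	}
	return "/v1/chat/completions"
}

func main() {
	fmt.Println(chooseEndpoint("gpt-4o"))       // → /v1/chat/completions
	fmt.Println(chooseEndpoint("o1-responses")) // → /v1/responses
}
```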
Improve request conversion between the two OpenAI-compatible endpoints, including better handling of system instructions, structured output, tool choice, reasoning state, and previous_response_id
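One concrete piece of that conversion: the Responses API takes system guidance as a top-level `instructions` field rather than a `system` message in the messages array. A minimal sketch of that step, with `Message` and `splitSystem` as pared-down hypothetical stand-ins for the real request types:

```go
package main

import "fmt"

// Message is a simplified chat message for illustration.
type Message struct{ Role, Content string }

// splitSystem pulls the first system message out of a chat-style messages
// array so it can be forwarded as the Responses API's top-level
// "instructions" field, returning the remaining messages unchanged.
func splitSystem(msgs []Message) (instructions string, rest []Message) {
	for _, m := range msgs {
		if m.Role == "system" && instructions == "" {
			instructions = m.Content
			continue
		}
		rest = append(rest, m)
	}
	return
}

func main() {
	ins, rest := splitSystem([]Message{
		{Role: "system", Content: "Be brief."},
		{Role: "user", Content: "Hi"},
	})
	fmt.Println(ins, len(rest)) // → Be brief. 1
}
```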
Improve Claude Code native /v1/messages compatibility by removing unsupported passthrough fields before forwarding requests upstream
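The shape of that fix is a sanitizing pass over the decoded request before it is forwarded. The field names below are illustrative examples, not the exact set the project strips:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stripUnsupported deletes client-side passthrough fields that the upstream
// /v1/messages endpoint would reject. The listed keys are hypothetical
// examples of such fields.
func stripUnsupported(req map[string]any) map[string]any {
	for _, k := range []string{"metadata", "cache_control"} {
		delete(req, k)
	}
	return req
}

func main() {
	raw := []byte(`{"model":"claude-3-5-sonnet","metadata":{"user_id":"x"},"max_tokens":64}`)
	var req map[string]any
	if err := json.Unmarshal(raw, &req); err != nil {
		panic(err)
	}
	out, _ := json.Marshal(stripUnsupported(req))
	fmt.Println(string(out)) // → {"max_tokens":64,"model":"claude-3-5-sonnet"}
}
```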
Add AmpCode support: chat completions via /amp/v1/* and /api/provider/* route through Copilot API; management routes (/api/*) and login redirects reverse-proxy to ampcode.com
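The routing split described above can be sketched as a simple prefix dispatch; note that `/api/provider/` must be checked before the broader `/api/` prefix. This is an illustrative sketch of the decision, not the project's actual router:

```go
package main

import (
	"fmt"
	"strings"
)

// routeAmp decides where an AmpCode request goes: chat traffic under
// /amp/v1/ and /api/provider/ is served by the Copilot API, while other
// /api/ management routes and login redirects are reverse-proxied to
// ampcode.com.
func routeAmp(path string) string {
	switch {
	case strings.HasPrefix(path, "/amp/v1/"), strings.HasPrefix(path, "/api/provider/"):
		return "copilot"
	case strings.HasPrefix(path, "/api/"):
		return "ampcode.com"
	default:
		return "ampcode.com" // login redirects and everything else
	}
}

func main() {
	fmt.Println(routeAmp("/amp/v1/chat/completions")) // → copilot
	fmt.Println(routeAmp("/api/user"))                // → ampcode.com
}
```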
Bug Fixes
Fix Docker image crash (exec /copilot2api: no such file or directory): the scratch image was shipping a dynamically linked binary, so CGO_ENABLED=0 is now set during CI cross-compilation
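A `scratch` image has no libc, so only a statically linked binary can run in it; disabling cgo makes the Go toolchain produce one. A hedged GitHub Actions sketch of the build step (step name and matrix values are illustrative, not the project's actual workflow):

```yaml
# Illustrative CI step; the real workflow may differ.
- name: Build static binary
  env:
    CGO_ENABLED: "0"          # static binary: no libc needed in scratch
    GOOS: linux
    GOARCH: ${{ matrix.arch }}
  run: go build -o copilot2api .
```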
Fix Docker multi-arch build: arm64 image was shipping the amd64 binary
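The usual cause of that bug is a Dockerfile that ignores the target platform and compiles for the build host. A hedged multi-stage sketch that honors buildx's `TARGETOS`/`TARGETARCH` build args (paths and Go version are illustrative):

```dockerfile
# Build with: docker buildx build --platform linux/amd64,linux/arm64 .
FROM --platform=$BUILDPLATFORM golang:1.22 AS build
ARG TARGETOS TARGETARCH
WORKDIR /src
COPY . .
# Honor the target platform so the arm64 image gets an arm64 binary.
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /copilot2api .

FROM scratch
COPY --from=build /copilot2api /copilot2api
ENTRYPOINT ["/copilot2api"]
```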