API key load balancing, provider fallback, and cost-aware routing — all from the edge, behind one API.
Drop in your existing OpenAI client with a new base URL. No code changes beyond configuration.
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "https://anyrouter.dev/api/v1",
  apiKey: "sk-ar-...",
});

const completion = await openai.chat.completions.create({
  model: "google/gemini-3.1-flash-lite",
  messages: [
    { role: "user", content: "Hello!" }
  ],
});

console.log(completion.choices[0].message);

Everything teams need to route AI reliably without changing their existing app structure.
POST to /api/v1/chat/completions with any OpenAI-compatible client.
POST to /api/v1/messages for Anthropic-format requests.
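For the Anthropic format, the request body follows Anthropic's Messages schema. A minimal sketch of the payload, assuming the endpoint mirrors that schema (`max_tokens` is required there; which auth headers anyrouter expects on this route is an assumption not covered above):

```typescript
// Anthropic Messages-format body for POST /api/v1/messages.
// Field names follow Anthropic's Messages schema; whether anyrouter also
// requires Anthropic-style headers (e.g. anthropic-version) is an assumption.
const messagesRequest = {
  model: "moonshotai/kimi-k2.6",
  max_tokens: 256, // required by the Messages schema
  messages: [{ role: "user", content: "Hello!" }],
};

console.log(JSON.stringify(messagesRequest, null, 2));
```

Note the format difference from the chat completions route: `max_tokens` is mandatory and system prompts go in a top-level `system` field rather than the messages array.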
Structured outputs, tools, and agent-friendly envelopes for modern AI workflows.
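Structured outputs ride on the same chat completions request. A sketch of an OpenAI-style `response_format` payload with a `json_schema` type — whether every routed provider enforces the schema (rather than best-effort JSON) is an assumption:

```typescript
// Chat completions body requesting JSON output constrained by a schema.
// Uses the OpenAI-compatible response_format field; strict enforcement
// across all routed providers is an assumption.
const structuredRequest = {
  model: "google/gemini-3.1-flash-lite",
  messages: [
    { role: "user", content: "Extract the city: 'I live in Oslo.'" },
  ],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "city_extraction",
      schema: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  },
};

console.log(JSON.stringify(structuredRequest, null, 2));
```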
Compare routes, context windows, and per-token pricing across all providers.
Automatic provider failover when a model is down or rate-limited. Zero downtime.
Route traffic based on cost. Set preferences to balance quality and spend automatically.
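No routing config is shown on this page, so the sketch below is hypothetical: it assumes an ordered `models` preference list as an extension field on the request body for fallback, which is not confirmed by anything above.

```typescript
// HYPOTHETICAL fallback payload: the "models" preference list is an
// assumed extension field, not documented on this page.
const routedRequest = {
  model: "google/gemini-3.1-flash-lite", // primary route
  models: [
    "google/gemini-3.1-flash-lite",
    "moonshotai/kimi-k2.6", // tried if the primary is down or rate-limited
  ],
  messages: [{ role: "user", content: "Hello!" }],
};

console.log(JSON.stringify(routedRequest, null, 2));
```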
Browse the latest routes with clear provider identity and fast access to Open Models.
qwen/qwen3-embedding-0.6b
moonshotai/kimi-k2.6
google/gemini-3.1-flash-lite
google/gemma-4-26b-a4b-it

Ship with one API, one dashboard, and one edge. Keep your current integrations, switch the base URL, and route traffic with fallback built in.
curl https://anyrouter.dev/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-ar-..." \
  -d '{
    "model": "google/gemini-3.1-flash-lite",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'