Python and TypeScript SDKs — plus LangChain, LlamaIndex, React, and Vercel AI SDK integrations — for Agent Command Center, Future AGI's open-source, OpenAI-compatible AI gateway.
Quickstart · Gateway repo · Docs · Try Cloud (Free) · Discord · Discussions
This repo ships the client SDKs for the Agent Command Center. The gateway itself — the Go service that handles routing, caching, guardrails, cost tracking, and the OpenAI-compatible endpoint — lives in the future-agi/future-agi platform monorepo. These SDKs give you typed Python and TypeScript clients, plus integrations for LangChain, LlamaIndex, React, and the Vercel AI SDK.
If you already know OpenAI's SDK, you already know these. Swap `OpenAI` for `AgentCC`, point `base_url` at the gateway, and every gateway feature — multi-provider routing, semantic caching, inline guardrails, per-key budgets — is available through the same call.
| Package | Runtime | Install | Purpose |
|---|---|---|---|
| `agentcc` | Python 3.9+ | `pip install agentcc` | Core Python client — sync + async, streaming, tools, structured output |
| `@agentcc/client` | Node 18+ | `npm install @agentcc/client` | Core TypeScript client — ESM + CJS, fully typed |
| `@agentcc/langchain` | Node 18+ | `npm install @agentcc/langchain` | Drop-in `ChatOpenAI` replacement for LangChain.js |
| `@agentcc/llamaindex` | Node 18+ | `npm install @agentcc/llamaindex` | LLM + embedding integration for LlamaIndex.TS |
| `@agentcc/react` | Node 18+ | `npm install @agentcc/react` | React context + hooks for chat UIs |
| `@agentcc/vercel` | Node 18+ | `npm install @agentcc/vercel` | Vercel AI SDK provider |
- OpenAI-compatible surface. Chat, completions, embeddings, images, audio, moderations, files, batches, rerank, responses — same shape as OpenAI's SDK. Migrating is a one-line change.
- 100+ providers through one endpoint. OpenAI, Anthropic, Google, Vertex AI, Bedrock, Azure, Groq, Together, Mistral, Fireworks, Ollama, vLLM — whatever you pick, your SDK call doesn't change.
- 15 routing strategies, surfaced per request. Fallback chains, shadow traffic, latency-aware routing, cost-optimised selection, circuit breakers — configured via a typed `config` option or gateway-side virtual keys.
- Streaming, tool calling, structured output. Iterator patterns in Python, async iterables in TypeScript — both fully typed.
- Inline guardrails + cost tracking. Every request can carry a guardrail policy and a budget header. The gateway enforces both; the SDK surfaces the results.
- Framework integrations that don't rewrap. `@agentcc/langchain` is a genuine `BaseChatModel`. `@agentcc/vercel` is a real AI SDK provider. `@agentcc/llamaindex` implements LlamaIndex's `LLM` and `BaseEmbedding`. Use them the way you use the originals.
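Because the surface matches OpenAI's, tool definitions use the same JSON-schema shape as OpenAI's function-calling format. A minimal sketch of a tool payload (the `get_weather` function here is hypothetical, purely for illustration):

```python
# A tool definition in the OpenAI-compatible format the gateway accepts.
# get_weather is a hypothetical function used only to show the shape.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Passed to the client as: tools=[weather_tool] on
# client.chat.completions.create(...), exactly as with OpenAI's SDK.
```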
Get a key at app.futureagi.com (free tier available) or self-host the gateway — then:
**Python** (`pip install agentcc`)

```python
import os

from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
```
**TypeScript** (`npm install @agentcc/client`)

```typescript
import { AgentCC } from "@agentcc/client";

const client = new AgentCC({
  apiKey: process.env.AGENTCC_API_KEY,
  baseUrl: "https://gateway.futureagi.com/v1",
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);
```
Set `AGENTCC_API_KEY` and `AGENTCC_BASE_URL` in your environment and both clients pick them up automatically.
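As an illustration, the variables can also be set from Python before constructing a client (normally you would export them in your shell or a `.env` file; the key value below is a placeholder):

```python
import os

# With these set, the clients read them automatically when api_key /
# base_url are omitted from the constructor. The key is a placeholder.
os.environ["AGENTCC_API_KEY"] = "your-key-here"
os.environ["AGENTCC_BASE_URL"] = "https://gateway.futureagi.com/v1"
```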
Everything the gateway supports — routing strategies, caching, guardrails, budgets — is available per request via a `config` option. No separate API to learn.
```typescript
import { AgentCC, type GatewayConfig } from "@agentcc/client";

const config: GatewayConfig = {
  strategy: "fallback",
  targets: [
    { provider: "openai", model: "gpt-4o" },
    { provider: "anthropic", model: "claude-sonnet-4-20250514" },
  ],
};

const client = new AgentCC({
  apiKey: process.env.AGENTCC_API_KEY,
  baseUrl: "https://gateway.futureagi.com/v1",
  config,
});
```

Gateway docs → · Routing strategies → · Guardrails →
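The Python client takes the same per-request configuration. A sketch mirroring the fallback chain above, assuming the Python SDK accepts a plain dict for `config` (check the Python SDK reference for the exact signature):

```python
# Fallback chain: try gpt-4o on OpenAI first, then fall back to
# Claude on Anthropic. Shape mirrors the TypeScript GatewayConfig.
config = {
    "strategy": "fallback",
    "targets": [
        {"provider": "openai", "model": "gpt-4o"},
        {"provider": "anthropic", "model": "claude-sonnet-4-20250514"},
    ],
}

# Hypothetical usage (constructor args as in the quickstart):
# client = AgentCC(api_key=..., base_url=..., config=config)
```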
| Package | What you get |
|---|---|
| `@agentcc/langchain` | `ChatAgentCC` (drop-in `ChatOpenAI`), `AgentCCEmbeddings`, `AgentCCCallbackHandler` for unified observability across LangChain + gateway |
| `@agentcc/llamaindex` | `AgentCCLLM`, `AgentCCEmbedding` — pass them to any LlamaIndex pipeline that accepts an LLM or embedding model |
| `@agentcc/react` | `AgentCCProvider`, `useAgentCCChat` (streaming), `useAgentCCCompletion`, `useAgentCCObject` (structured output) |
| `@agentcc/vercel` | `createAgentCC()` provider for `generateText` / `streamText` — tools, structured output, and multi-step loops pass through |
Each integration has its own README with full examples.
These SDKs are one slice of the Future AGI platform — an open-source stack for making AI agents reliable. You can use them standalone against any Agent Command Center deployment, or alongside the rest.
| Repo | What it is |
|---|---|
| future-agi/future-agi | Platform monorepo — the gateway itself, evaluations, simulations, tracing, prompt optimization |
| future-agi/traceAI | OpenTelemetry-native instrumentation for 50+ AI frameworks |
| future-agi/ai-evaluation | 50+ evaluation metrics + guardrail scanners |
| future-agi/agent-opt | Six prompt-optimization algorithms (GEPA, PromptWizard, and more) |
- Python 3.9+ — for `agentcc`
- Node 18+ — for all `@agentcc/*` packages
- An Agent Command Center endpoint — either Future AGI Cloud (free tier) or a self-hosted gateway
- Full docs — product overview, concepts, tutorials
- Python SDK reference
- TypeScript SDK reference
- Gateway docs — routing, caching, guardrails, budgets
- Cookbook — end-to-end recipes
Contributions welcome — bug fixes, new framework integrations, examples, docs improvements, anything.
- Browse `good first issue` issues
- Read the Contributing Guide
- Say hi on Discord or Discussions
Security reports: see SECURITY.md.
| Channel | Purpose |
|---|---|
| 💬 Discord | Real-time help from the team and community |
| 🗨️ GitHub Discussions | Ideas, questions, roadmap input |
| 🐦 Twitter / X | Release announcements |
| 📝 Blog | Engineering & research posts |
| 📧 support@futureagi.com | Cloud account / billing |
| 🔐 security@futureagi.com | Private vulnerability disclosure (24 h ack — see SECURITY.md) |
Apache License 2.0 — see LICENSE.
Made by the Future AGI team and contributors.
