$5 free credit on signup · No card required to start · Get a key →
api.anthropic.com · stable · v1

The Claude API.
Built for builders.

Every Claude model - Opus, Sonnet, Haiku - through one stable interface. Tool use, vision, batch, streaming, and prompt caching, in three official SDKs. $5 free credit when you sign up.

60-second signup · $5 free credit · 3 SDKs · 1 shape
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Hello, Claude!"
    }]
)

print(response.content[0].text)
# → "Hello! How can I help today?"
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

const response = await client.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  messages: [{
    role: "user",
    content: "Hello, Claude!"
  }]
});

console.log(response.content[0].text);
$ curl https://api.anthropic.com/v1/messages \
   -H "x-api-key: $ANTHROPIC_API_KEY" \
   -H "anthropic-version: 2023-06-01" \
   -H "content-type: application/json" \
   -d '{
     "model": "claude-sonnet-4-6",
     "max_tokens": 1024,
     "messages": [
       {"role": "user", "content": "Hello, Claude!"}
     ]
   }'
One shape. Every language. Every model. All SDKs →
// quickstart

From signup to first call: three steps.

No setup wizard. No CLI install. Generate a key, set an env var, paste five lines of code. Most people are streaming tokens in under five minutes.

— 01 · Sign up

Get a key.

Sign in at console.anthropic.com with email or Google. New accounts get $5 in credit, no payment method required.

# Free $5 credit on signup
console.anthropic.com
— 02 · Install

One SDK.

Pick Python or TypeScript, or use cURL directly. Anthropic also publishes Java and Go SDKs, and community libraries exist for Ruby, Rust, and more.

# Pick your language
$ pip install anthropic
— 03 · Call

Five lines.

Set the env var, instantiate the client, send a message. The shape stays identical across every Claude model and every SDK.

# It's really this short
$ python first_call.py
// capabilities

One API. Every primitive you need.

Modern AI workloads need more than text-in, text-out. The Claude API ships the primitives engineers actually reach for - and they all work together.

Streaming

Server-sent events stream tokens as they're generated. Time-to-first-token in the low hundreds of milliseconds.

stream: true

Tool use

Function calling with structured args. Define tools as JSON schemas; Claude picks, fills, and chains them.

tools: [...]
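
A sketch of the loop: a tool definition in the documented name / description / `input_schema` shape, plus a hypothetical `dispatch` helper that turns a `tool_use` block into the `tool_result` you send back:

```python
import json

# A tool definition in the shape the API expects.
get_weather = {
    "name": "get_weather",
    "description": "Get current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# When Claude calls a tool, the response contains a content block
# like this (values here are illustrative):
tool_use_block = {
    "type": "tool_use",
    "id": "toolu_01",
    "name": "get_weather",
    "input": {"city": "Berlin"},
}

def dispatch(block: dict, handlers: dict) -> dict:
    """Route a tool_use block to a local handler and build the
    tool_result block for the next user message."""
    result = handlers[block["name"]](**block["input"])
    return {
        "type": "tool_result",
        "tool_use_id": block["id"],
        "content": json.dumps(result),
    }

result_block = dispatch(
    tool_use_block, {"get_weather": lambda city: {"city": city, "temp_c": 18}}
)
print(result_block["content"])  # → {"city": "Berlin", "temp_c": 18}
```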

Vision

Image inputs natively — PDFs, screenshots, charts, diagrams. Same models, same endpoint, more context.

type: "image"
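
The image content block is plain JSON with a base64 source. A minimal sketch (the helper and sample bytes are illustrative; real calls use actual image data):

```python
import base64

def image_block(image_bytes: bytes, media_type: str = "image/png") -> dict:
    """Build an image content block in the documented base64 source shape."""
    return {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": media_type,
            "data": base64.standard_b64encode(image_bytes).decode("ascii"),
        },
    }

# A real PNG's bytes would go here; any bytes illustrate the shape.
block = image_block(b"\x89PNG...")
message = {
    "role": "user",
    "content": [block, {"type": "text", "text": "What is in this image?"}],
}
print(block["source"]["media_type"])  # → image/png
```

Images and text mix freely in the same `content` list, so "describe this chart" is one message.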

Batch API

Async processing for large workloads at half the cost. Ideal for backfills and overnight pipelines.

— 50% cost
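
A sketch of a batch payload: each entry pairs a `custom_id` with the same `params` a single Messages call takes. The assembled list then goes to the batches endpoint (`client.messages.batches.create(requests=...)` in current Python SDKs):

```python
# Hypothetical backfill: one batch entry per document summary.
prompts = ["Summarize doc 1", "Summarize doc 2"]

requests = [
    {
        "custom_id": f"doc-{i}",  # your key for matching results later
        "params": {
            "model": "claude-haiku-4-5",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": p}],
        },
    }
    for i, p in enumerate(prompts, start=1)
]
print(len(requests), requests[0]["custom_id"])  # → 2 doc-1
```

Results come back asynchronously, keyed by `custom_id`, at half the synchronous price.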

Prompt caching

Cache static prompt prefixes (system messages, tool defs, RAG context) for up to 90% cost savings.

— 90% cost
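
Caching is opt-in per content block: mark the static prefix with `cache_control`. A sketch of the request body (the long system prompt is placeholder text):

```python
# Anything above the cache_control marker is cached; repeated calls
# that reuse the same prefix read it back at a steep discount.
LONG_SYSTEM_PROMPT = "You are a support agent for Acme Corp. ..." * 100

request = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": LONG_SYSTEM_PROMPT,
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [{"role": "user", "content": "Where is my order?"}],
}
print(request["system"][0]["cache_control"])  # → {'type': 'ephemeral'}
```

Only the user turn changes between calls; the expensive prefix is paid for once.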

Structured output

Reliable JSON via tool use, plus assistant-message prefill techniques. No more regex hacks.

strict JSON
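
The most reliable route to strict JSON: define a tool whose `input_schema` is the output shape you want, then force it with `tool_choice`. A sketch with a hypothetical `record_invoice` tool:

```python
# The schema IS your output contract: the tool_use block Claude emits
# carries an `input` object matching it.
record_invoice = {
    "name": "record_invoice",
    "description": "Record one extracted invoice.",
    "input_schema": {
        "type": "object",
        "properties": {
            "vendor": {"type": "string"},
            "total": {"type": "number"},
            "currency": {"type": "string"},
        },
        "required": ["vendor", "total", "currency"],
    },
}

request = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 512,
    "tools": [record_invoice],
    # Forcing this specific tool guarantees a schema-shaped response:
    "tool_choice": {"type": "tool", "name": "record_invoice"},
    "messages": [{"role": "user", "content": "Invoice: Acme GmbH, 99.50 EUR"}],
}
print(request["tool_choice"]["name"])  # → record_invoice
```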

MCP integration

Native support for the Model Context Protocol. Plug Claude into your existing tool ecosystem in one config.

mcp_servers: [...]

200K context

200,000-token context window across all current models. Drop in entire codebases, books, or RAG corpora.

200,000 tokens
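
A rough pre-flight budget check before stuffing a corpus into one request. The 4-characters-per-token ratio is a common English-text heuristic, not an exact count, so treat this as an estimate and use the API's token-counting support for real numbers:

```python
def fits_in_context(
    text: str, budget_tokens: int = 200_000, reply_tokens: int = 4_096
) -> bool:
    """Estimate whether `text` plus a reply fits in the context window.

    ~4 characters per token is a rough heuristic for English prose.
    """
    estimated = len(text) // 4
    return estimated + reply_tokens <= budget_tokens

# 500,000 characters ≈ 125,000 tokens: comfortably inside 200K.
print(fits_in_context("word " * 100_000))  # → True
```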
// models & pricing

Three tiers. Same shape.

Swap models by changing one string. Same SDK, same endpoints, same response format - pick the intelligence you need at the price you want.

Model        Tag       API string           Input ($ / M)   Output ($ / M)   Best for
Opus 4.7     Newest    claude-opus-4-7      $15.00          $75.00           Hardest reasoning, agents
Sonnet 4.6   Default   claude-sonnet-4-6    $3.00           $15.00           Most everyday tasks
Haiku 4.5    Fastest   claude-haiku-4-5     $0.25           $1.25            High volume, low latency
50%
Batch API discount

Cut input + output cost in half on async workloads — ideal for backfills, evals, and overnight jobs.

90%
Prompt caching savings

Cache long system prompts, tool defs, or RAG context. Repeated calls cost a fraction of full-price input.

$5
Free credit on signup

Enough for thousands of Haiku calls or hundreds of Sonnet calls. Spend it however you want.

Generate a key.
First call in five minutes.

Sign in, click "Create Key," paste it into your env. Whatever you ship next is one Anthropic SDK call away from being smarter.

Get API Key $5 free credit · 60-second signup
// sdks

Three official SDKs. One call signature.

First-party libraries for Python, TypeScript, and direct HTTP via cURL. Streaming, tool use, vision, and batch — all supported identically across each.

Python

PIP
# Install
$ pip install anthropic

# Authenticate
$ export ANTHROPIC_API_KEY="sk-ant-..."

# First call
import anthropic
client = anthropic.Anthropic()
msg = client.messages.create(
  model="claude-sonnet-4-6",
  max_tokens=512,
  messages=[...])

TypeScript

NPM
// Install
$ npm i @anthropic-ai/sdk

// Authenticate
$ export ANTHROPIC_API_KEY="sk-ant-..."

// First call
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();
const msg = await client.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 512,
  messages: [...]
});

cURL / HTTP

REST
# Direct HTTP — any language
$ curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "max_tokens": 512,
    "messages": [...]
  }'
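
The same call can be assembled in any language with an HTTP client. A Python-stdlib sketch that builds the request above without sending it; `urllib.request.urlopen(req)` would fire it:

```python
import json
import os
import urllib.request

body = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 512,
    "messages": [{"role": "user", "content": "Hello, Claude!"}],
}

# Same three headers as the curl example.
req = urllib.request.Request(
    "https://api.anthropic.com/v1/messages",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", "sk-ant-..."),
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    method="POST",
)
print(req.get_method(), req.full_url)
```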
// what to build

What people actually ship.

A snapshot of the workloads developers send through the Claude API every day. Each one is a real production pattern, not a "hello world."

// stack-up

Claude API vs. the alternatives.

No drama, no FUD. Here's a clear-eyed look at how the Claude API stacks up against the other major LLM providers — focused on what API users actually care about.

Feature                       Claude API             OpenAI API          Gemini API
Free credit on signup         $5                     Limited / promo     Free tier
Context window (top tier)     ~200K (1M beta)        1M+                 1M+
Tool use / function calling   Yes                    Yes                 Yes
Vision (image input)          Yes                    Yes                 Yes
Batch API discount            50% off                50% off             Limited
Prompt caching                Explicit, up to 90%    Auto, ~50%          Yes
MCP support                   Native                 No                  No
Image generation              No                     DALL·E              Imagen
Strongest for                 Reasoning + agents     Breadth + plugins   Multimodal + 1M context

Pricing and feature parity move fast. Verify current rates at anthropic.com/pricing before committing.

// faq

Common API questions.

The questions teams ask most often before they cut over a workload.

How fast can I get from signup to first response?

Roughly five minutes. Sign in at console.anthropic.com, generate a key, run pip install anthropic, paste five lines of code, hit run. The hardest step is usually deciding which model to pass.

What's the free credit and how far does it go?

$5 on signup, no payment method required to start. On Haiku 4.5 that's roughly 20 million input tokens — enough for thousands of small calls. On Sonnet it's about 1.7M input tokens. On Opus it's smaller but plenty to evaluate.
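
The arithmetic, using the input prices from the pricing table (the dictionary keys are illustrative):

```python
# Millions of input tokens that $5 buys at each tier's input price.
input_price_per_mtok = {"haiku-4-5": 0.25, "sonnet-4-6": 3.00, "opus-4-7": 15.00}

for model, price in input_price_per_mtok.items():
    mtok = 5.00 / price
    print(f"{model}: ~{mtok:.1f}M input tokens")
# → haiku-4-5: ~20.0M, sonnet-4-6: ~1.7M, opus-4-7: ~0.3M
```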

Is the API stable? What's the versioning story?

The Messages API is stable on v1. Models are pinned by API string (claude-sonnet-4-6, claude-opus-4-7, etc.) so you control upgrades. Anthropic publishes a deprecation timeline for older models with long sunset windows.

Can I switch models without rewriting code?

Yes — that's the design. Change the model string in your call. Same request shape, same response shape, same SDK. Most teams ship A/B comparisons across Haiku / Sonnet / Opus without touching anything else.
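
A sketch of what that looks like in practice: one request builder, three tiers, and nothing but the model string changes:

```python
def make_request(model: str, prompt: str) -> dict:
    """Identical request shape for every Claude tier."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

for model in ("claude-haiku-4-5", "claude-sonnet-4-6", "claude-opus-4-7"):
    req = make_request(model, "Classify this ticket: 'refund not received'")
    # client.messages.create(**req) — same call for every tier
    print(req["model"])
```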

Does the API support streaming?

Yes — pass stream: true (TS) or use messages.stream() (Python) and you get a server-sent event stream of incremental text. Time-to-first-token is typically a few hundred milliseconds, depending on prompt length.

What about tool use, JSON output, and structured data?

Full tool use (function calling) is supported with JSON-schema input definitions. For strict JSON output, define a tool with the schema you want and let Claude call it. Tool use docs →

How do prompt caching and the Batch API work together?

Prompt caching saves up to 90% on repeated context (system prompts, tool defs, large RAG context). The Batch API gives 50% off all input/output. They stack — async workloads with stable system prompts can run at a fraction of nominal cost.
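
Back-of-envelope math, assuming the two discounts multiply as described (Sonnet input pricing from the table above):

```python
# Illustrative cost for 1M cached input tokens on Sonnet 4.6,
# with the ~90% cache-read discount stacked on the 50% batch discount.
base = 3.00                     # $/M input tokens, Sonnet 4.6
cached = base * 0.10            # ~90% off cache reads
batched_cached = cached * 0.50  # Batch API halves it again
print(f"${batched_cached:.2f} per M tokens")  # → $0.15 per M tokens
```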

Will my API data be used to train models?

By default, Anthropic does not use API customer data to train its generative models. Enterprise plans support zero data retention. Full details at anthropic.com/privacy.

What are the rate limits?

Limits are tier-based and scale automatically as you spend more. New accounts start with reasonable defaults sufficient for development and small production workloads. For higher limits or guaranteed throughput, see Scale Plan / Enterprise.

What languages have official SDKs?

Python and TypeScript are first-party. Anthropic also publishes SDKs for Java and Go, with active community libraries for Ruby, Rust, .NET, PHP, and others. Direct HTTP via cURL works from anywhere.

Is there a status page?

Yes — status.anthropic.com tracks API uptime, incidents, and degraded regions. Subscribe via RSS or email for production monitoring.

Five lines.
Whatever you build next.

Generate a key. Paste the snippet. Watch tokens stream back. The whole loop takes less time than reading this page — and the next thing you ship gets noticeably smarter.

$ pip install anthropic · $ npm i @anthropic-ai/sdk · $5 free credit · no card

$ Get API Key →