PostgreSQL Storage
Use PostgreSQL for teams and production deployments where multiple users or processes write to the same store.
When to Use PostgreSQL
SQLite works well for single-user, single-machine setups. PostgreSQL is the right choice when:
- Multiple users — Several developers running `tapes start claude` against the same database
- Multiple processes — CI agents and local proxies writing concurrently
- Remote access — Proxy instances on different machines sharing one store
SQLite allows only one write transaction at a time. Multiple proxy instances queue behind each other, and running SQLite over a network filesystem risks lock failures and data corruption. PostgreSQL handles hundreds of concurrent writers over the network by design.
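The difference can be sketched as a toy model: serializing writes behind a single lock (the SQLite model) never overlaps work, while independent writers (the PostgreSQL model) run in parallel. This is illustrative only — none of these names are tapes code.

```typescript
// Toy model only, not tapes code. Awaiting each write in turn stands in
// for SQLite's one-writer-at-a-time lock; Promise.all stands in for
// PostgreSQL's concurrent writers. We track peak in-flight writes.
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

let inFlight = 0;
let peak = 0;

async function write(): Promise<void> {
  inFlight++;
  peak = Math.max(peak, inFlight);
  await sleep(20); // stand-in for the write transaction
  inFlight--;
}

// "SQLite": each writer waits for the previous one to finish.
async function serialized(): Promise<number> {
  inFlight = 0; peak = 0;
  for (let i = 0; i < 3; i++) await write();
  return peak; // → 1: writes never overlap
}

// "PostgreSQL": writers proceed independently.
async function concurrent(): Promise<number> {
  inFlight = 0; peak = 0;
  await Promise.all([write(), write(), write()]);
  return peak; // → 3: all writes in flight at once
}
```

With one proxy instance the difference rarely matters; it shows up as soon as a second process tries to write at the same time.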
Quick Start
Pass a PostgreSQL connection string to any serve command:
```shell
tapes serve --postgres "postgres://user:pass@localhost:5432/tapes"
```

Tapes runs schema migrations automatically on startup. No manual setup required.
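Connection-string typos are a common failure at this step. Node's built-in WHATWG `URL` class can sanity-check a DSN's shape before you hand it to tapes — this helper is illustrative and not part of the tapes CLI:

```typescript
// Illustrative helper, not part of tapes: validate the shape of a
// Postgres DSN before passing it to `tapes serve --postgres`.
function checkDsn(dsn: string): { host: string; port: string; database: string } {
  const u = new URL(dsn); // WHATWG URL parses the authority of any scheme
  if (u.protocol !== 'postgres:' && u.protocol !== 'postgresql:') {
    throw new Error(`unexpected scheme: ${u.protocol}`);
  }
  return { host: u.hostname, port: u.port, database: u.pathname.slice(1) };
}

checkDsn('postgres://tapes:tapes@localhost:5432/tapes');
// → { host: 'localhost', port: '5432', database: 'tapes' }
```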
Local PostgreSQL
1. Start PostgreSQL
```shell
docker run -d --name tapes-postgres \
  -e POSTGRES_USER=tapes \
  -e POSTGRES_PASSWORD=tapes \
  -e POSTGRES_DB=tapes \
  -p 5432:5432 \
  postgres:17
```

2. Start Tapes

```shell
tapes serve --postgres "postgres://tapes:tapes@localhost:5432/tapes"
```

3. Verify

```shell
curl http://localhost:8081/ping
```

Remote PostgreSQL
Use a hosted PostgreSQL provider for team-wide shared storage. Any PostgreSQL-compatible service works:
- Supabase — Managed Postgres with built-in auth and dashboard
- Neon — Serverless Postgres with branching and autoscaling
- PlanetScale — High-availability Postgres on NVMe storage with database branching
- AWS RDS — Self-managed Postgres on AWS infrastructure
Example: Supabase
- Create a project at supabase.com
- Go to Project Settings > Database and copy the connection string
- Start tapes with the remote DSN:
```shell
tapes serve \
  --postgres "postgres://postgres.[project-ref]:[password]@aws-0-us-east-1.pooler.supabase.com:6543/postgres" \
  --provider anthropic \
  --upstream "https://api.anthropic.com"
```

Each team member runs their own proxy instance pointing to the same remote database. All conversations are visible to everyone.
If your frontend runs on a hosted platform (Vercel, Netlify, etc.), the Tapes proxy must also be running somewhere accessible over the network — localhost won't work. You can deploy the proxy to any cloud provider that runs containers. A hosted Tapes Cloud service is in development that will remove this requirement.
Config File
Save the connection string in .tapes/config.toml so you don't need the flag every time:
```toml
# .tapes/config.toml
[storage]
postgres_dsn = "postgres://user:pass@host:5432/tapes?sslmode=require"
```

CLI flags override config file values. Use `tapes config set storage.postgres_dsn <dsn>` to set it without editing the file directly.
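The precedence rule (flag beats config file) can be sketched as a small resolver. The function and its inputs are hypothetical, for illustration only — tapes implements this internally:

```typescript
// Hypothetical sketch of the precedence described above: a --postgres
// CLI flag wins over storage.postgres_dsn from .tapes/config.toml.
interface StorageConfig {
  postgres_dsn?: string;
}

function resolveDsn(cliFlag: string | undefined, config: StorageConfig): string | undefined {
  return cliFlag ?? config.postgres_dsn; // flag first, then config file
}
```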
Vercel AI SDK Integration
Route Vercel AI SDK requests through tapes to capture conversations in PostgreSQL. Point the provider's baseURL at your tapes proxy. For a complete working example, see tapes-ai-sdk-example.
```typescript
import { createAnthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const anthropic = createAnthropic({
  baseURL: 'http://localhost:8080',
});

const { text } = await generateText({
  model: anthropic('claude-sonnet-4-5'),
  prompt: 'Explain merkle trees in one paragraph.',
});
```

For OpenAI models:
```typescript
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({
  baseURL: 'http://localhost:8080/v1',
});
```

Start tapes with the matching provider flag: `--provider anthropic` or `--provider openai`.
The example repo also supports TAPES_POSTGRES_DSN as an environment variable, so you can configure the storage backend without changing code:
```shell
# .env
TAPES_POSTGRES_DSN="postgres://user:pass@localhost:5432/tapes"
```

Team Setup
A typical team deployment:
- Provision a shared PostgreSQL database (Neon, Supabase, RDS)
- Add the DSN to each developer's `~/.tapes/config.toml`
- Each developer runs `tapes start claude` or `tapes serve` locally
- All conversations flow into the shared database
- Use `tapes deck` or the API to query across the entire team's history
Next Steps
- AI SDK example — Full working example with PostgreSQL support
- Enable semantic search — Find conversations by meaning across the team
- Manage session history — Checkout and branch from past conversations
- Storage reference — Compare all storage backends