Practice before it costs you.
DryRun is a voice AI workplace conversation simulator. Before you walk into a hard conversation — asking for a raise, resigning, delivering bad news — you practice it live against an AI playing the actual person, with their personality, their pressure tactics, and optionally their cloned voice.
- Setup — Choose a scenario (ask for a raise, resign, push back on a decision, etc.), configure who you're talking to, their personality, their name, and your biggest fear going in.
- Session — Have a live voice conversation with the AI playing the other person in character. Real-time tactic detection shows you when pressure moves are being used and suggests counter-moves.
- Debrief — Get a scored analysis of your performance: tactics the AI used, what you handled well, and specific fixes for next time.
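The three phases pass a scenario configuration between screens. A minimal sketch of the data shapes involved (names and fields here are hypothetical; the real interfaces live in `lib/types.ts` and may differ):

```typescript
// Hypothetical shapes for the three phases. Field names are assumptions
// based on the flow described above, not the actual lib/types.ts contents.
export interface ScenarioConfig {
  scenario: string;    // e.g. "ask for a raise", "resign"
  personRole: string;  // who you're talking to
  personality: string; // e.g. "dismissive"
  personName: string;
  userFear: string;    // your biggest fear going in
}

export interface TacticEvent {
  tactic: string;      // pressure move detected in real time
  counter: string;     // suggested counter-move
  timestamp: number;
}

export interface Debrief {
  score: number;       // scored analysis (scale is an assumption)
  tacticsUsed: string[];
  strengths: string[]; // what you handled well
  fixes: string[];     // specific fixes for next time
}
```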
| Layer | Tool |
|---|---|
| Framework | Next.js 16.2.1 (App Router, TypeScript) |
| Voice AI | Vapi (@vapi-ai/web) |
| AI model | GPT-4o-mini via Concentrate.ai |
| Search (Vapi tool) | Tavily |
| Voice cloning | Cartesia |
| Styling | Tailwind v4 + CSS variables |
```
app/
  page.tsx               # Setup screen — scenario config form
  session/page.tsx       # Live session — Vapi voice call + tactic sidebar
  debrief/page.tsx       # Post-session debrief — scored analysis
  api/
    debrief/route.ts     # POST /api/debrief — analyze transcript via Concentrate
    search/route.ts      # POST /api/search — Tavily proxy for Vapi tool calls
    clone-voice/route.ts # POST /api/clone-voice — Cartesia voice cloning
  layout.tsx             # Root layout — fonts, metadata
  globals.css            # Design system CSS variables + responsive classes
lib/
  types.ts               # Shared TypeScript interfaces
  constants.ts           # Scenario, role, personality dropdown data
  vapi.ts                # Vapi browser SDK singleton
```
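The `lib/vapi.ts` singleton might look roughly like this. This is a sketch of the pattern only: a stub class stands in for the real `@vapi-ai/web` import so the snippet is self-contained.

```typescript
// Sketch of a lazy browser-SDK singleton (pattern only).
// In the real lib/vapi.ts this would be: import Vapi from "@vapi-ai/web";
// The stub below stands in for that class.
class Vapi {
  constructor(public publicKey: string) {}
}

let instance: Vapi | null = null;

/** Create the Vapi client once, so re-renders and hot reloads
 *  don't open multiple voice connections. */
export function getVapi(): Vapi {
  if (!instance) {
    instance = new Vapi(process.env.NEXT_PUBLIC_VAPI_KEY ?? "");
  }
  return instance;
}
```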
Install dependencies:

```bash
npm install
```

Create `.env.local` in the project root:

```bash
# Vapi — get from dashboard.vapi.ai
NEXT_PUBLIC_VAPI_KEY=your_vapi_public_key
NEXT_PUBLIC_VAPI_ASSISTANT_ID=your_assistant_id

# Tavily — get from tavily.com
TAVILY_API_KEY=your_tavily_key

# Concentrate — get from concentrate.ai
CONCENTRATE_API_KEY=your_concentrate_key
CONCENTRATE_BASE_URL=https://api.concentrate.ai/v1

# Cartesia — get from cartesia.ai (only needed for voice cloning)
CARTESIA_API_KEY=your_cartesia_key
```

Run the dev server:

```bash
npm run dev
```

Open http://localhost:3000.
In your Vapi dashboard, the assistant should have:
- A search tool pointing to `POST https://your-domain.com/api/search` — used for real-time context during the session
- Variable values: `scenario`, `personRole`, `personality`, `personName`, `userFear` (these are overridden at call start anyway)
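The proxy behind that search tool could look roughly like this. This is a sketch of `app/api/search/route.ts` under assumptions: the handler uses the standard `Request`/`Response` objects App Router accepts, and the Tavily request/response shape (`api_key`, `query`, `results[].content`) follows its public REST docs and may need adjusting.

```typescript
// Sketch of a Tavily proxy route so the browser (and Vapi's tool call)
// never sees the TAVILY_API_KEY. Shapes are assumptions from Tavily's docs.

type TavilyResult = { title: string; content: string };

/** Trim a Tavily response to the short snippets a voice assistant can use. */
export function toToolResults(data: { results?: TavilyResult[] }) {
  return (data.results ?? []).slice(0, 3).map((r) => ({
    title: r.title,
    snippet: r.content,
  }));
}

export async function POST(req: Request): Promise<Response> {
  const { query } = await req.json();
  const res = await fetch("https://api.tavily.com/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ api_key: process.env.TAVILY_API_KEY, query }),
  });
  return Response.json({ results: toToolResults(await res.json()) });
}
```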
The system prompt and first message are fully overridden at call start via `assistantOverrides`, so the dashboard prompt is just a fallback.
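Building those overrides might look like this. The variable names come from the setup screen above; the prompt text, first message, and the exact override structure are illustrative assumptions, not the app's actual code.

```typescript
// Hypothetical builder for the call-start assistantOverrides.
// Field names follow Vapi's override shape; prompt content is illustrative.
type SetupConfig = {
  scenario: string;
  personRole: string;
  personality: string;
  personName: string;
  userFear: string;
};

export function buildOverrides(cfg: SetupConfig, clonedVoiceId?: string) {
  return {
    variableValues: { ...cfg },
    // Replaces the dashboard system prompt entirely
    model: {
      provider: "openai",
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content: `You are ${cfg.personName}, a ${cfg.personality} ${cfg.personRole}. Stay in character for this scenario: ${cfg.scenario}.`,
        },
      ],
    },
    firstMessage: "Hi, you wanted to talk?",
    // Only set when the user cloned a voice on the setup screen
    ...(clonedVoiceId
      ? { voice: { provider: "cartesia", voiceId: clonedVoiceId } }
      : {}),
  };
}
```

The session page would then pass this object as the second argument when starting the call with the Vapi client.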
When the user selects "Clone their voice" on the setup screen and uploads a ~15 second audio clip:
- The clip is sent to `POST /api/clone-voice`
- The server calls Cartesia's voice cloning API
- The returned `voiceId` is passed to Vapi via `assistantOverrides.voice`

Requires `CARTESIA_API_KEY` to be set.
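A sketch of what that route handler could do, using the standard `Request`/`Response` objects App Router accepts. The Cartesia endpoint URL, header name, and payload fields here are assumptions and would need checking against Cartesia's docs; only the overall flow (receive clip, forward to Cartesia, return `voiceId`) is from this README.

```typescript
// Hypothetical sketch of app/api/clone-voice/route.ts.
// Endpoint URL, header, and field names are assumptions, not verified API details.
export async function POST(req: Request): Promise<Response> {
  const apiKey = process.env.CARTESIA_API_KEY;
  if (!apiKey) {
    return Response.json({ error: "CARTESIA_API_KEY not set" }, { status: 500 });
  }

  // Pull the uploaded ~15 second clip out of the multipart form
  const form = await req.formData();
  const clip = form.get("audio");
  if (!(clip instanceof File)) {
    return Response.json({ error: "missing audio file" }, { status: 400 });
  }

  // Forward the clip to Cartesia's cloning API (URL is an assumption)
  const upstream = new FormData();
  upstream.append("clip", clip);
  const res = await fetch("https://api.cartesia.ai/voices/clone", {
    method: "POST",
    headers: { "X-API-Key": apiKey },
    body: upstream,
  });
  if (!res.ok) {
    return Response.json({ error: "cloning failed" }, { status: 502 });
  }

  const { id } = await res.json();
  return Response.json({ voiceId: id });
}
```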
Deploy to Vercel:

```bash
vercel deploy
```

Add all environment variables in the Vercel project settings under Settings → Environment Variables.