Inspiration

Mock interviews suck because feedback is vague. “You said um a lot” isn’t actionable. I wanted a coach that watches how I speak in real time, measures it, and tells me exactly what to fix before the next rep.

What it does

VibeCheck is an AI interview coach with three parts:

Pre-game: Paste a job link or description. We auto-extract company, role, and key skills, then generate a focused question bank.

Live interview: A Tavus video agent interviews you. While you talk, we track WPM, pause rate, fillers/min, basic pitch movement, and emotion trends.

Post-game: You get a session timeline, strengths, and 2-3 concrete drills to improve on the next run. Sessions are saved so you can compare runs.

How we built it

Frontend: Next.js + React + TypeScript + Tailwind.

Interview agent: Tavus web component + API for persona, context, and objective wiring.

Real-time analytics:

Audio stream via WebRTC + Web Audio API

STT for word-level text and timestamps, from which we compute WPM and pause rate

Simple filler detection on transcript tokens

Basic pitch estimate from audio frames

In-browser emotion signal using @xenova/transformers (GoEmotions family) for lightweight sentiment on transcript chunks
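The WPM, pause, and filler steps above boil down to one pass over timed STT words. Here's a minimal sketch, assuming the STT engine emits word objects with start/end timestamps in seconds (a common shape for word-level timings); the interface names, filler list, and 0.5 s pause threshold are illustrative, not our exact code.

```typescript
interface TimedWord {
  text: string;  // recognized word
  start: number; // seconds from session start
  end: number;   // seconds from session start
}

interface SpeechMetrics {
  wpm: number;           // words per minute over the span
  pausesPerMin: number;  // silences longer than the threshold, per minute
  fillersPerMin: number; // filler tokens per minute
}

// Small, conservative filler list; single-token matching keeps it cheap.
const FILLERS = new Set(["um", "uh", "er", "ah", "hmm"]);

function computeMetrics(
  words: TimedWord[],
  pauseThresholdSec = 0.5,
): SpeechMetrics {
  if (words.length === 0) return { wpm: 0, pausesPerMin: 0, fillersPerMin: 0 };
  const durationMin = Math.max(
    (words[words.length - 1].end - words[0].start) / 60,
    1e-6, // guard against zero-length spans
  );
  let pauses = 0;
  for (let i = 1; i < words.length; i++) {
    if (words[i].start - words[i - 1].end > pauseThresholdSec) pauses++;
  }
  const fillers = words.filter((w) =>
    FILLERS.has(w.text.toLowerCase().replace(/[^a-z]/g, "")),
  ).length;
  return {
    wpm: words.length / durationMin,
    pausesPerMin: pauses / durationMin,
    fillersPerMin: fillers / durationMin,
  };
}
```

All three metrics come from the same timestamps, so one STT pass feeds every tile.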

State & storage: Session stats are cached client-side; structure is ready to persist to a backend later.

UI: Live tiles for metrics, session timeline, and “End & Review” flow.
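The "basic pitch estimate" is the classic time-domain autocorrelation trick. A minimal sketch, assuming one Float32Array frame pulled from the Web Audio graph plus the AudioContext sample rate; the RMS gate and 60-400 Hz lag bounds are illustrative defaults, not our tuned values.

```typescript
function estimatePitchHz(frame: Float32Array, sampleRate: number): number | null {
  // RMS gate: skip near-silent frames so noise doesn't produce junk pitches.
  let rms = 0;
  for (const s of frame) rms += s * s;
  rms = Math.sqrt(rms / frame.length);
  if (rms < 0.01) return null;

  // Search lags covering roughly 60-400 Hz, a typical speaking range.
  const minLag = Math.floor(sampleRate / 400);
  const maxLag = Math.floor(sampleRate / 60);
  let bestLag = -1;
  let bestCorr = 0;
  for (let lag = minLag; lag <= maxLag && lag < frame.length; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < frame.length; i++) {
      corr += frame[i] * frame[i + lag];
    }
    if (corr > bestCorr) {
      bestCorr = corr;
      bestLag = lag;
    }
  }
  // The best-correlated lag is one pitch period.
  return bestLag > 0 ? sampleRate / bestLag : null;
}
```

Plain autocorrelation is octave-error-prone on real voices, which is part of why mic gain and noise wreak havoc on it (more on that below); for trend lines rather than absolute pitch, it's good enough.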

Challenges we ran into

Browser ML quirks: Getting @xenova/transformers models to load reliably without auth hiccups was touchy.

Latency trade-offs: Real-time analytics vs. accurate STT is a tightrope. We tuned buffer sizes and batched updates to keep UI snappy.

Agent handoff: Making Tavus persona, system prompt, and objectives configurable via API while keeping the UI simple took iteration.

Signal hygiene: Microphone gain, background noise, and browser differences wreak havoc on pitch and pause detection.

Accomplishments that we're proud of

A working end-to-end rep: job parsing → live interview → immediate analytics and review.

On-device analysis path for emotion and filler detection to keep data local by default.

A clean, focused UI that doesn’t drown users in charts and actually leads to next steps.

What we learned

Real-time UX lives or dies by incremental updates and back-pressure.

Simple heuristics plus light ML beats heavy models for hackathon-speed feedback.

Agent quality depends far more on a sharp persona + objectives spec than on model choice.
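One concrete shape the back-pressure lesson took, sketched below (illustrative, not our exact implementation): bound the audio-frame queue and shed the oldest chunk whenever analysis falls behind, so end-to-end latency stays flat instead of drifting upward over a long session.

```typescript
class BoundedQueue<T> {
  private items: T[] = [];
  public dropped = 0; // how many frames were shed; useful as a health metric

  constructor(private capacity: number) {}

  push(item: T): void {
    if (this.items.length >= this.capacity) {
      this.items.shift(); // shed the stalest frame first
      this.dropped++;
    }
    this.items.push(item);
  }

  shift(): T | undefined {
    return this.items.shift();
  }

  get length(): number {
    return this.items.length;
  }
}
```

Dropping old frames is the right failure mode for live coaching: a slightly gappy pitch trace beats metrics that lag seconds behind the speaker.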

What’s next for VibeCheck

Backend + history: Persist sessions, show trendlines, and surface “delta since last run.”

Deeper coaching: Skill-tag answers, detect STAR structure, and auto-generate targeted drills.

Question packs: Company- and role-specific banks with difficulty scaling.

Export & sharing: One-pager summary for mentors or career services.

Privacy controls: Clear toggles for local-only vs. cloud processing and data retention.

Built With

next.js, react, typescript, tailwind, tavus, webrtc, web-audio-api, @xenova/transformers
