Voice-based technical interview simulator powered by AI
Try it Live
The idea for Offscript came from a painful realization: talented engineers were failing interviews not because they couldn't solve problems, but because they couldn't explain their solutions. Traditional coding practice platforms like LeetCode train you to write code, but they don't prepare you for the most critical part of the interview: the conversation.
Offscript bridges that gap. It's a voice-based technical interview simulator that helps engineers practice real interview scenarios through live conversation. By integrating voice AI, real-time code analysis, and intelligent feedback, Offscript creates a realistic interview environment where you can practice explaining your thought process, defending your approach, and thinking out loud, just like in a real interview.
- Voice-First Interviewing - Practice explaining your approach naturally through conversation, powered by Vapi AI
- Live Code Editor - Write code in real-time with syntax highlighting and multi-language support (Python, JavaScript, Java, C++, Go)
- AI-Powered Feedback - Get detailed performance ratings on communication, problem-solving, and implementation using Google Gemini
- Structured Evaluation - Receive letter grades (A+ to F) across three key categories with actionable feedback
- Full Transcript Recording - Review complete conversation transcripts with timestamps
- Real LeetCode Problems - Practice with authentic coding challenges covering arrays, strings, trees, and linked lists
- Interview Timer - Built-in 45-minute timer to simulate real interview time constraints
- Dark/Light Mode - Comfortable coding interface with theme support
- Responsive Design - Optimized desktop experience for focused practice
Offscript is built as a modern full-stack application with separate frontend and backend services:
- Framework: Next.js 15 with React 19 and TypeScript
- UI Components: shadcn/ui with Tailwind CSS for beautiful, accessible interfaces
- Code Editor: Monaco Editor (VS Code's editor) for professional code editing experience
- Voice Integration: Vapi AI Web SDK for real-time voice communication
- State Management: React hooks and context for managing interview state
- Routing: Next.js App Router for seamless navigation
- Framework: FastAPI for high-performance Python API
- Database: SQLite for storing interview transcripts and ratings
- AI Integration:
- Vapi AI: Handles voice-to-text conversion and natural language processing
- Google Gemini 2.5: Powers intelligent transcript analysis and grading
- Endpoints:
  - `/api/leetcode` - Serves random coding problems with solutions
  - `/api/vapi_webhook` - Receives real-time voice data during interviews
  - `/api/transcript` - Stores and retrieves interview transcripts
  - `/api/rate` - Generates AI-powered performance ratings
```
User Voice → Vapi AI → Backend Webhook → Real-time Transcript
                                               ↓
User Code → Monaco Editor → Code Context → AI Interviewer
                                               ↓
Interview End → Transcript Storage → Gemini AI Analysis → Performance Report
```
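The webhook leg of this flow can be sketched as a handler that accumulates transcript messages per call. The payload shape below is hypothetical; Vapi's actual webhook event schema differs, and the real app persists to SQLite rather than memory.

```python
from collections import defaultdict

# In-memory transcript store keyed by call id (illustrative only; the real
# backend writes each message through to the database).
transcripts: dict[str, list[dict]] = defaultdict(list)


def handle_vapi_webhook(payload: dict) -> None:
    """Append one transcript message from a (hypothetical) webhook event.

    Non-transcript events (status updates, call lifecycle) are ignored here.
    """
    if payload.get("type") != "transcript":
        return
    transcripts[payload["call_id"]].append(
        {
            "role": payload["role"],          # "user" or "assistant"
            "text": payload["text"],
            "timestamp": payload["timestamp"],
        }
    )
```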
- Next.js 15 - React framework with App Router
- React 19 - UI library
- TypeScript - Type-safe development
- Tailwind CSS 4 - Utility-first styling
- shadcn/ui - Component library
- Monaco Editor - Code editor
- Vapi AI Web SDK - Voice AI integration
- Lucide React - Icon library
- next-themes - Theme management
- FastAPI - Modern Python web framework
- Google Gemini AI - Advanced language model for analysis
- SQLite - Lightweight database
- Pydantic - Data validation
- Uvicorn - ASGI server
- python-dotenv - Environment management
- Node.js 20+ and npm/yarn
- Python 3.11+
- Vapi AI Account - Sign up here
- Google Gemini API Key - Get it here
1. Navigate to the backend directory

   ```bash
   cd backend
   ```

2. Create a virtual environment

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

4. Create a `.env` file

   ```bash
   touch .env
   ```

5. Add your environment variables

   ```bash
   GEMINI_API_KEY=your_gemini_api_key_here
   ```

6. Run the backend server

   ```bash
   uvicorn main:app --reload --host 0.0.0.0 --port 8000
   ```

   The backend will be available at `http://localhost:8000`.
1. Navigate to the frontend directory

   ```bash
   cd frontend
   ```

2. Install dependencies

   ```bash
   npm install
   ```

3. Create a `.env.local` file

   ```bash
   touch .env.local
   ```

4. Add your environment variables

   ```bash
   NEXT_PUBLIC_VAPI_ASSISTANT_ID=your_vapi_assistant_id
   NEXT_PUBLIC_VAPI_PUBLIC_KEY=your_vapi_public_key
   ```

5. Run the development server

   ```bash
   npm run dev
   ```

   The frontend will be available at `http://localhost:3000`.
Frontend:

```bash
cd frontend
npm run build
npm start
```

Backend:

```bash
cd backend
uvicorn main:app --host 0.0.0.0 --port 8000
```
1. Visit the Homepage - Navigate to `http://localhost:3000` or offscript.codestacx.com
2. Click "Start Practicing" - Choose your practice path
3. Grant Microphone Permission - Allow browser access to your microphone
4. Click "Start Interview" - The AI interviewer will begin
   - Speak naturally - Explain your thought process as you would in a real interview
   - Write code - Use the Monaco editor to implement your solution
   - Ask questions - The AI interviewer responds to clarifying questions
   - Track time - Keep an eye on the 45-minute countdown timer
   - Review transcript - See real-time transcription of the conversation
5. Click "End Interview" - Confirm you want to finish
6. Wait for Analysis - Gemini AI processes your transcript (takes ~10-20 seconds)
7. Review Feedback - Get detailed grades and suggestions:
   - Communication Grade - How clearly you explained your approach
   - Problem-Solving Grade - Your analytical and algorithmic thinking
   - Implementation Grade - Code quality and correctness
   - Overall Assessment - Key strengths and 3 improvement points
Problem: Keeping the voice conversation synchronized with code changes without lag. Solution: Implemented debounced code context updates and WebSocket-based communication through Vapi AI's SDK, ensuring smooth real-time interaction without overwhelming the backend.
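The debouncing idea can be sketched generically as follows, in Python for brevity (the real implementation lives in the TypeScript frontend; class and parameter names here are illustrative):

```python
import threading


class Debouncer:
    """Coalesce rapid updates: only the last value inside the quiet window
    is delivered, so fast typing doesn't flood the backend with code-context
    updates."""

    def __init__(self, wait_seconds: float, callback):
        self.wait = wait_seconds
        self.callback = callback
        self._timer: threading.Timer | None = None

    def update(self, value) -> None:
        # Cancel any pending delivery and restart the quiet window.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.wait, self.callback, args=(value,))
        self._timer.start()
```

With a ~300 ms window, a burst of keystrokes produces a single context update once the user pauses, rather than one per keystroke.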
Problem: Providing meaningful, specific feedback that goes beyond generic advice. Solution: Engineered detailed prompts for Gemini AI with structured output schemas (Pydantic models), ensuring consistent, actionable feedback across three evaluation dimensions with letter grades and specific examples.
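Structured grading output of this kind can be enforced with Pydantic models along these lines. The field names and grade scale here are illustrative, not the project's actual schema:

```python
from typing import Literal

from pydantic import BaseModel

# Restricting the grade to a Literal makes the model reject anything the
# LLM emits outside the allowed scale.
Grade = Literal["A+", "A", "A-", "B+", "B", "B-", "C+", "C", "C-", "D", "F"]


class CategoryRating(BaseModel):
    grade: Grade       # letter grade, A+ through F
    feedback: str      # specific, actionable comments for this category


class InterviewRating(BaseModel):
    communication: CategoryRating
    problem_solving: CategoryRating
    implementation: CategoryRating
    improvement_points: list[str]  # e.g. three concrete suggestions
```

Validating the model output against a schema like this turns a free-form LLM response into a typed object the frontend can render consistently.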
Problem: Making the AI interviewer feel like a real person, not a robotic questioner. Solution: Integrated Vapi AI's conversational AI capabilities with custom assistant configuration, allowing for natural follow-up questions, clarifications, and adaptive responses based on candidate input.
Problem: Storing complex interview data (transcripts, metadata, ratings) efficiently. Solution: Designed a SQLite schema with JSON serialization for flexible transcript storage, supporting both raw conversation data and structured evaluation results with timestamps.
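One plausible shape for such a schema is sketched below; the table and column names are assumptions for illustration, and an in-memory database stands in for the file-backed one:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # the real app uses a file-backed database
conn.execute(
    """CREATE TABLE interviews (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        started_at TEXT NOT NULL,
        transcript_json TEXT NOT NULL,  -- raw conversation, serialized as JSON
        rating_json TEXT                -- structured evaluation, filled in after analysis
    )"""
)


def save_interview(started_at: str, transcript: list[dict]) -> int:
    """Persist a finished interview's transcript; return its row id."""
    cur = conn.execute(
        "INSERT INTO interviews (started_at, transcript_json) VALUES (?, ?)",
        (started_at, json.dumps(transcript)),
    )
    conn.commit()
    return cur.lastrowid


def load_transcript(interview_id: int) -> list[dict]:
    """Deserialize a stored transcript back into message dicts."""
    row = conn.execute(
        "SELECT transcript_json FROM interviews WHERE id = ?", (interview_id,)
    ).fetchone()
    return json.loads(row[0])
```

Keeping the transcript as a JSON column trades queryability inside messages for flexibility: the message shape can evolve without migrations.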
- Seamless Voice Integration - Built a production-ready voice interview system in 48 hours
- AI-Powered Analysis - Created an intelligent evaluation system that provides genuinely helpful feedback
- Developer Experience - Crafted a polished, professional interface that engineers actually want to use
- End-to-End Solution - Delivered a complete product from voice input to detailed performance reports
- Voice AI Complexity - Working with real-time voice requires careful state management and error handling
- Prompt Engineering - Structured outputs from LLMs need precise prompt design and schema definition
- Full-Stack Integration - Coordinating TypeScript/React frontend with Python/FastAPI backend taught us valuable lessons about API design
- User-Centric Design - Interview practice is deeply personal; the UI must be calming and focused, not distracting
- Company-Specific Modes - Practice interviews styled after Google, Meta, Amazon, etc.
- Video Recording - Record yourself for body language analysis
- Progress Tracking - Dashboard showing improvement over time
- Custom Problems - Allow users to upload their own coding challenges
- Behavioral Interview Practice - Expand beyond technical to STAR-method behavioral questions
- System Design Interviews - Voice-guided architecture discussions with diagramming
- Mock Interview Matching - Connect users for peer-to-peer practice sessions
- Mobile App - iOS/Android apps for practice on the go
- Multi-Language Support - Interface and AI interviewer in multiple languages
We welcome contributions! Here's how you can help:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Please ensure your code follows the existing style and includes appropriate tests.
- Follow TypeScript/Python best practices
- Write clear commit messages
- Update documentation for new features
- Test voice features manually before submitting
This project is licensed under the MIT License - see the LICENSE file for details.
- HackHarvard 2025 - For providing the platform and inspiration
- Vapi AI - For powerful voice AI infrastructure
- Google Gemini - For intelligent transcript analysis
- Vercel - For seamless frontend deployment
- All Open Source Contributors - For the amazing tools that made this possible
Built with ❤️ by the Offscript Team at HackHarvard 2025
Questions or feedback? Open an issue or reach out!