Inspiration
Technical interviews for IB, Quant, and SWE roles aren’t just about getting the right answer; they test how you think, communicate, manage time, and recover when stuck. Yet most prep tools focus on static problem banks or passive mock interviews with no real feedback loop.
We were inspired by the gap between solving practice problems and performing in a real interview. We wanted to build a system that adapts to a candidate’s skill level, simulates authentic interviewer interaction, and evaluates the full interview signal just like a real hiring process.
What it does
Our platform is an end-to-end technical interview simulator that replicates real interviews for IB, Quant, and SWE roles.
Users select a role and target company, complete a short diagnostic, and receive an ELO-based skill rating. From there, the system dynamically recommends problems in the user’s growth zone, prioritizing weak areas while reinforcing strengths.
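The "growth zone" selection described above could be sketched roughly as follows. This is an illustrative assumption on our part, not the production logic: the 50–150 point ELO window, the problem field names, and the weak-topic tie-breaking are all placeholders.

```python
# Hypothetical sketch of growth-zone problem recommendation:
# favor problems slightly above the candidate's rating, and among
# those, prioritize the candidate's weakest topics.

def recommend(problems, candidate_elo, weak_topics, window=(50, 150)):
    lo, hi = window
    # Keep only problems whose difficulty sits just above the candidate.
    in_zone = [p for p in problems
               if lo <= p["elo"] - candidate_elo <= hi]
    # Weak topics first (False sorts before True), then smallest gap.
    in_zone.sort(key=lambda p: (p["topic"] not in weak_topics,
                                p["elo"] - candidate_elo))
    return in_zone
```

A problem well below the candidate's rating is filtered out entirely, so practice time is spent where improvement is possible.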
Users then interview with a live AI voice interviewer that asks follow-ups, gives adaptive hints, and reacts in real time as they think aloud. During the session, the system monitors code, diagrams, and time usage to assess problem-solving approach, communication clarity, correctness, and efficiency.
After each interview, users receive a detailed performance report and an updated ELO score, creating a closed feedback loop that continuously adapts their preparation.
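For reference, the rating update behind such a loop could follow the standard ELO formula, with the problem treated as the "opponent." The K-factor and the mapping from interview performance to a score in [0, 1] are assumptions here, not the tuned values used in the system.

```python
# Standard ELO update, sketched for a candidate-vs-problem matchup.
# K-factor and score mapping are illustrative assumptions.

def expected_score(candidate_elo: float, problem_elo: float) -> float:
    """Probability the candidate 'beats' (solves) the problem."""
    return 1.0 / (1.0 + 10 ** ((problem_elo - candidate_elo) / 400))

def update_elo(candidate_elo: float, problem_elo: float,
               score: float, k: float = 32.0) -> float:
    """score in [0, 1]: e.g. 1.0 = solved cleanly,
    0.5 = solved with heavy hints, 0.0 = unsolved."""
    return candidate_elo + k * (score - expected_score(candidate_elo, problem_elo))
```

A nice property of this scheme: solving a harder problem moves the rating more than solving an easy one, which keeps the difficulty calibration self-correcting.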
Crucially, we integrated a dedicated Learn Tab to close the knowledge gap. If a user struggles with a specific concept, they don't just get a text solution; they can generate on-demand explainer videos from a prompt. These AI-generated videos visually break down the intuition behind the logic, turning the platform from a simple testing ground into an active teaching tool.
How we built it
We built DynoMock as a modular AI system with several integrated components working in concert:
We designed a cognitive AI system around a decision flow with four key stages: Perception, Reasoning, Planning, and Action. The Perception layer uses a Diagnostic & Data Ingestion Engine that analyzes role requirements, company specifics, candidate level, historical performance, and real-time speech and coding activity. The Reasoning layer employs an LLM Interviewer Agent powered by OpenAI GPT-4 to evaluate problem-solving approaches and identify areas for improvement. The Planning layer features an Adaptive Recommendation & Hint Engine that dynamically adjusts difficulty and generates targeted follow-ups. Finally, the Action layer executes live interviews and produces structured evaluations.
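One interviewer turn through the four stages can be sketched as a simple pipeline. Everything here is a toy stand-in: the function names, the idle-time heuristic, and the canned utterances are our assumptions for illustration, not the real engines (which call the LLM and the ingestion services described above).

```python
# Minimal sketch of one Perception -> Reasoning -> Planning -> Action turn.

def perceive(event: dict) -> dict:
    """Perception: normalize raw speech/code events into signals."""
    return {"stuck": event.get("idle_seconds", 0) > 120,
            "code_len": len(event.get("code", ""))}

def reason(signals: dict) -> str:
    """Reasoning: classify the candidate's current state."""
    return "needs_hint" if signals["stuck"] else "on_track"

def plan(assessment: str) -> dict:
    """Planning: choose the interviewer's next move."""
    if assessment == "needs_hint":
        return {"action": "hint", "level": 1}
    return {"action": "listen"}

def act(decision: dict) -> str:
    """Action: render the chosen move as an interviewer utterance."""
    if decision["action"] == "hint":
        return "Here's a nudge: walk me through your current approach."
    return "(interviewer keeps listening)"

def run_turn(event: dict) -> str:
    return act(plan(reason(perceive(event))))
```

Keeping the stages as separate functions mirrors the modularity noted later: each layer can be iterated on or swapped out without touching the others.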
For the frontend, we used React, Next.js, and Three.js to create an interactive dashboard with integrated code editor, whiteboard, and interview interface. The backend runs on Node.js and Python to handle user management, ELO-based skill tracking, and interview orchestration. We integrated LiveKit Cloud for real-time voice capabilities including speech-to-text and text-to-speech. The Overshoot API powers our screen monitoring, code parsing, and error detection features. Data persistence is managed through PostgreSQL/MongoDB for user data and problem databases, with Redis handling real-time session state.
Challenges we ran into
Real-time Voice Integration: Synchronizing natural conversational flow with technical problem-solving proved complex. We had to ensure the AI could handle interruptions, maintain context across multiple exchanges, and provide timely responses without disrupting the candidate's thought process.
Adaptive Difficulty Calibration: Building an ELO-based system that accurately gauges candidate skill level and adjusts interview difficulty in real-time required extensive testing and fine-tuning to avoid interviews that were either too easy or frustratingly difficult.
Screen and Code Monitoring: Implementing reliable screen monitoring and live code analysis through the Overshoot API while maintaining performance and providing meaningful feedback without overwhelming the candidate was technically challenging.
Balancing Realism with Accessibility: Creating an interview experience realistic enough to provide valuable preparation while remaining accessible and not intimidating for candidates at various skill levels required careful prompt engineering and system design.
Accomplishments that we're proud of
True 24/7 Autonomous Operation: We've created a system that genuinely operates continuously across all time zones, providing consistent interview experiences regardless of when candidates need to practice.
Zero Emotional Bias: Our AI delivers completely objective, data-driven assessments free from the unconscious biases that can affect human interviewers, ensuring every candidate receives fair evaluation.
Comprehensive Multi-Modal Evaluation: We successfully integrated voice interaction, code quality analysis, and structured feedback into a single cohesive system that evaluates all critical dimensions of technical interviews.
Democratizing Access: By eliminating the high costs associated with traditional mock interview coaching, we've made high-quality technical interview preparation accessible to candidates who previously couldn't afford it.
Dynamic Personalization: Our adaptive difficulty system ensures each candidate receives a truly personalized experience that evolves with their skill level, making practice time maximally efficient.
What we learned
We learned that effective interview simulation requires not just technical knowledge but deep understanding of industry-specific interview patterns across Software Engineering, Quantitative Finance, and Investment Banking.
Building natural conversational AI that can handle the nuances of technical interviews, including strategic hints, follow-up questions, and adaptive pacing, required sophisticated prompt engineering and state management.
We discovered that candidates benefit most from structured, actionable feedback across specific dimensions (problem-solving, communication, code quality, time management) rather than generic assessments.
Our decision to build with modular components (separate Diagnostic, Reasoning, Planning, and Action engines) proved crucial for iterating quickly and maintaining system reliability.
What's next for DynoMock
We plan to extend beyond Software Engineering, Quantitative Finance, and Investment Banking to cover Data Science, Product Management, Machine Learning Engineering, and other competitive technical roles.
We're also adding anonymized peer comparison metrics, leaderboards for motivation, and the ability to share anonymized interview experiences so candidates can learn from each other's journeys.
