Inspiration

The "leaky pipeline" between academic knowledge and industry performance. Most CS students can solve an algorithm problem on a whiteboard, but they freeze during the "think aloud" phase of a technical interview or struggle to explain their systems-level choices. We wanted to build a high-fidelity "flight simulator" for software engineering: a tool that bridges the gap between coding in isolation and performing under the scrutiny of a technical interviewer.

What it does

Our platform is a dual-context interview simulator that monitors your IDE and conducts a behavioral interview simultaneously. Using a synchronized React architecture, it tracks live code changes, manages a strict interview timer, and utilizes low-latency Text-to-Speech to simulate real-world pressure.

  • Live Code Analysis: The AI doesn't just see the prompt; it watches your logic evolve in the editor.

  • Behavioral Voice Synthesis: Integrated TTS turns static text into an immersive conversational experience.

  • Session Persistence: Intelligent session handling allows for instant state wipes on navigation, ensuring candidate privacy and a "clean slate" for every practice run.

  • Automated Grading: Once the timer expires, the system provides an instant breakdown of both technical correctness and communication style.
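To make the grading output concrete, here is a minimal sketch of how the two scores might be blended into a single report once the timer expires. The interface, field names, and weights are illustrative assumptions, not MochA's actual API:

```typescript
// Hypothetical shape of the post-interview report (illustrative names).
interface SessionReport {
  technical: number;      // 0-100: correctness of the final solution
  communication: number;  // 0-100: clarity of the think-aloud narration
  overall: number;        // weighted blend of the two
}

// Sketch of a weighted blend; the 60/40 split is an assumed example.
function blendScores(technical: number, communication: number): SessionReport {
  const overall = Math.round(0.6 * technical + 0.4 * communication);
  return { technical, communication, overall };
}
```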

How we built it

Real-Time Context Synchronization: We implemented a custom Observer Pattern using React Context. This allows the LLM to maintain "mechanical sympathy" with the user's IDE, effectively watching the code evolve and providing feedback based on the user's actual logic rather than just the final output.
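Stripped of React for illustration, the core of this observer pattern might look like the sketch below; in the actual app the store would live inside a React Context provider, with the editor calling `setCode` and the AI pane subscribing. All names here are assumptions:

```typescript
// Framework-free sketch of the observer pattern described above.
type Listener = (code: string) => void;

class CodeStore {
  private code = "";
  private listeners = new Set<Listener>();

  // Register an observer; returns an unsubscribe function.
  subscribe(fn: Listener): () => void {
    this.listeners.add(fn);
    return () => { this.listeners.delete(fn); };
  }

  // The editor publishes each change; all observers are notified.
  setCode(next: string): void {
    this.code = next;
    for (const fn of this.listeners) fn(next);
  }

  getCode(): string {
    return this.code;
  }
}
```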

Low-Latency Prompt Engineering: To ensure the interviewer feels responsive, we optimized our backend to stream responses token-by-token using Groq’s inference engine.
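The consumption side of that streaming might be sketched as below, where `stream` stands in for the chunked token stream returned by Groq's inference endpoint (the function and callback names are illustrative, not the production code):

```typescript
// Consume a token stream and re-render the chat bubble as each token lands.
async function streamToUI(
  stream: AsyncIterable<string>,
  onToken: (partial: string) => void
): Promise<string> {
  let transcript = "";
  for await (const token of stream) {
    transcript += token;
    onToken(transcript); // UI updates per token, not per full response
  }
  return transcript;
}
```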

Deterministic Session Management: We engineered a robust state machine to handle the interview lifecycle. By separating the initialization logic from the chat loop, we ensured the AI maintains a consistent identity and problem-context throughout the session without "drifting" or hallucinating.
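A deterministic lifecycle like this can be sketched as a transition table: unknown events are ignored rather than guessed at, which is what prevents drift. The phase and event names below are illustrative assumptions:

```typescript
// Hypothetical interview lifecycle states and events.
type Phase = "idle" | "initializing" | "interviewing" | "grading" | "done";
type InterviewEvent = "START" | "READY" | "TIMER_EXPIRED" | "REPORT_READY";

// Each phase only accepts the events that make sense for it.
const transitions: Record<Phase, Partial<Record<InterviewEvent, Phase>>> = {
  idle: { START: "initializing" },
  initializing: { READY: "interviewing" },
  interviewing: { TIMER_EXPIRED: "grading" },
  grading: { REPORT_READY: "done" },
  done: {},
};

// Deterministic step: an event with no defined transition is a no-op.
function step(phase: Phase, event: InterviewEvent): Phase {
  return transitions[phase][event] ?? phase;
}
```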

Browser-Native Voice Integration: Instead of relying on expensive, high-latency cloud TTS, we utilized the Web Speech API for instant behavioral responses. This allows MochA to run entirely in the browser with zero additional latency for voice feedback.
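Wrapping the Web Speech API behind a small interface keeps the voice layer swappable and testable outside the browser. This is a sketch under assumed names; the `rate` value is an illustrative choice, not MochA's actual setting:

```typescript
// Minimal interface over text-to-speech so the chat loop never touches
// browser globals directly.
interface Speaker {
  speak(text: string): void;
}

// Browser implementation using the native Web Speech API.
// No-op outside a browser environment.
function browserSpeaker(): Speaker {
  const g = globalThis as any;
  return {
    speak(text: string) {
      if (!g.speechSynthesis) return;
      const utterance = new g.SpeechSynthesisUtterance(text);
      utterance.rate = 1.05; // slightly brisk, interviewer-like pacing
      g.speechSynthesis.speak(utterance);
    },
  };
}

// The chat loop depends only on the interface.
function announceQuestion(speaker: Speaker, question: string): void {
  speaker.speak(`Next question: ${question}`);
}
```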

Data Integrity & Privacy: We built a custom cleanup protocol that utilizes the React unmount lifecycle to purge sensitive interview data from localStorage. This ensures that every "MochA" session is atomic and private, leaving no trace on shared machines.
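The purge step of such a cleanup protocol can be sketched as below. The storage keys are hypothetical placeholders, and the purge is written against a `Storage`-shaped interface so it works with `localStorage` in the browser and a plain stub in tests:

```typescript
// Hypothetical session keys (illustrative, not MochA's real key names).
const SESSION_KEYS = ["mocha:transcript", "mocha:code", "mocha:timer"];

// Minimal Storage-shaped interface: localStorage satisfies it.
interface KV {
  removeItem(key: string): void;
}

// Purge all session data; called from the unmount cleanup.
function purgeSession(storage: KV): void {
  for (const key of SESSION_KEYS) storage.removeItem(key);
}

// In a React component this would run on unmount:
//   useEffect(() => () => purgeSession(localStorage), []);
```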

Challenges we ran into

  • Version Control & Parallel Workflows: With multiple teammates working across overlapping branches and shared context files, we frequently encountered complex merge conflicts. We overcame this by adopting a more modular file structure and improving our team communication to sync on breaking changes.

  • Rapid Prototyping vs. Technical Debt: Under the strict time constraints of a hackathon, our initial focus on speed led to tightly coupled code. This made scaling features mid-project difficult, forcing us to pause and refactor our React Context providers to ensure a more maintainable, decoupled architecture.

  • Environment Orchestration: Managing a complex web of dependencies and sensitive API keys for LLM and TTS services proved challenging for onboarding. We streamlined this by standardizing our .env configurations and creating a unified setup protocol for the team.

  • Steep Learning Curves: Integrating several high-level technologies (specifically Next.js 14, the Vercel AI SDK, and the native Web Speech API) required rapid self-study. We had to master the nuances of React Server Components and asynchronous streaming states while simultaneously building the core product.

Accomplishments that we're proud of

  • Seamless LLM Integration: Successfully engineered a robust pipeline between our frontend and the LLM, enabling complex, context-aware interviewing that feels natural and responsive.

  • High-Fidelity Real-Time Interaction: Overcame the technical hurdles of asynchronous streaming to build a live chat interface that handles real-time dialogue without the "lag" typically found in standard AI implementations.

  • Context-Aware IDE Integration: We are particularly proud of our live coding editor, which allows the LLM to "observe" the candidate's logic as it’s being written. This creates a unique feedback loop where the AI can provide hints or critiques based on active code, not just a final submission.

  • Gamified Growth Analytics: Developed a comprehensive user statistics system that tracks performance across multiple interviews. By visualizing progress on the user profile with a gamified approach, we’ve turned the stressful process of interview prep into a rewarding journey of skill acquisition.

What we learned

  • LLM Orchestration & Prompt Engineering: We moved beyond simple API calls to master the complexities of AI streaming. We learned how to manage asynchronous state, handle mid-stream interruptions, and engineer system prompts that maintain a consistent "interviewer" persona throughout a technical session.

  • The Art of Parallel Development: Building a multi-faceted platform in 24 hours taught us the importance of modularity. We learned how to define clear API contracts and Context boundaries early on, allowing teammates to build the IDE, the Timer, and the Chat components simultaneously without stepping on each other's toes.

  • Advanced Version Control & Git Flow: We gained significant experience in managing a high-velocity repository. From resolving complex merge conflicts under pressure to maintaining a clean commit history, we learned that disciplined branching and frequent "sync-ups" are just as important as the code itself.

What's next for MochA

Advanced Gamification & Progression

We plan to implement a Ranked Interview System where users can climb "skill tiers" (e.g., Junior, Senior, Staff) based on their consistency and performance metrics.

  • Skill Trees: Visualizing a user’s progress in specific domains like Memory Management, System Design, or Communication Clarity.

  • Streaks & Milestones: Encouraging daily discipline through a "Deep Work" streak system, rewarding users for consistent practice blocks.

High-Performance Specializations

We want to move beyond generic coding questions and offer "Deep-Dive" tracks tailored to the most competitive industries.

Realistic "Stress-Test" Environments

To truly simulate the pressure of a top-tier interview, we plan to add:

  • Dynamic Interruptions: Training the AI to interrupt mid-explanation—just like a real interviewer—to test the candidate’s ability to regain focus.

  • Live System Failure Simulations: Randomly "breaking" the user's environment or adding new constraints halfway through a problem to test adaptability and poise under fire.

  • Multimodal Feedback: Integrating sentiment analysis to give users feedback on their tone, pace, and confidence during the behavioral segments.
