Inspiration

Many candidates are strong at coding challenges but struggle with real-world skills such as reading and debugging existing code or working effectively within a team. Traditional hiring processes often rely too heavily on CVs and algorithm problems, which do not reflect how engineers actually work on the job.

We were inspired by the idea that being a good developer is more than writing code from scratch. In real work environments, developers spend much of their time understanding unfamiliar codebases, identifying and fixing bugs, and communicating clearly with teammates.

This project aims to:

  • Evaluate debugging and code-reading skills, not just coding speed
  • Simulate real workplace problem-solving scenarios
  • Help HR teams assess candidates beyond their resumes
  • Avoid limiting strong candidates due to background, credentials, or CV formatting

By focusing on practical skills and interactive challenges, we hope to create a fairer and more realistic way to identify candidates who can truly contribute to a team.

What it does

Based on HR-defined requirements, the platform automatically generates interview challenges that target specific skills such as debugging, code reading, logic, testing, and collaboration-oriented problem solving.

An AI-powered engine dynamically adapts these problems and continuously tracks candidate performance, including:

  • How candidates approach unfamiliar code
  • Debugging efficiency and problem-solving strategies
  • Decision-making patterns under constraints
  • Progress and consistency across challenges

This allows HR teams to evaluate candidates using objective, skill-based signals rather than relying solely on CVs or static test scores, while candidates are assessed in scenarios that closely resemble real-world work.
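As an illustration of the adaptive behavior described above, the sketch below picks the next challenge for the skill where a candidate has scored weakest so far. The skill names, score scale, and selection rule are assumptions for illustration, not the platform's actual logic.

```python
# Illustrative adaptive-selection sketch; skill names, the 0..1 score scale,
# and the "target the weakest skill" rule are assumptions, not the real engine.
def next_challenge(history, pool):
    """Pick the next challenge targeting the candidate's weakest skill so far.

    history: {skill: [scores in 0..1]} from completed quests
    pool:    {skill: [available challenge ids]}
    """
    def avg(scores):
        return sum(scores) / len(scores) if scores else 0.0

    weakest = min(pool, key=lambda skill: avg(history.get(skill, [])))
    return pool[weakest][0]

history = {"debugging": [0.9, 0.8], "code_reading": [0.4]}
pool = {"debugging": ["dbg-3"], "code_reading": ["read-2"]}
chosen = next_challenge(history, pool)
```

A real engine would weigh more signals (time taken, hesitation, iteration count), but the shape is the same: performance so far feeds back into what the candidate sees next.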

Role of the AI Helper

The AI Helper acts as an intelligent observer and evaluator throughout the interview process. Rather than simply checking whether an answer is correct, it focuses on how the candidate thinks and works.

Specifically, the AI Helper:

  • Analyzes the quality and clarity of candidate responses
  • Tracks problem-solving strategies, debugging steps, and reasoning patterns
  • Monitors progress, hesitation points, and iteration behavior
  • Adapts follow-up questions based on observed strengths and weaknesses
  • Generates a comprehensive performance report for HR and recruiters

This report highlights practical skills such as code comprehension, debugging ability, and decision-making—helping hiring teams assess whether a candidate can succeed in a real work environment, not just pass a test.
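The observation-and-report loop above can be sketched as a simple per-candidate event log that is later aggregated into the signals HR sees. The class, event names, and aggregation here are hypothetical stand-ins, not the actual AI Helper implementation.

```python
# Hypothetical sketch of the AI Helper's observation log; the event kinds
# ("run", "hesitation", ...) and the counting-based report are assumptions.
from dataclasses import dataclass, field
import time

@dataclass
class CandidateSession:
    candidate_id: str
    events: list = field(default_factory=list)

    def record(self, kind: str, detail: str = "") -> None:
        """Log one observation, e.g. an edit, a run, or a hesitation point."""
        self.events.append({"kind": kind, "detail": detail, "t": time.time()})

    def report(self) -> dict:
        """Aggregate raw events into the summary signals surfaced to HR."""
        counts = {}
        for event in self.events:
            counts[event["kind"]] = counts.get(event["kind"], 0) + 1
        return {"candidate": self.candidate_id, "signals": counts}

session = CandidateSession("cand-1")
session.record("run")
session.record("hesitation", "paused 40s before first edit")
session.record("run")
summary = session.report()
```

In practice the report would also capture ordering and timing (where hesitations happened, how iterations converged), not just counts.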

How we built it

  1. HR Account & Token Setup

    • HR teams create an account on the platform and define what skills they want to evaluate (e.g. debugging, code reading, logic, collaboration).
    • Based on these requirements, the platform generates a custom evaluation token.
    • This token encodes the challenge configuration and becomes the entry point for candidates.
    • HR can then securely share the token or challenge link with applicants.

  2. Candidate Flow (Challenges as Quests)

    • Candidates access the platform using the HR-provided token.
    • The token unlocks a sequence of interactive challenges (quests) tailored to the required skills.
    • Candidates must explore, reason, debug, and adapt to progress, mirroring real-world work scenarios.
    • Each step, decision, and attempt is tracked in real time by the AI Helper.

  3. HR Dashboard & Performance Tracking

    • Monitor candidate status (not started, in progress, completed).
    • See where candidates dropped off or struggled during the challenges.
    • Review completion paths and reasoning patterns.
    • Analyze AI-assisted insights into problem-solving behavior, debugging approach, and answer quality.
    • Access a structured performance report for each candidate, supporting fair, skill-based hiring decisions.
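The token step above can be sketched as a signed payload that carries the HR-defined configuration. This is a minimal stand-in using only the Python standard library; the real platform's token format, key handling, and fields are not shown here and everything below is illustrative.

```python
# Minimal sketch of an evaluation token: the HR-defined challenge
# configuration is serialized and HMAC-signed so candidates can't tamper
# with it. Field names and the token format are assumptions.
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-real-secret"  # illustrative only

def create_evaluation_token(hr_id: str, skills: list) -> str:
    """Encode the HR-defined challenge configuration into a signed token."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"hr_id": hr_id, "skills": skills}).encode()
    ).decode()
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{signature}"

def load_evaluation_config(token: str) -> dict:
    """Verify the signature, then decode the challenge configuration."""
    payload, signature = token.rsplit(".", 1)
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise ValueError("invalid or tampered token")
    return json.loads(base64.urlsafe_b64decode(payload))

token = create_evaluation_token("hr-42", ["debugging", "code-reading"])
config = load_evaluation_config(token)
```

A signed (rather than merely encoded) token is what lets the candidate-facing link be shared freely while staying correctly mapped to the HR-defined evaluation.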

Why This Over Traditional Platforms?

Compared to tools like HackerRank:

  • Less focus on syntax and keystrokes
  • More focus on reasoning and exploration

Pros

  • More engaging
  • Less work for hiring managers
  • Lower stress for candidates
  • More realistic simulation of real-world problem solving

Challenges we ran into

  • Designing challenges that are tricky but fair, encouraging reasoning rather than guesswork
  • Avoiding bias toward specific technical backgrounds or prior interview experience
  • Ensuring AI assistance enhances evaluation without replacing candidate thinking
  • Making the experience intuitive without giving explicit instructions
  • Integrating the candidate invitation and delivery system with Gumloop while keeping tokens secure and correctly mapped to HR-defined evaluations
  • Ensuring reliable delivery and tracking of challenge links sent to candidates
  • Synchronizing candidate progress and results with the HR dashboard in real time

Accomplishments that we're proud of

  • Delivering a complete, working concept under tight time constraints, from idea to functional prototype.
  • Turning an abstract idea (“measure how candidates think”) into a concrete, testable experience.
  • Designing a non-traditional assessment flow that replaces standard interviews with interactive challenges.
  • Integrating AI in a way that supports evaluation rather than giving direct answers.
  • Building an experience that is engaging for candidates while reducing workload for hiring managers.
  • Iterating quickly based on feedback and constraints instead of aiming for perfection.

Despite limited time and resources, we prioritized core functionality and user experience, proving the concept is viable and extensible.

What we learned

  • How to design and connect a frontend and backend into a cohesive, full-stack application
  • Building interactive challenge flows using React on the frontend and Flask on the backend
  • Integrating and leveraging AI APIs to evaluate answer quality and reasoning, not just correctness
  • Using Gumloop to automate candidate invitation and token-based sending workflows
  • Designing an HR dashboard to track candidate progress, status, and performance results
  • Managing state across tokens, sessions, and users in a secure and scalable way
  • Balancing technical complexity with a smooth and intuitive user experience
  • Collaborating efficiently under hackathon time constraints

What's next for WhyDidItCrash.com

  • Expanding the platform with more challenges that evaluate a wider range of skills, including system design, collaboration, communication, and debugging
  • Opening the platform to more domains beyond software engineering, such as data, security, product, and technical support roles
  • Using AI to generate larger and more diverse question sets, tailored to specific roles and seniority levels
  • Enhancing AI evaluation to deliver deeper insights into candidate reasoning, adaptability, and learning patterns
  • Integrating seamlessly with existing hiring and ATS platforms, enabling easy adoption by HR teams
  • Introducing multi-stage and team-based challenges to better simulate real on-the-job scenarios

Vision

Instead of asking candidates to prove themselves in artificial interviews, this platform invites them to figure things out naturally. Those who succeed don’t just pass a test — they demonstrate the skills companies are actually looking for.
