Inspiration
There’s a growing need to modernize how companies evaluate talent. Traditional interviews often rely on human perception, which can unconsciously skew decisions and overlook real capability. Research shows that 42% of women have faced biased or inappropriate interview questions. Studies from Harvard and the National Bureau of Economic Research reveal that applicants with “ethnic-sounding” names receive up to 50% fewer callbacks, and candidates from lower-income or non-Ivy League schools face systemic barriers despite equal qualifications. Bias extends far beyond gender and ethnicity, spanning age, economic background, and education. The result? Missed potential and outdated hiring practices that slow companies down.
TrueView reimagines the interview process through intelligent analysis. Using AI-powered video, audio, and language insights, it identifies patterns — like interruptions, tone shifts, or linguistic biases — that impact hiring objectivity. The platform translates these signals into actionable feedback, enabling organizations to make data-driven, fair, and future-focused decisions.
By bringing transparency and behavioral intelligence into hiring, TrueView isn’t just a tool — it’s a catalyst for smarter, tech-driven talent evaluation that helps companies evolve faster and hire better.
What it does
TrueView is an AI-powered interview platform designed to detect and reduce bias in real time. It records and analyzes interviews using video, audio, and language data, identifying subtle indicators of bias such as:
- Unequal speaking time or interruptions
- Tone disparities between interviewers and candidates
- Emotionally charged or gendered language
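Signals like unequal speaking time and interruptions can be derived directly from a diarized transcript. A minimal sketch in Python, assuming hypothetical `Utterance` records with a speaker label and start/end timestamps (not TrueView's actual data model):

```python
from dataclasses import dataclass

# Hypothetical utterance record: speaker label plus start/end time in seconds.
@dataclass
class Utterance:
    speaker: str
    start: float
    end: float

def speaking_metrics(utterances):
    """Compute per-speaker talk time and a simple interruption count.

    An "interruption" is approximated by the crude heuristic of one speaker
    starting before the previous speaker has finished.
    """
    talk_time = {}
    interruptions = {}
    prev = None
    for u in sorted(utterances, key=lambda u: u.start):
        talk_time[u.speaker] = talk_time.get(u.speaker, 0.0) + (u.end - u.start)
        if prev is not None and u.speaker != prev.speaker and u.start < prev.end:
            interruptions[u.speaker] = interruptions.get(u.speaker, 0) + 1
        prev = u
    return talk_time, interruptions
```

A real pipeline would feed these metrics, alongside the language analysis, into the post-interview report.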
After each interview, TrueView generates a comprehensive bias and performance report that highlights potential red flags and provides actionable insights for interviewers.
TrueView displays customized insights to each stakeholder:
- HR sees both performance and bias analysis.
- Interviewers see their feedback evaluated for tone and fairness.
- Candidates receive a concise summary of their strengths and growth areas.
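The role-specific views above boil down to filtering one full report per stakeholder. A minimal sketch, where the report schema and the `report_for` helper are illustrative assumptions, not the actual TrueView code:

```python
# Hypothetical full report; field names are illustrative, not TrueView's schema.
FULL_REPORT = {
    "performance": {"score": 82, "strengths": ["clear answers"], "growth": ["pacing"]},
    "bias": {"interruptions": 3, "tone_disparity": "moderate"},
    "interviewer_feedback": {"tone": "mostly neutral", "fairness_notes": ["avoid leading questions"]},
}

def report_for(role, report):
    """Return only the slice of the report each stakeholder should see."""
    if role == "hr":
        # HR sees both performance and bias analysis.
        return {"performance": report["performance"], "bias": report["bias"]}
    if role == "interviewer":
        # Interviewers see their own feedback evaluated for tone and fairness.
        return {"feedback": report["interviewer_feedback"]}
    if role == "candidate":
        # Candidates get a concise summary of strengths and growth areas.
        return {"strengths": report["performance"]["strengths"],
                "growth": report["performance"]["growth"]}
    raise ValueError(f"unknown role: {role}")
```

Filtering server-side like this (rather than only hiding UI panels) keeps each stakeholder from ever receiving data they shouldn't see.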
Tech Stack Overview
- Frontend: React + TypeScript + Tailwind + ShadCN UI
- Backend: Flask (Python) REST API, meetings server hosted on Render
- AI Models:
- Whisper → Audio transcription
- Gemini 2.5 Flash → Bias, tone, and fairness analysis
- Video Processing: MediaRecorder API + FFmpeg
- Storage: Local JSON + mock HR data (extendable to Firebase/PostgreSQL)
Workflow
- Candidate and interviewer join the same interview room via WebRTC.
- The interview is recorded locally using the MediaRecorder API.
- After the interview ends, the video is uploaded to the Flask backend.
- Whisper transcribes the audio.
- The transcript is analyzed by Gemini, which returns structured bias and tone metrics.
- The frontend displays personalized AI reports for HR, interviewer, and candidate.
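The post-interview steps of this workflow can be sketched as a small pipeline. The `transcribe` and `analyze` callables stand in for the real Whisper and Gemini calls, which need API access; their signatures here are assumptions:

```python
def analyze_interview(video_path, transcribe, analyze):
    """Pipeline sketch: transcribe the uploaded recording, then run bias analysis.

    `transcribe` and `analyze` are injected so the real Whisper and Gemini
    integrations can be swapped in (or stubbed out in tests); both signatures
    are hypothetical.
    """
    transcript = transcribe(video_path)   # e.g. Whisper on the extracted audio
    metrics = analyze(transcript)         # e.g. Gemini bias/tone analysis
    return {"transcript": transcript, "metrics": metrics}
```

Keeping the two model calls behind plain callables also makes it easy to retry or replace either stage independently.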
Challenges we ran into
- Audio Conversion Issues: FFmpeg occasionally failed when processing large .webm files, requiring multiple encoding retries.
- Prompt Engineering for Gemini: Getting balanced, JSON-structured AI responses demanded iterative tuning of prompts for fairness detection.
- Role-Based Access: Ensuring that HR, interviewers, and candidates only saw their respective data required careful local state and localStorage handling.
- Bias Quantification: Translating subjective fairness concepts (tone, interruptions) into numerical bias scores was a major design challenge.
- Frontend Integration: Seamlessly merging AI results into ShadCN components while keeping the UX simple and accessible.
- Deployment on Render: Integrating the frontend with the backend on Render introduced CORS and routing complications; we configured proxy routes, API endpoints, and environment variables to keep the services communicating smoothly.
- Environment Alignment: Debugging build failures and aligning the Node.js and Python environments for production took extensive trial and error, but resulted in a stable multi-service deployment pipeline.
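For the prompt-engineering challenge above, one common tactic is to parse the model's reply defensively, since LLMs often wrap JSON in Markdown fences or add surrounding prose. A minimal sketch; the required key names are illustrative, not our actual schema:

```python
import json
import re

def parse_llm_json(raw, required_keys=("bias_score", "tone")):
    """Extract a JSON object from an LLM reply that may wrap it in ```json fences.

    `required_keys` names are placeholders; the real prompt asks the model
    for a fixed schema and validates against that.
    """
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in model output")
    data = json.loads(match.group(0))
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data
```

Validating the keys up front means a malformed reply fails loudly instead of producing a half-empty report.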
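For the bias-quantification challenge above, the general shape is a weighted combination of normalized signals. A sketch with made-up weights and saturation thresholds, not TrueView's calibrated values:

```python
def bias_score(talk_share, interruptions_per_10min, tone_disparity):
    """Collapse a few fairness signals into one 0-100 score (higher = more bias).

    All weights and thresholds below are illustrative guesses:
    - talk_share: interviewer's share of total speaking time (0..1)
    - interruptions_per_10min: interviewer interruptions per 10 minutes
    - tone_disparity: 0..1 measure of sentiment imbalance toward the candidate
    """
    imbalance = max(0.0, talk_share - 0.5) * 2            # 0 when balanced, 1 when one-sided
    interrupt_term = min(interruptions_per_10min / 5.0, 1.0)  # saturate at 5 per 10 min
    score = 100 * (0.4 * imbalance + 0.3 * interrupt_term + 0.3 * tone_disparity)
    return round(score, 1)
```

Normalizing each signal to 0..1 before weighting keeps any single metric from dominating the score.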
Accomplishments we’re proud of
- Built a complete AI-driven bias detection system in under 10 hours.
- Achieved real-time transcription + AI fairness scoring through Gemini’s API.
- Designed an HR dashboard that visualizes performance and bias trends.
- Created distinct views for candidates, interviewers, and HR for maximum transparency.
- Delivered an intuitive interface with live camera feed, chat, and note-taking tools.
What’s next
- Scalable Cloud Backend: Migrate from local Flask to Firebase or AWS for enterprise-grade deployment.
- Zoom / Teams Integration: Automatically analyze live interviews through plugins.
- Expanded AI Metrics: Detect microaggressions, emotional imbalance, and diversity sentiment.
- Privacy Mode: End-to-end encrypted anonymization for all recordings.
- Fairness Analytics Dashboard: Track diversity, bias trends, and progress over time.