UNL is an AI-powered mock interview coach designed to help students and early-career candidates practice interview responses and receive actionable feedback on both what they say and how they present themselves.
This project combines a React + Vite frontend with a FastAPI backend to deliver:
- Prompt generation from either a built-in prompt bank or pasted job descriptions.
- Real-time posture and eye-contact tracking during interview responses.
- Audio transcription + speaking style analysis.
- LLM-generated interview feedback and scoring.
- A final results dashboard and downloadable report.
Demo video: https://youtu.be/a-7Rl7wk_L0
UNL began as a response to a common frustration in today’s job market: candidates often never hear back or receive meaningful interview feedback. UNL aims to close that gap by simulating interview conditions and returning practical coaching that users can apply immediately.
- User enters interview preferences (question type + difficulty) and can optionally paste a job ad.
- System generates an interview prompt.
- User gets a thinking window, then a timed response window.
- Frontend tracks posture and eye contact while recording audio.
- Backend analyzes:
  - transcript quality and content,
  - vocal delivery characteristics,
  - posture/eye-contact timeline data.
- Results are displayed in a structured dashboard with strengths, improvement areas, and a next-step action plan.
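The posture/eye-contact timeline step above can be reduced to headline numbers for the dashboard. This is a minimal sketch; the sample format and field names are assumptions, not the project's actual data model:

```python
from dataclasses import dataclass

@dataclass
class FrameSample:
    t: float            # seconds since recording start
    eye_contact: bool   # gaze roughly toward the camera this frame
    upright: bool       # posture within the acceptable range

def summarize_timeline(samples: list[FrameSample]) -> dict:
    """Reduce per-frame tracking samples to headline percentages."""
    if not samples:
        return {"eye_contact_pct": 0.0, "upright_pct": 0.0}
    n = len(samples)
    return {
        "eye_contact_pct": round(100 * sum(s.eye_contact for s in samples) / n, 1),
        "upright_pct": round(100 * sum(s.upright for s in samples) / n, 1),
    }

# 8 samples at 0.5 s intervals; every 4th frame loses eye contact
samples = [FrameSample(t=i * 0.5, eye_contact=i % 4 != 0, upright=True) for i in range(8)]
print(summarize_timeline(samples))  # → {'eye_contact_pct': 75.0, 'upright_pct': 100.0}
```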
- React 19 + Vite
- MediaPipe Tasks Vision
- Recharts
- CSS
- FastAPI + Uvicorn
- Groq/OpenAI-compatible SDK (for LLM + transcription)
- Librosa + NumPy + pydub (audio processing)
- ReportLab + Matplotlib (PDF/report chart generation)
- JavaScript / Python
- Git + GitHub
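Since Librosa + NumPy handle the audio side, one vocal-delivery metric (pause ratio) could be sketched with NumPy alone. The frame length and silence threshold here are illustrative assumptions, not the project's tuned values:

```python
import numpy as np

def pause_ratio(signal: np.ndarray, sr: int, frame_ms: int = 30, threshold: float = 0.02) -> float:
    """Fraction of fixed-size frames whose RMS energy falls below a silence threshold."""
    frame_len = sr * frame_ms // 1000
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames**2, axis=1))
    return float(np.mean(rms < threshold))

# Synthetic clip: 1 s of "speech" (a quiet tone) followed by 1 s of silence
sr = 16_000
t = np.linspace(0, 1, sr, endpoint=False)
speech = 0.1 * np.sin(2 * np.pi * 220 * t)
silence = np.zeros(sr)
print(pause_ratio(np.concatenate([speech, silence]), sr))  # roughly 0.5
```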
```
Interview-Bot/
├── backend/    # FastAPI API, prompt services, analysis pipeline
├── frontend/   # React app, interview flow, MediaPipe tracking, results UI
└── README.md   # Project overview (this file)
```
For folder-specific setup and architecture, see:
```
cd backend
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -r requirements.txt
uvicorn app.main:app --reload
```

Backend default: http://127.0.0.1:8000
```
cd frontend
npm install
npm run dev
```

Frontend default: http://127.0.0.1:5173
- Open the frontend URL.
- Select interview settings.
- Start the interview and answer the question.
- Review the generated feedback on the results page.
- Keeping frontend and backend communication stable throughout the interview lifecycle.
- Uploading and processing recorded audio reliably.
- Tuning prompts so model output remained structured and useful.
- Iterating on CSS/UI to keep the experience clean and intuitive.
- Maintaining a consistent JSON response format across services.
- Built a fully functional end-to-end prototype under tight time constraints.
- Integrated live vision analysis, audio transcription, and AI feedback in one workflow.
- Produced a practical coaching tool that users can repeatedly train with.
- Better collaboration workflows with Git/GitHub.
- Practical audio ingestion and analysis for product use-cases.
- How to integrate LLM-driven feedback into a full-stack app.
Planned improvements include:
- More polished production UX.
- Accessibility enhancements (such as text-to-speech support).
- Historical tracking of interview sessions and progress over time.
- Stronger deployment/ops readiness for public usage.