Inspiration
The inspiration behind Interview.AI emerged from our collective experiences with the challenges of preparing for job interviews. It can be difficult to find someone to practice interviews with, and practice partners often do not fully understand the structure of an interview from the interviewer's perspective. By leveraging cutting-edge natural language processing and machine learning technologies, we provide a mock interview platform equipped with a 2D avatar interviewer that is always ready to support your preparation.
What it does
Interview.AI is an AI-powered mock interview platform designed to enhance job preparation through personalized and interactive features. It generates custom questions based on the company, job description, interview type, and candidate's resume, ensuring relevant practice sessions. The platform includes an audio and video display of the AI interviewer, creating a realistic interview environment. At the end of each session, it provides an overall summary and tailored advice, analyzing both the content and the emotional behavior of the user's responses, then giving feedback on strengths and areas for improvement. Additionally, Interview.AI offers a detailed interview transcript for review and continuous learning, empowering candidates to build confidence and improve their interview skills.
How we built it
Frontend/Backend: Full-stack web application using React (TypeScript), Django with Django REST Framework, and PyMongo. AI Pipeline: The pipeline initiates an LLM agent using GPT-4o as the mock interviewer and feeds it the interviewee's information. During the interview loop, the agent will:
- Generate interview questions using GPT-4o.
- Generate audio from text using OpenAI TTS.
- Generate video using Wav2Lip on an Intel Developer Cloud instance. After the user responds, the agent will:
- Convert the response audio to text using the Groq API.
- Feed the text to GPT-4o for the next question.
- Analyze the emotion of the response through the Hume API. At the end of the practice session, the agent provides overall interview feedback on the interviewee's strengths and weaknesses, based on the emotional analysis and the question-answer history, using another GPT-4o request.
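The loop above can be sketched as follows. This is a minimal, self-contained illustration: the function bodies are stand-ins for the real GPT-4o, OpenAI TTS, Wav2Lip, Groq, and Hume API calls, and all names and return values here are hypothetical.

```python
# Hypothetical sketch of the interview loop; each helper is a stand-in
# for a real external API call in the actual pipeline.

def generate_question(history):
    """Stand-in for a GPT-4o chat-completion call."""
    return f"Question {len(history) + 1}: tell me about a recent project."

def transcribe(audio):
    """Stand-in for Groq speech-to-text."""
    return audio.get("transcript", "")

def analyze_emotion(audio):
    """Stand-in for the Hume emotion-analysis API."""
    return {"calmness": 0.8, "confidence": 0.7}

def run_interview(responses, num_questions=3):
    history, emotions = [], []
    for audio in responses[:num_questions]:
        question = generate_question(history)      # 1. GPT-4o question
        # (real pipeline: TTS -> Wav2Lip video is shown to the user here)
        answer = transcribe(audio)                 # 2. Groq speech-to-text
        emotions.append(analyze_emotion(audio))    # 3. Hume emotion analysis
        history.append({"question": question, "answer": answer})
    # 4. a final GPT-4o request would summarize history + emotions
    return history, emotions
```

In the real system, steps 1-3 repeat until the session ends, and the accumulated history and emotion scores are sent in one last GPT-4o request for the feedback summary.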
Challenges we ran into
- Deploying the Speech2Face model. We first tried many ways to deploy the model to the cloud and expose an API the backend could call. Many model-deployment interfaces either had huge latency or did not support API calls. Eventually, we settled on an Intel Developer Cloud instance, which solved our problem and provided a low-latency, easy-to-integrate solution. Speech2Face's runtime was originally too high to create near-instant video replies without boosting performance. To solve this, we used Intel's optimization tooling and OpenVINO to optimize the model, which gave us a 20% speed improvement.
- Prompt engineering. Adjusting the prompts to get suitable outputs in the formats we wanted was a process of trial and error. We ended up using strategies such as assigning GPT-4o a role and, in some cases, asking it to keep its response under a word limit.
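The prompting strategies above can be illustrated with a short sketch. The wording and parameter names here are hypothetical, not our production prompts; the point is the general shape: assign a role in the system message and state an explicit length constraint.

```python
# Illustrative prompt builder: role assignment + word-limit constraint.
# Company name, job description, and limit below are made-up examples.

def build_interviewer_prompt(company, job_description, word_limit=60):
    system = (
        f"You are a professional interviewer at {company}. "           # role
        f"Ask one question at a time, based on this job description: "
        f"{job_description}. "
        f"Keep each question under {word_limit} words."                # limit
    )
    return [{"role": "system", "content": system}]

messages = build_interviewer_prompt("Acme Corp",
                                    "Backend engineer, Python/Django")
```

The returned `messages` list is in the chat-message format that GPT-4o-style chat-completion APIs accept, so it can be extended with the running question-answer history on each turn.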
Accomplishments that we're proud of
- We were able to combine the APIs from many of the sponsors to achieve the application's goals.
- We were able to figure out how to add our model to a cloud server and could directly call it from our backend.
- We used the amazing model optimization algorithms from Intel to boost our model by 20%.
What we learned
- Intel technologies: Developer Cloud instances, OpenVINO
- Django REST Framework + MongoDB integration
- Many APIs and frameworks from Groq, Hume, and OpenAI; FastAPI
- How to host a model in the cloud and call its inference function using an SSH API.
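The last point can be sketched as a thin wrapper that shells out over SSH to run inference on the cloud instance. The host, script path, and flags below are hypothetical placeholders, not our actual deployment details.

```python
# Rough sketch of invoking remote model inference over SSH.
# Host, script path, and flag names are made-up for illustration.
import shlex

def build_remote_inference_cmd(host, audio_path, face_path,
                               script="/opt/wav2lip/inference.py"):
    # Quote the remote arguments so filenames with spaces survive the shell.
    remote = (
        f"python3 {shlex.quote(script)} "
        f"--audio {shlex.quote(audio_path)} "
        f"--face {shlex.quote(face_path)}"
    )
    return ["ssh", host, remote]

cmd = build_remote_inference_cmd("user@idc-instance", "reply.wav", "avatar.png")
# subprocess.run(cmd, capture_output=True) would then execute it remotely
```

In practice the backend would run this command with `subprocess` and read the path of the generated video from the remote process's output.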
What's next for Interview.AI
- We will also support video input so the Hume API can give the user feedback on facial expressions and body language.
- We will also support interviewer training, since most companies currently do not provide a great way for interviewers to practice interviewing candidates beyond participating in shadow interviews and attending workshops.
- We can improve the efficiency and quality of the video output to make the output video more realistic.
Built With
- django
- groq
- hume
- intel-xeon
- mongodb
- openai
- python
- pytorch
- typescript