This is a Next.js landing page for Intervue.org with integrated AI interview functionality using HeyGen's streaming avatar technology.
- Modern, responsive design with gradient backgrounds
- Hero section with call-to-action buttons
- Feature sections highlighting AI interview capabilities
- Sign-up modal for user registration
- Real-time Avatar Streaming: Uses HeyGen's streaming avatar technology
- Speech Recognition: Built-in browser speech-to-text functionality
- AI-Powered Responses: OpenAI GPT integration for intelligent interview questions
- Emotion Detection: Face-api.js integration for facial expression analysis
- Webcam Integration: Real-time video capture for user interaction
- Frontend: Next.js 15, React 19, TypeScript
- Styling: Tailwind CSS, Radix UI components
- AI Avatar: HeyGen Streaming Avatar SDK
- AI Responses: OpenAI GPT-3.5/4 API
- Speech Recognition: Web Speech API
- Emotion Detection: Face-api.js with TensorFlow.js
1. **Install Dependencies**

   ```bash
   npm install
   ```

2. **Environment Variables**: create a `.env.local` file with:

   ```
   OPENAI_API_KEY=your_openai_api_key_here
   HEYGEN_API_KEY=your_heygen_api_key_here
   ```

3. **Run Development Server**

   ```bash
   npm run dev
   ```

4. **Access the Application**
   - Landing page: http://localhost:3000
   - Interview page: http://localhost:3000/interview
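The environment file from the setup steps can also be created from the shell; the key values shown are placeholders, not real credentials:

```shell
# Create .env.local with placeholder keys (replace with your real keys)
cat > .env.local <<'EOF'
OPENAI_API_KEY=your_openai_api_key_here
HEYGEN_API_KEY=your_heygen_api_key_here
EOF
```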
- Click "Start Free Trial" on the landing page
- Fill in your name and role information
- Click "Enter Interview" to proceed
- Grant camera and microphone permissions
- Click "Start Interview" to begin the AI interview session
- Voice Interaction: Speak naturally with the AI interviewer
- Text Chat: Type responses in the chat drawer
- Real-time Feedback: See your facial expressions analyzed
- Session Control: Start/stop interview sessions as needed
```
├── app/
│   ├── interview/
│   │   └── page.tsx            # Interview page component
│   └── page.tsx                # Landing page
├── components/
│   ├── ai-interviewer.tsx      # Main interview interface
│   ├── signup-modal.tsx        # User registration modal
│   └── ui/                     # Reusable UI components
├── hooks/
│   └── use-ai-interview.ts     # Interview logic hook
├── lib/
│   └── interview/
│       ├── openai-assistant.ts # OpenAI integration
│       └── av-helper.ts        # Audio/Video helper
└── public/
    └── worker-emotion.js       # Emotion detection worker
```
- **HeyGen Streaming Avatar**
  - Uses HeyGen's streaming avatar API for real-time avatar generation
  - Supports multiple avatar personalities and languages
  - Handles token-based authentication
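The token-based authentication step can be sketched as a small server-side helper. The endpoint path and response shape below are assumed from HeyGen's streaming token API, and the injectable `fetchFn` parameter is an addition for testability rather than part of this repo:

```typescript
// Sketch of fetching a short-lived streaming token server-side.
// Endpoint and response shape assumed from HeyGen's streaming token API;
// `fetchFn` is injectable so the logic can be exercised without network access.
async function fetchHeyGenToken(
  apiKey: string,
  fetchFn: typeof fetch = fetch,
): Promise<string> {
  const res = await fetchFn("https://api.heygen.com/v1/streaming.create_token", {
    method: "POST",
    headers: { "x-api-key": apiKey }, // the HeyGen API key stays on the server
  });
  if (!res.ok) {
    throw new Error(`HeyGen token request failed: ${res.status}`);
  }
  const body = await res.json();
  return body.data.token; // handed to the StreamingAvatar client in the browser
}
```

Keeping this call in a server route means the HeyGen API key is never shipped to the client, which lines up with the security notes below.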
- **OpenAI Assistant**
  - Dual-mode operation: fast stateless responses and stateful conversations
  - Customized for interview scenarios
  - Supports role-specific interview questions
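Role-specific questioning typically hinges on the system prompt sent to the model. A minimal sketch, where the helper name, interface, and wording are illustrative rather than the repo's actual prompt:

```typescript
// Hypothetical helper: assembles a role-specific system prompt for the
// OpenAI chat call. Names and wording are illustrative only.
interface CandidateInfo {
  name: string;
  role: string;
}

function buildInterviewPrompt({ name, role }: CandidateInfo): string {
  return [
    `You are a professional interviewer for a ${role} position.`,
    `The candidate's name is ${name}.`,
    "Ask one concise, role-specific question at a time,",
    "then wait for the candidate's answer before continuing.",
  ].join(" ");
}
```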
- **Speech Recognition**
  - Browser-native Web Speech API
  - Continuous listening with echo cancellation
  - Automatic pause/resume during avatar speech
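A minimal sketch of the browser-side wiring, assuming the standard (and `webkit`-prefixed) `SpeechRecognition` constructor; it returns `null` where the API is unavailable so callers can degrade gracefully:

```typescript
// Browser-only sketch: starts continuous speech recognition if supported.
// `onTranscript` receives each final transcript chunk.
type SpeechRecognitionLike = {
  continuous: boolean;
  interimResults: boolean;
  onresult: (event: any) => void;
  start(): void;
  stop(): void;
};

function startListening(
  onTranscript: (text: string) => void,
): SpeechRecognitionLike | null {
  const Ctor =
    (globalThis as any).SpeechRecognition ??
    (globalThis as any).webkitSpeechRecognition;
  if (!Ctor) return null; // no Web Speech API in this environment

  const recognition: SpeechRecognitionLike = new Ctor();
  recognition.continuous = true;      // keep listening across utterances
  recognition.interimResults = false; // deliver only final results
  recognition.onresult = (event: any) => {
    const last = event.results[event.results.length - 1];
    if (last.isFinal) onTranscript(last[0].transcript);
  };
  recognition.start();
  return recognition;
}
```

Pausing while the avatar speaks would then be a matter of calling `stop()` before avatar playback and `start()` again afterwards.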
- Required: Modern browsers with Web Speech API support
- Recommended: Chrome, Firefox, Safari (latest versions)
- Features: Camera access, microphone access, WebGL support
- The emotion detection worker runs in a separate thread for performance
- All API keys are currently hardcoded for demo purposes
- Face-api.js models need to be downloaded to `/public/models/` for emotion detection
- The system degrades gracefully when features aren't available
- API keys should be moved to environment variables in production
- Implement proper user authentication and session management
- Add rate limiting for API calls
- Consider data privacy for video/audio streams
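For the rate-limiting item above, a sliding-window limiter can be as small as the sketch below; the limit, window, and per-key scheme are assumptions, and production code would usually back this with a shared store such as Redis rather than process memory:

```typescript
// Minimal in-memory sliding-window rate limiter (illustrative only).
// Returns a function that answers "is this call allowed for this key?".
function createRateLimiter(limit: number, windowMs: number) {
  const hits = new Map<string, number[]>();
  return (key: string, now: number = Date.now()): boolean => {
    // Keep only timestamps still inside the window.
    const recent = (hits.get(key) ?? []).filter((t) => now - t < windowMs);
    if (recent.length >= limit) {
      hits.set(key, recent);
      return false; // over the limit for this window
    }
    recent.push(now);
    hits.set(key, recent);
    return true;
  };
}
```

In a Next.js route handler the key would typically be the client IP or user ID, checked before the OpenAI or HeyGen call is made.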
- User authentication and session management
- Interview history and analytics
- Custom avatar selection
- Multi-language support
- Interview scoring and feedback
- Integration with job application systems