Inspiration
This project was inspired by the idea of offering personalized music experiences through advanced technology. By combining facial emotion recognition with AI-driven song recommendations, the goal was to create a tailored playlist experience for each user. It uses Gemini AI to generate suggestions and integrates with Spotify so that recommendations reflect both mood and listening history.
What It Does
The project merges facial emotion analysis with song recommendations:
- Facial Emotion Analysis: Uses the DeepFace library to analyze emotions from an uploaded image.
- Song Recommendations: The Gemini AI API generates song suggestions based on the user's mood and Spotify listening history.
- Spotify Integration: Fetches the user's recent listening history via the Spotify API for accurate recommendations.
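The emotion-analysis step above can be sketched as follows. This is a minimal sketch assuming the DeepFace package is installed; the helper names are ours, not the project's:

```python
def dominant_emotion(scores: dict) -> str:
    """Pick the highest-scoring emotion label from a {label: score} dict."""
    return max(scores, key=scores.get)

def analyze_image(image_path: str) -> str:
    """Run DeepFace emotion analysis on an uploaded image.

    DeepFace (pip install deepface) returns a list with one result dict
    per detected face; each dict includes a 'dominant_emotion' key and
    an 'emotion' dict of per-label scores.
    """
    from deepface import DeepFace  # assumed installed; imported lazily

    results = DeepFace.analyze(img_path=image_path, actions=["emotion"])
    return results[0]["dominant_emotion"]
```

The pure `dominant_emotion` helper mirrors what DeepFace computes internally and is handy if you want to apply your own thresholding to the raw scores.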
How We Built It
- Emotion Analysis: Implemented with DeepFace, analyzing images to determine the dominant emotion.
- Spotify API: Handled authentication with the app's client ID and secret to obtain an access token, then fetched listening history.
- Gemini AI Integration: Configured the API to suggest songs based on mood and listening data.
- Error Handling: Added retry logic for API requests to ensure reliability.
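The Spotify authentication step described above might look like the sketch below. The endpoint and Basic-auth encoding are from Spotify's client-credentials flow; the function names are ours. (Note that endpoints returning a specific user's listening history require a user-authorized token rather than an app-only token.)

```python
import base64

SPOTIFY_TOKEN_URL = "https://accounts.spotify.com/api/token"

def basic_auth_header(client_id: str, client_secret: str) -> dict:
    """Spotify's token endpoint expects 'client_id:client_secret' base64-encoded."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {"Authorization": f"Basic {creds}"}

def fetch_app_token(client_id: str, client_secret: str) -> str:
    """Exchange app credentials for a bearer token (client-credentials flow)."""
    import requests  # assumed installed

    resp = requests.post(
        SPOTIFY_TOKEN_URL,
        headers=basic_auth_header(client_id, client_secret),
        data={"grant_type": "client_credentials"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```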
Challenges
- API Integration: Difficulties authenticating with the Spotify and Gemini AI APIs.
- Error Handling: Transient API failures required retry logic to keep the system reliable.
- Token Management: Managing and refreshing API tokens proved tricky.
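The retry logic mentioned above can be sketched as a small wrapper with exponential backoff. This is an illustrative sketch, not the project's exact implementation:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying on exception with exponential backoff.

    Sleeps base_delay, 2*base_delay, 4*base_delay, ... between attempts
    and re-raises the last exception if all attempts fail.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```

Wrapping each API request, e.g. `with_retries(lambda: requests.get(url, timeout=10))`, keeps transient network or rate-limit errors from crashing the pipeline.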
Accomplishments
- Successful integration of DeepFace, Spotify, and Gemini AI.
- Developed personalized song recommendations based on real-time emotion analysis and listening history.
- Built a robust system with error-handling measures.
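Tying mood and listening history together for the recommendation step might look like this sketch. The prompt wording and function names are ours, and the model name is an assumption; only the `google.generativeai` calls follow that SDK's documented API:

```python
def build_prompt(mood: str, recent_tracks: list) -> str:
    """Combine the detected mood and recent tracks into one prompt."""
    track_list = "\n".join(f"- {t}" for t in recent_tracks)
    return (
        f"The listener's current mood is '{mood}'. "
        f"They recently played:\n{track_list}\n"
        "Suggest 5 songs that fit this mood and taste, one per line."
    )

def suggest_songs(mood: str, recent_tracks: list, api_key: str) -> str:
    import google.generativeai as genai  # assumed installed

    genai.configure(api_key=api_key)
    model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption
    return model.generate_content(build_prompt(mood, recent_tracks)).text
```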
What We Learned
- Gained experience in API integration, emotion analysis, and error handling.
What’s Next
- Enhance the UI/UX.
- Improve recommendation algorithms.
- Broaden integrations with additional music services.
Built With
Python, DeepFace, Gemini AI, Spotify API, TensorFlow, Requests