MoodMelody - Intelligent Music Automation
Inspiration
As students and developers, we've all experienced the struggle of maintaining focus while managing our music during work sessions. The constant need to switch playlists or find the right music that matches our current task and mood often breaks our concentration. This inspired us to create MoodMelody - a smart music companion that automatically adapts to both your activity and emotional state.
What it does
MoodMelody is an intelligent music automation system that:
- Captures a screenshot of your work environment every 15-20 seconds
- Analyzes your facial expressions through your webcam in real-time
- Uses AI to understand both your current activity and emotional state
- Automatically curates and plays Spotify music to match that context
When you're studying math with a positive mood, it plays uplifting study music. Switch to gaming with excitement? It seamlessly transitions to energetic gaming tracks. During reading sessions while feeling calm, it selects peaceful ambient music - all automatically without any manual intervention.
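The mapping described above can be sketched as a lookup from a detected (activity, mood) pair to a Spotify search query. This is a minimal illustration; the rule names and fallback query are hypothetical, not the exact production logic:

```python
# Hypothetical sketch: map a detected (activity, mood) pair to a music query.
PLAYLIST_RULES = {
    ("studying", "positive"): "uplifting study beats",
    ("gaming", "excited"): "energetic gaming mix",
    ("reading", "calm"): "peaceful ambient",
}

def select_playlist(activity: str, mood: str) -> str:
    """Return a music search query for the detected context,
    falling back to a neutral default for unseen combinations."""
    return PLAYLIST_RULES.get((activity, mood), "lo-fi focus")

print(select_playlist("gaming", "excited"))  # energetic gaming mix
```

The default branch matters: when the classifiers return a combination the rules don't cover, the system should keep playing something unobtrusive rather than stop.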
How we built it
- Frontend: React.js with TailwindCSS and shadcn/ui components
- Backend: Flask server with Python for core logic
- AI/ML: CLIP model for screenshot analysis, OpenCV for facial expressions
- Database: SQLite/MySQL for user management
- APIs: Spotify integration for music playback and custom endpoints for analysis
- Processing Loop: Screenshot capture and emotion detection on a 15-20 second cycle
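CLIP supports zero-shot activity detection: embed the screenshot and a set of candidate activity labels, then pick the label whose embedding is most similar to the image's. As a sketch of that ranking step, here is the cosine-similarity comparison in plain Python, with toy vectors standing in for real CLIP embeddings (the labels and numbers are illustrative only):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def classify(image_emb, label_embs):
    """Return the label whose embedding best matches the image embedding."""
    return max(label_embs, key=lambda label: cosine(image_emb, label_embs[label]))

# Toy embeddings; in the real system these come from CLIP's image/text encoders.
labels = {
    "studying math": [0.9, 0.1, 0.0],
    "gaming":        [0.1, 0.9, 0.2],
    "reading":       [0.0, 0.2, 0.9],
}
print(classify([0.8, 0.2, 0.1], labels))  # studying math
```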
Challenges we ran into
- Optimizing real-time processing of screenshots and facial analysis while maintaining performance
- Fine-tuning the CLIP model for accurate activity detection with low latency
- Developing robust algorithms to translate visual and emotional context into appropriate music choices
- Implementing secure handling of user data for screenshots and webcam access
- Ensuring seamless integration between multiple AI models and the Spotify API
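One concrete tactic for the "robust algorithms" challenge above is to smooth noisy per-frame emotion predictions before they can trigger a playlist change. A simple approach, sketched here with an assumed window size of five detections, is a majority vote over recent frames:

```python
from collections import Counter, deque

class EmotionSmoother:
    """Majority-vote over the last N emotion detections so a single
    noisy frame doesn't cause a jarring playlist switch.
    The window size of 5 is an assumption, not a tuned value."""

    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)

    def update(self, emotion: str) -> str:
        """Record a new detection and return the current stable emotion."""
        self.history.append(emotion)
        return Counter(self.history).most_common(1)[0][0]

smoother = EmotionSmoother(window=3)
for detected in ["calm", "calm", "excited", "calm"]:
    stable = smoother.update(detected)
print(stable)  # calm
```

A lone "excited" frame in a run of "calm" detections is absorbed by the vote; only a sustained change of mood flips the output and, downstream, the music.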
Accomplishments that we're proud of
- Created a fully automated music selection system that understands both activity and mood
- Successfully integrated multiple complex technologies (CLIP, OpenCV, Spotify API)
- Built a responsive and intuitive user interface
- Implemented real-time screenshot analysis and emotion detection
- Developed a secure system handling sensitive user data
What we learned
- Advanced integration of multiple AI models in a real-world application
- Real-time image processing and analysis techniques
- Complex API integrations with Spotify's platform
- Building responsive full-stack applications with React and Flask
- Importance of user privacy and secure data handling
What's next for MoodMelody
- Personalized music learning based on user preferences
- Support for multiple music streaming platforms
- More sophisticated activity detection algorithms
- Collaborative features for shared workspaces
- Expanded emotion detection capabilities
- Integration with productivity tracking tools
Built With: Python (Flask, OpenCV, PyTorch), React.js, TailwindCSS, CLIP Model, Spotify API, SQLite/MySQL, Custom LLM Integration
Created with ❤️ at MadHacks 2024