Inspiration
We envisioned a world where technology tunes into your feelings—literally. EMOIFY was born from the idea of transforming passive music apps into emotionally intelligent companions that feel with you, bridging the gap between emotion and entertainment through AI.
What it does
EMOIFY detects your real-time facial expressions using a CNN and OpenCV, identifies your emotion, and instantly plays mood-matching music. It also empowers trainers to retrain the model with new data, making emotion detection smarter over time.
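A minimal sketch of what that detection loop could look like: OpenCV grabs webcam frames, a Haar cascade finds the face, and a small Keras CNN classifies the crop into an emotion that maps to a mood playlist. The model file name, label order, and playlist entries here are placeholders we use for illustration, not the actual EMOIFY artifacts.

```python
# Illustrative sketch, assuming a Keras CNN saved as "emotion_model.h5"
# trained on 48x48 grayscale faces (file name and labels are assumptions).
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]   # assumed label order
MOOD_PLAYLISTS = {"happy": "upbeat.mp3", "sad": "calm.mp3"}    # placeholder tracks

model = load_model("emotion_model.h5")
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Crop, resize, and normalize the face before feeding the CNN
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        emotion = EMOTIONS[int(np.argmax(probs))]
        track = MOOD_PLAYLISTS.get(emotion)   # hand off to the music player
        cv2.putText(frame, emotion, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("EMOIFY", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```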
How we built it
We used Flask for the backend, MongoDB for user/trainer data, and OpenCV with a CNN for facial emotion recognition. A custom trainer dashboard enables continuous emotion data collection and model retraining—making EMOIFY intelligent and adaptive.
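As a hedged sketch of how the trainer-facing backend could fit together: trainers POST labelled samples into a MongoDB collection, and a separate endpoint kicks off retraining over the accumulated data. The route names, database name, and document schema below are illustrative assumptions, not the exact EMOIFY code.

```python
# Sketch of the Flask + MongoDB trainer API (names and schema are assumptions).
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient("mongodb://localhost:27017")["emoify"]   # assumed local instance

@app.route("/trainer/samples", methods=["POST"])
def add_sample():
    # Trainer uploads a labelled face image (e.g. base64) for later retraining.
    doc = {"label": request.json["label"], "image": request.json["image"]}
    db.emotion_samples.insert_one(doc)
    return jsonify({"status": "stored"}), 201

@app.route("/trainer/retrain", methods=["POST"])
def retrain():
    # Count accumulated samples and trigger a retraining job (stubbed here).
    count = db.emotion_samples.count_documents({})
    return jsonify({"status": "retraining started", "samples": count})

if __name__ == "__main__":
    app.run(debug=True)
```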
Challenges we ran into
Training a high-accuracy CNN model with limited emotion data was tough. Ensuring real-time emotion detection via webcam without lags, and securely managing separate user/trainer roles in one system, were complex but rewarding hurdles.
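For the lag problem specifically, one simple pattern is to run the heavy CNN only on every Nth frame and reuse the last prediction in between, so the webcam preview stays smooth. The sketch below illustrates that idea with a placeholder detect_emotion() standing in for the real face-detection + CNN pipeline; the stride value is an assumption to tune per machine.

```python
# Illustrative frame-skipping loop; detect_emotion() is a stand-in, not EMOIFY's code.
import cv2

PREDICT_EVERY = 5          # assumed stride; tune for your hardware
last_emotion = "neutral"   # fallback until the first prediction arrives

def detect_emotion(frame):
    # Placeholder for the real Haar cascade + CNN inference step.
    return "happy"

cap = cv2.VideoCapture(0)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % PREDICT_EVERY == 0:
        last_emotion = detect_emotion(frame)   # run the CNN only occasionally
    cv2.putText(frame, last_emotion, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("EMOIFY", frame)
    frame_idx += 1
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```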
Accomplishments that we're proud of
We built a real-time emotion-based recommender that feels personal. The trainer dashboard for adaptive model training was a big win. Creating a dynamic, modular, and scalable system from scratch in limited time was a major achievement.
What we learned
We deepened our skills in CNNs, OpenCV, Flask, and MongoDB. We discovered the power of adaptive learning and user feedback loops. Most importantly, we learned how emotion and AI can together create truly human-centric tech experiences.
What's next for EMOIFY
We plan to expand the emotion database for cultural diversity, integrate with Spotify/YouTube APIs, add voice-based emotion cues, and develop a mobile app version. EMOIFY will keep evolving—just like the emotions it reads.

