Inspiration:

We decided to build this project because we believe everyone loves music. Music drives our emotions, and it is often more convenient when an app can generate a track list that matches your current mood, without having to open Spotify and pick songs yourself. We also wanted to use AI/ML to create an innovative, convenient solution for people looking to personalize their music recommendations.

What it does

Moodify is a web application built with Streamlit. The app captures an image from the user and analyzes their emotion using the pre-trained DeepFace model, classifying it into one of seven categories: happy, sad, neutral, angry, disgust, surprise, and fear. The program then queries the Spotify API for the song recommendations that best match that emotion. The fetched recommendations are embedded directly into the web application, so users can play the songs in place, making it easier to explore new music and add tracks to their playlists.
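The emotion-to-music step described above can be sketched as a simple lookup from the detected emotion to Spotify recommendation parameters. The specific valence/energy values below are illustrative assumptions, not Moodify's actual tuning:

```python
# Illustrative mapping from the seven emotion labels to Spotify
# audio-feature targets. The numeric values are assumptions for
# demonstration, not the parameters Moodify actually uses.
EMOTION_PARAMS = {
    "happy":    {"target_valence": 0.9, "target_energy": 0.8},
    "sad":      {"target_valence": 0.2, "target_energy": 0.3},
    "neutral":  {"target_valence": 0.5, "target_energy": 0.5},
    "angry":    {"target_valence": 0.3, "target_energy": 0.9},
    "disgust":  {"target_valence": 0.3, "target_energy": 0.6},
    "surprise": {"target_valence": 0.7, "target_energy": 0.7},
    "fear":     {"target_valence": 0.2, "target_energy": 0.6},
}

def params_for_emotion(emotion: str) -> dict:
    """Return recommendation parameters for a detected emotion,
    falling back to 'neutral' for anything unrecognized."""
    return EMOTION_PARAMS.get(emotion, EMOTION_PARAMS["neutral"])
```

These parameters would then be passed along with the Spotify query so that, for example, a "happy" detection biases results toward upbeat, high-energy tracks.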

How we built it

In developing Moodify, we utilized Python and several libraries to create a web application that connects emotion detection with music recommendations. First, we set up the Streamlit framework for the user interface, allowing users to capture images of their expressions through camera input. We used the DeepFace library to analyze these images and extract the dominant emotion from the user's face. To power the recommendations, we connected to the Spotify API using client credentials, which allowed us to fetch songs based on the detected emotion. The recommendations were dynamically generated by adjusting parameters such as danceability, energy, and tempo according to the identified mood. From these, we output 8 recommended songs for the user to play.
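The emotion-detection step might look like the sketch below. The commented-out `DeepFace.analyze` call requires the `deepface` package and a captured frame; recent versions return a list of per-face result dicts, each with an `emotion` score dict and a `dominant_emotion` label. The helper that picks the strongest label is plain Python:

```python
# Sketch of the emotion-detection step, assuming DeepFace's result shape:
# a list of dicts, each with an 'emotion' score dict per detected face.
# from deepface import DeepFace
# results = DeepFace.analyze(img_path="frame.jpg", actions=["emotion"])

def dominant_emotion(results: list) -> str:
    """Pick the highest-scoring emotion from a DeepFace-style result
    list, defaulting to 'neutral' when no face was analyzed."""
    if not results:
        return "neutral"
    scores = results[0]["emotion"]
    return max(scores, key=scores.get)

# Example with a mocked result in DeepFace's shape:
mock = [{"emotion": {"happy": 92.1, "sad": 1.2, "neutral": 6.7},
         "dominant_emotion": "happy"}]
```

In the real app, the image comes from Streamlit's camera input rather than a file on disk.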

Challenges we ran into

  • We found it difficult to integrate the front end and back end of the code
  • Implementing the Spotify API in interactive widgets so the user can click and play a song directly from our website rather than having to search for it manually
  • Using the client ID and client secret to retrieve a token from the Spotify API so we could generate songs based on the user's emotion
  • Tuning the background parameters so the front end would look smoother, more symmetrical, and more aesthetically pleasing
  • Randomizing the songs so that a user who smiles and registers a "happy" emotion doesn't get the same songs over and over again
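The token retrieval mentioned above follows Spotify's standard client-credentials OAuth flow: the client ID and secret are base64-encoded into a Basic auth header and POSTed to the token endpoint. A minimal sketch, with the network call commented out and placeholder credentials:

```python
import base64

# Spotify's token endpoint for the client-credentials flow.
TOKEN_URL = "https://accounts.spotify.com/api/token"

def basic_auth_header(client_id: str, client_secret: str) -> dict:
    """Build the Authorization header the client-credentials flow
    expects: 'Basic ' + base64(client_id:client_secret)."""
    creds = f"{client_id}:{client_secret}".encode()
    return {"Authorization": "Basic " + base64.b64encode(creds).decode()}

# With the header built, the token request itself would be:
# import requests
# resp = requests.post(TOKEN_URL,
#                      headers=basic_auth_header(client_id, client_secret),
#                      data={"grant_type": "client_credentials"})
# token = resp.json()["access_token"]
```

The returned bearer token is then attached to subsequent recommendation requests.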

Accomplishments that we're proud of

  • Working collaboratively to develop an interactive website
  • Being able to output a truly random set of 8 songs each time the user inputs a facial emotion through the camera.
  • Developing interactive widgets the user can click after the analysis to play each song directly through our website rather than searching for it separately.
  • Achieving high accuracy in detecting the user's emotion with the DeepFace model and OpenCV.
  • Using the client ID and client secret to access the Spotify API and output songs recommended based on the user's input.
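One way to achieve the "truly random set of 8 songs" above is to fetch a larger candidate pool for the detected emotion and sample from it without replacement. A sketch; the pool here is a stand-in for real Spotify results:

```python
import random

def pick_tracks(candidates: list, k: int = 8) -> list:
    """Return k tracks sampled without replacement, so repeated
    captures of the same emotion yield varied recommendations."""
    if len(candidates) <= k:
        return list(candidates)
    return random.sample(candidates, k)

pool = [f"track_{i}" for i in range(50)]  # stand-in for Spotify results
picks = pick_tracks(pool)
```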

What we learned

  • Through this project, we learned how to implement an AI-Trained Library in our code to predict data based on user input
  • We learned how to code in HTML and CSS and build interactive web applications that satisfy our backend requirements
  • How to implement the Spotify API and use a token, client ID, and client secret to request data based on user input
  • We learned how to collaborate in a team environment, building different components of the program and bringing them together at the end into a complete full-stack project

What's next for Moodify

  • Moodify can be further extended with a text-analysis feature powered by sentiment analysis. This feature is already under implementation and needs some additional changes before it can be fully integrated into the project.

Built With

  • python
  • streamlit
  • deepface
  • opencv
  • spotify-api
  • html
  • css
