Inspiration

In 2021, researchers found that teenagers aged 13 to 18 spend an average of nearly 9 hours a day on their screens, up from around 6 hours in 2015. Roughly 3 of those daily hours go to activities like gaming and surfing the web. The only conclusion that can be drawn from these stats is that teenagers are staying in too much!

What it does

Moodify is an interactive application that picks the song from your list of favorites that best matches the vibe of a submitted image! You can link your Spotify account and upload an image of your choice to our app. Using a carefully prompted GeminiAPI, Moodify detects the specific mood that helps select the song that fits the picture perfectly. We want to motivate our users to get out there and enhance their personal outside experience by gamifying the process of snapping beautiful world-views.

How we built it

We built our application with the following:

  • SpotifyAPI
  • GeminiAPI
  • Reflex, styled with TailwindCSS

We used Spotify's Web API together with SpotifyAuth, which let us get users' permission to use their data when deciding which song best fits their chosen image. To determine the best mood, we used GeminiAPI's computer vision: we constructed a prompt so that it picks one mood from a fixed list for a given image. Once the mood is obtained, we send it to SpotifyAPI (after getting the access token) to find the user's "Made For You" mix that aligns with the chosen vibe. Then, we select the top song to match with our user's image!
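The mood-selection step above can be sketched roughly as follows. This is a minimal illustration, not our actual code: the function names, the example mood list, and the fallback behavior are all hypothetical, and the prompt would be sent to Gemini along with the uploaded image.

```python
# Hypothetical sketch of the mood-selection step. MOODS, the function
# names, and the "calm" fallback are illustrative assumptions, not the
# real Moodify source.

MOODS = ["happy", "sad", "energetic", "calm", "nostalgic"]  # example list

def build_mood_prompt(moods):
    """Constrain the model to answer with one mood from a fixed list."""
    return (
        "Look at the attached image and answer with exactly one word: "
        "the mood from this list that best matches it: "
        + ", ".join(moods) + "."
    )

def parse_mood(reply, moods):
    """Match the model's reply against the known moods, with a safe
    default if the reply contains none of them."""
    text = reply.strip().lower()
    for mood in moods:
        if mood in text:
            return mood
    return "calm"  # fallback so we always have something to search for
```

In practice the prompt and image would go to Gemini together (e.g. via the google-generativeai client's multimodal `generate_content` call), and the parsed mood would then drive the Spotify lookup.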

We specifically built our app to work best on mobile because, realistically, for our app to reach its goals, our users need to get out and use their phones to take pictures. As a result, a lot of our design choices were made with this in mind. We also considered regional use, since we wanted the app to be usable universally. This is covered by the fact that the app is personally tailored to each user's music taste, so they will always get a song they understand and can vibe to.

Challenges we ran into

Some of the challenges we ran into were figuring out how to implement Reflex and SpotifyAPI. Reflex lacked documentation (understandable, as it's new), so it was difficult to figure out what was wrong with some of our code throughout our journey. It was also unconventional in that there is no separation between the backend and frontend. SpotifyAPI also proved to be a challenge, as the documentation was dense and thorough. As a result, there were a lot of roadblocks regarding the authorization code, the access token, and parsing through the data we were given.
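For anyone hitting the same authorization-code roadblocks: the flow boils down to two steps against Spotify's documented accounts endpoints. The sketch below is a simplified illustration under our own naming; the client ID, redirect URI, and scope values are placeholders you'd replace with your app's own.

```python
# Minimal sketch of Spotify's authorization-code flow. The endpoint
# URLs are Spotify's documented ones; everything else (function names,
# placeholder credentials) is illustrative.
from urllib.parse import urlencode

AUTH_URL = "https://accounts.spotify.com/authorize"
TOKEN_URL = "https://accounts.spotify.com/api/token"

def build_authorize_url(client_id, redirect_uri, scope):
    """Step 1: send the user here; Spotify redirects back to
    redirect_uri with ?code=... once they grant permission."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": scope,
    }
    return AUTH_URL + "?" + urlencode(params)

def exchange_code(code, client_id, client_secret, redirect_uri):
    """Step 2: trade the one-time authorization code for an access
    token, which then goes in the Authorization header of API calls."""
    import requests  # third-party HTTP client
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]
```

The redirect URI must match what is registered in the Spotify developer dashboard exactly, which was the source of more than one of our roadblocks.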

Accomplishments that we're proud of

We were really proud of being able to set up a full-stack project, and Daphne in particular was enthusiastic about experimenting with a new design style. Some of us felt accomplished for finally doing something with computer vision through Gemini and building a prompt. It was also very rewarding to dig into the depths of SpotifyAPI's documentation rather than riding on the coattails of tutorials.

What we learned

We learned a lot about how to use GeminiAPI, SpotifyAPI, and Reflex.

What's next for Moodify

We would love to see Moodify implemented in larger social media apps like Instagram, where, at the click of a button, it could generate a fitting song and save its users time!

Built With

  • figma
  • geminiapi
  • python
  • reflex
  • spotifyapi