Inspiration
For the first time in a while, we encountered an Instagram reel without background music. Our lives had never felt so barren.
From then on, we swore to have at least two of our senses stimulated at once. Behold, our app: a background music generator based on emotions in text.
What it does
Takes a PDF from the user, assigns a mood to each segment of the text, and finds emotionally suitable background music to play while the user reads.
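The write-up doesn't show the actual model or song-matching logic, so here is a minimal sketch of the segment-then-classify pipeline, with a toy keyword-based mood classifier and a placeholder song catalogue standing in for the real trained model:

```python
# Hypothetical sketch: MOOD_KEYWORDS, SONGS_BY_MOOD, and the chunking
# size are all illustrative stand-ins, not the app's real components.

MOOD_KEYWORDS = {
    "tense": {"storm", "danger", "fear"},
    "joyful": {"laugh", "sun", "party"},
    "calm": {"quiet", "river", "sleep"},
}

SONGS_BY_MOOD = {  # placeholder catalogue
    "tense": ["Dark Pulse"],
    "joyful": ["Bright Day"],
    "calm": ["Still Water"],
}

def split_into_segments(text: str, size: int = 50) -> list[str]:
    """Split extracted PDF text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def classify_mood(segment: str) -> str:
    """Pick the mood whose keywords appear most often in the segment."""
    tokens = set(segment.lower().split())
    scores = {m: len(tokens & kw) for m, kw in MOOD_KEYWORDS.items()}
    return max(scores, key=scores.get)

def soundtrack(text: str) -> list[tuple[str, str]]:
    """Return one (mood, song) pair per segment of the text."""
    return [
        (mood, SONGS_BY_MOOD[mood][0])
        for mood in (classify_mood(s) for s in split_into_segments(text))
    ]
```

In the real app the keyword lookup would be replaced by the trained mood model, and the catalogue by the semi-supervised song categorisation described below.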
How we built it
We divided the work into frontend, backend, and model training. Each of us focused on our own part while helping out with the others' portions whenever necessary.
Challenges we ran into
1.) hitting rate limits during PDF submission
2.) a glitchy frontend
3.) balancing precision against the complexity of adding more moods
4.) formatting the results in a way users would find acceptable
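The rate-limit problem is commonly handled with retries and exponential backoff. A generic sketch (the error type and delays are assumptions; the real client's 429 handling would differ):

```python
import random
import time

def call_with_backoff(fn, max_retries: int = 5, base_delay: float = 0.5):
    """Retry fn with exponential backoff plus jitter.

    Assumes fn raises RuntimeError on a rate-limited (429-style)
    response; adapt the except clause to the actual API client.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

Doubling the delay on each attempt spreads retries out so a burst of PDF submissions doesn't hammer the limited endpoint again immediately.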
Accomplishments that we're proud of
1.) successfully applying semi-supervised learning to categorise songs
2.) building a decent, functional frontend
3.) auramogging the dining hall with typeracer
4.) getting some sleep at the end and having fun
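The semi-supervised categorisation isn't detailed in the write-up; one common approach it could follow is pseudo-labelling, sketched here with a nearest-centroid classifier over hypothetical song feature vectors (the features, threshold, and round count are all illustrative assumptions):

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def self_train(labelled, unlabelled, threshold=1.0, rounds=3):
    """Pseudo-labelling loop (illustrative, not the app's actual model).

    labelled maps mood -> list of song feature vectors. Each round,
    unlabelled songs close enough to a mood centroid are absorbed into
    that mood's pool, and the centroids are recomputed.
    """
    pools = {m: list(vs) for m, vs in labelled.items()}
    remaining = list(unlabelled)
    for _ in range(rounds):
        cents = {m: centroid(vs) for m, vs in pools.items()}
        still = []
        for v in remaining:
            mood, dist = min(
                ((m, math.dist(v, c)) for m, c in cents.items()),
                key=lambda t: t[1],
            )
            if dist <= threshold:
                pools[mood].append(v)  # confident: pseudo-label it
            else:
                still.append(v)  # too ambiguous this round
        remaining = still
    return pools, remaining
```

The threshold keeps ambiguous songs out of the training pools, which is what makes the scheme semi-supervised rather than blind clustering.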
What we learned
1.) practical application of frontend knowledge
2.) training AI models
3.) backend deployment
What's next for TunedIN
1.) allow users to upload their own songs for categorisation
2.) allow users to rearrange the order of songs
3.) allow users to download their "playlists"
Built With
- fastapi
- python
- react
- tailwind
- typescript
- vite