Inspiration
Many of us have had that awkward experience where the person controlling the aux cord is terrible at picking music. In that situation, we often don't want to speak up and let our feelings be known. That's how this idea was born: we wanted to build an application that tracks whether a song "bumps" and lets the aux cord holder know they are terrible at picking music without anyone having to say it.
What it does
The user, while playing music, sets up their phone so it can track everyone's faces. As the music plays, our custom model determines whether the song "bumps". When the song ends, the user stops the video, which is fed to the model, and they receive a report revealing whether the song "bumps".
How we built it
The application was built with Dart and Flutter; the API that tracks the user's face was built with Python and Flask. Our custom model was also built with Python, in Jupyter notebooks.
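A minimal sketch of what the Flask side of this architecture could look like. The route name (`/predict`), the JSON shape, the `score_frames` helper, and the 0.5 threshold are all assumptions for illustration, not the project's actual API; the real custom model would replace the placeholder averaging logic.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def score_frames(scores):
    # Placeholder for the custom model: average per-frame "enjoyment"
    # scores (0-1) collected from face tracking. (Hypothetical helper.)
    if not scores:
        return 0.0
    return sum(scores) / len(scores)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"scores": [0.8, 0.6, ...]}, one entry per frame.
    data = request.get_json(force=True)
    mean = score_frames(data.get("scores", []))
    # Call the song a "bump" if the average reaction clears a threshold.
    return jsonify({"bumps": mean >= 0.5, "score": mean})

if __name__ == "__main__":
    # Local smoke test using Flask's built-in test client.
    with app.test_client() as client:
        resp = client.post("/predict", json={"scores": [0.9, 0.7, 0.4]})
        print(resp.get_json())
```

The Flutter app would POST the per-frame scores (or raw frames, in a heavier variant) to this endpoint and render the returned report.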
Challenges I ran into
We originally tried to build our model on top of existing APIs, but couldn't find any that were suitable, so we had to build a custom model. Although we are happy with how it turned out, we were definitely racking our brains trying to debug it.
Accomplishments that I'm proud of
On the frontend, it was our first time working with Flutter, and we managed to build a good-looking application within the first 10 hours.
What's next for DoesItBump.ai
We will probably tweak things and look into adding new features.