Inspiration

Spotify and other music streaming services have surged in popularity, and nowadays people listen to music while doing almost any task, whether or not it is safe to do so. On the road, drivers risk being distracted and taking their eyes off the road by a song that comes up in their playlist. Under the new Hands-Free Law, even holding your phone is illegal, which calls for a safer way to navigate music while driving. With our application, users can control playback with simple facial gestures: play, stop, skip, and browse their playlists while keeping their attention on the road, distraction free.

What it does

The app uses facial recognition to detect gestures and emotions and maps them to music controls. Detected emotions include happy, sad, surprised, angry, and disgusted; recognized gestures include eyes closed, eyebrows raised, lip pucker, and mouth open. Each gesture is translated into a playback action, so you can control your music entirely hands-free.
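The gesture-to-control mapping can be sketched roughly as follows. This is an illustrative sketch only (shown in Python; the actual app is written in C# in Unity), and the gesture names and player interface here are hypothetical, not the project's real API:

```python
# Hypothetical sketch of mapping recognized facial gestures to playback
# commands. The real app is C#/Unity; all names here are illustrative.

GESTURE_ACTIONS = {
    "eyes_closed": "pause",
    "eyebrows_raised": "play",
    "lip_pucker": "previous_track",
    "mouth_open": "next_track",
}

def handle_gesture(gesture, player):
    """Translate a recognized facial gesture into a playback command."""
    action = GESTURE_ACTIONS.get(gesture)
    if action is None:
        return None  # unrecognized gesture: do nothing
    getattr(player, action)()  # e.g. player.next_track()
    return action

class DemoPlayer:
    """Stand-in for a real music player backend (e.g. a streaming API)."""
    def __init__(self):
        self.log = []
    def play(self): self.log.append("play")
    def pause(self): self.log.append("pause")
    def next_track(self): self.log.append("next_track")
    def previous_track(self): self.log.append("previous_track")

player = DemoPlayer()
handle_gesture("mouth_open", player)   # skip to the next song
handle_gesture("eyes_closed", player)  # pause playback
print(player.log)  # ['next_track', 'pause']
```

Keeping the mapping in a single lookup table like this makes it easy to re-bind gestures to different actions without touching the recognition code.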

How we built it

We built the app with Unity3D and C#, targeting iOS and other platforms.

Challenges we ran into

We wanted to apply new languages and concepts we had been learning, such as AR. However, learning those languages while also building the project proved very time-consuming: we ran into API connection problems, debugging problems, and a lack of documentation, and ultimately had to switch platforms and languages altogether because many features were not compatible with the tools we first tried. Unity offers many ways to create projects, but AR/VR work like ours is still relatively new territory.

Accomplishments that we're proud of

The image recognition is the feature we are proudest of. We wanted music to respond to emotions, and being able to measure an emotion and translate it into an event in code was a big step in the direction we want to go.

What we learned

The process of building out an idea matters. Design comes before code and requires research to determine which route is best; what we learned is that more research should be invested in a project before the coding begins.

What's next for Recognize

Next, we plan to connect the app to a suitable music API and embed it in a mobile application built with something like React Native so we can work on the UI. We also plan to add data-collection features: artists could get feedback on the emotions users feel while interacting with their music, and streaming platforms could use the same data to assemble playlists catered to different moods. Finally, we would like to further enhance the visual presentation of emotion, for example with mood graphs.

Built With

Unity3D, C#