Inspiration
Introducing DriveMate: a mobile app redefining accessible and safe driving. With over 1.6 million crashes annually caused by cell phone distraction, we set out to leverage intuitive gestures and voice controls for hands-free messaging and emergency detection.
DriveMate keeps drivers connected to loved ones and to emergency services by making calls, sending texts, and signaling emergencies without ever taking your hands off the wheel. Our AI-powered emotion-detection model also monitors for signs of drowsiness, alerting the user and playing music to help them stay aware.
What it does
1) Drowsiness detection through a custom-trained emotion-detection model.
2) Gesture recognition, with different hand signs mapped to different actions (calling, emergency, messaging).
3) Voice-activated calling and messaging with live speech-to-text transcription via Whisper (see the sketch below).
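To make the voice flow concrete, here is a minimal sketch of the transcription step using the open-source openai-whisper package; the model size and audio file name are assumptions, not our exact code.

```python
# Minimal sketch of the speech-to-text step, assuming the open-source
# openai-whisper package; model size and file name are illustrative.
import whisper

model = whisper.load_model("base")  # small model keeps transcription fast

def transcribe_command(clip_path: str) -> str:
    """Return the text of a short voice command, e.g. "call Mom"."""
    result = model.transcribe(clip_path)
    return result["text"].strip()

print(transcribe_command("command.wav"))  # hypothetical recorded clip
```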
How we built it
We trained our gesture-recognition model on a dataset of 400+ samples and fine-tuned the Hume AI model on 8 videos to recognize universal signs of drowsiness. We built our application in React Native with a Flask backend.
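A rough sketch of how a Flask backend can serve a trained gesture classifier to the mobile client is below; the route name, request fields, model file, and label set are illustrative assumptions rather than our exact implementation.

```python
# Hypothetical Flask endpoint serving the gesture classifier to the
# React Native client; names and model file are illustrative only.
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)
model = tf.keras.models.load_model("gesture_model.h5")  # assumed trained model
LABELS = ["call", "message", "emergency"]                # assumed gesture classes

@app.route("/predict-gesture", methods=["POST"])
def predict_gesture():
    # The client sends a flat list of hand-landmark coordinates per frame.
    landmarks = np.array(request.json["landmarks"], dtype=np.float32)
    probs = model.predict(landmarks[None, :])[0]
    return jsonify({"gesture": LABELS[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(port=5000)
```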
On the design side, we built everything in Figma, with an emphasis on a dynamic spatial UI that detects the driver's hand gestures and adapts to their position on the screen.
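Making the UI follow the hand depends on knowing where the hand sits in each camera frame. A minimal sketch using MediaPipe Hands follows; the helper name and confidence threshold are assumptions.

```python
# Sketch of hand-position tracking with MediaPipe Hands; helper name and
# threshold are illustrative, not the exact DriveMate code.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.6)

def hand_center(frame_bgr):
    """Return the normalized (x, y) center of the detected hand, or None."""
    result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    landmarks = result.multi_hand_landmarks[0].landmark
    xs, ys = [p.x for p in landmarks], [p.y for p in landmarks]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```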
Accessibility
We focused heavily on accessible gestures that can be performed by drivers across a wide range of motor abilities, prioritizing gross motor skills over fine motor skills with open-hand gestures and easy-to-perform movements usable by anyone.
In addition, we ensured that our colors meet WCAG color-contrast standards so that the app stays readable across a wide range of vision abilities.
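As an example of that check, the WCAG 2.1 contrast ratio can be computed directly from relative luminance; the hex values below are placeholder colors rather than our exact palette.

```python
# WCAG 2.1 contrast check (AA requires >= 4.5:1 for normal text).
# The hex values are placeholder colors, not our exact palette.
def relative_luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    lin = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4 for c in (r, g, b)]
    return 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio("#FFFFFF", "#1A1A2E") >= 4.5)  # True: white on dark navy passes AA
```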
Challenges we ran into
Our entire team was new to React Native development. While we did have experience with web-based React development, we decided to explore new waters to deploy an app accessible to all drivers. It was challenging to learn new libraries and apply them in an unfamiliar development environment.
Accomplishments that we're proud of
Our application involved integrating many complex features, such as training our own models to detect both complex human emotions and hand gestures. We are proud of developing a fully fledged, end-to-end mobile app that integrates live gesture and emotion detection with actionable recommendations in a summary after each session.
What's next for DriveMate
1) More accessible input options for drivers with hearing or speech impairments. This will involve training on more diverse datasets and adding more gesture-driven actions.
2) Optimized, voice-activated route detection. For example, when a user says "gas", the app locates the nearest gas station without the driver ever needing to touch their phone (rough sketch below).
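A very rough sketch of how the planned keyword-to-route mapping could work; the keyword table and function are hypothetical and only illustrate the idea.

```python
# Hypothetical sketch of the planned voice-activated route detection:
# map a spoken keyword from the Whisper transcript to a nearby-place query.
KEYWORD_TO_PLACE = {"gas": "gas station", "food": "restaurant", "hospital": "hospital"}

def route_query(transcript: str):
    """Return a maps search query for the first recognized keyword, or None."""
    lowered = transcript.lower()
    for keyword, place_type in KEYWORD_TO_PLACE.items():
        if keyword in lowered:
            return f"nearest {place_type}"
    return None

print(route_query("I'm running low on gas"))  # -> "nearest gas station"
```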
Built With
- expo.io
- figma
- flask
- hume-ai
- media-pipe
- ngrok
- open-ai
- python
- react-native
- tailwind
- tensorflow
- whisper




