Inspiration
We noticed how many people struggle to communicate in public spaces, especially individuals who are deaf, non-verbal, or have speech impairments. Existing solutions are either too slow, too expensive, or not accessible to everyday users. We wanted to build something simple, fast, and universal: a tool that turns everyday hand gestures into clear, spoken communication.
What it does
Signify recognizes hand gestures in real time using computer vision and translates them into spoken words or text.
Users perform a gesture
Our model identifies it instantly
The app outputs audio + text so anyone around can understand
Signify aims to make communication accessible, inclusive, and barrier-free.
It even has a "67" detector!
How we built it
Used Python, OpenCV, and MediaPipe for real-time hand tracking
Trained a gesture-recognition model using a custom dataset, then optimized it with NumPy, scikit-learn, and joblib
Integrated text-to-speech to produce clear audio output
Used threading to keep gesture recognition fast and responsive
Built a clean UI in Python
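To make the MediaPipe step concrete, here is a minimal sketch of turning the 21 hand landmarks MediaPipe returns into a feature vector for the classifier. The wrist-relative normalization is an assumption on our part, not confirmed by the write-up; it is one common way to make features robust to where the hand sits in the frame (which also helps with the lighting/background variation noted below).

```python
import numpy as np

def landmarks_to_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (21, 2) array of (x, y) points; returns a flat (42,) vector."""
    rel = landmarks - landmarks[0]      # translate so the wrist is the origin
    scale = np.abs(rel).max() or 1.0    # guard against a degenerate all-zero hand
    return (rel / scale).flatten()

# Example: a fake "hand" of 21 evenly spaced points standing in for real landmarks.
fake_hand = np.linspace([0.1, 0.1], [0.6, 0.9], num=21)
features = landmarks_to_features(fake_hand)
print(features.shape)  # (42,)
```

In the live app, the same function would be fed `hand_landmarks.landmark` coordinates from MediaPipe's Hands solution, frame by frame.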
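The training-and-persistence step above can be sketched like this. The random data, the three example gesture labels, the RandomForest choice, and the model file name are all illustrative assumptions; in Signify the features would come from the custom landmark dataset and the model may differ.

```python
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 42))            # placeholder for real landmark features
y = rng.integers(0, 3, size=200)     # placeholder labels for 3 example gestures

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

# Persist with joblib so the live app can load it without retraining.
joblib.dump(clf, "gesture_model.joblib")
```

At runtime, `joblib.load("gesture_model.joblib")` restores the classifier, and each frame's feature vector goes through `clf.predict(...)`.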
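The threading design can be sketched as a producer/consumer queue: the recognition loop enqueues recognized phrases and a worker thread handles the slow speech call, so the video loop never blocks. The `speak()` body here is a stand-in that records output; in Signify it would call a real TTS engine (e.g. pyttsx3, which is our assumption, not stated in the write-up).

```python
import queue
import threading

speech_queue = queue.Queue()
spoken = []  # stand-in sink so the example is self-contained

def speak(text):
    # Placeholder for the real TTS call (e.g. pyttsx3's engine.say + runAndWait).
    spoken.append(text)

def speaker_worker():
    while True:
        text = speech_queue.get()
        if text is None:           # sentinel: shut the worker down cleanly
            break
        speak(text)
        speech_queue.task_done()

worker = threading.Thread(target=speaker_worker, daemon=True)
worker.start()

# The recognition loop only enqueues; it never waits on audio playback.
for gesture in ["hello", "thank you"]:
    speech_queue.put(gesture)

speech_queue.join()                # block until both phrases are "spoken"
speech_queue.put(None)
worker.join()
```

Keeping the TTS call off the main thread is what keeps gesture recognition responsive, since speech synthesis can take longer than a single camera frame.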
Challenges we ran into
Resolving MediaPipe installation issues and dependency conflicts
Making the gesture recognition accurate across different lighting and backgrounds
Keeping the app fast enough to run in real time
Designing gestures that are easy for users to learn but distinct enough for the model to classify
Managing multiple threads without lag or freezing
Accomplishments that we're proud of
Built a fully working prototype in one weekend
Achieved stable real-time gesture recognition
Created an accessible tool that can genuinely help people communicate
Developed a clean UI + intuitive user experience
Learned how to use AI/ML in a practical, meaningful way
What we learned
How to integrate computer vision, machine learning, and text-to-speech
How to optimize a model for speed without sacrificing accuracy
The importance of accessibility-focused design
How crucial teamwork, debugging, and version control are during a hackathon
How to build an MVP quickly while staying user-centered
What's next for Signify
Expand the gesture library to full sign-language alphabets
Add customizable gestures for users with unique accessibility needs
Build a mobile app version (iOS + Android)
Improve accuracy with neural networks or TensorFlow Lite
Add conversation history and downloadable transcripts
Eventually turn Signify into a fully accessible communication platform