Inspiration
"Giving a voice to the voiceless."
That quote is the premise of our project. Coming from immigrant families, we are no strangers to the difficulties that language barriers pose. After seeing how few options deaf people have for engaging in video communication over platforms like Zoom and FaceTime, we set out to build one ourselves. Through our project, we hope to provide accessibility that was not there before.
What It Does
Sign Chat is a machine learning tool that detects a user's gestures via camera, interprets words and phrases in American Sign Language (ASL), and converts them to speech. The model is trained to recognize ASL and can translate its output into 50+ languages. Sign Chat plugs into all video communication applications, providing speech output directly through the microphone.
Challenges we ran into
- A lack of a cohesive dataset: other than the WLASL dataset, there is not much video data available for ASL. We used WLASL for some testing but mostly recorded our own data to train the model
- Training the model and tuning its parameters to detect signs accurately
Accomplishments that We're Proud of
- Building a functional ML model that successfully interprets sign language
- Allowing users to translate into 50+ languages
- Building with extensibility in mind: we provide code for users to easily train their own models, along with a script to scrape training data
What We Learned
- Using MediaPipe to track user movement and store the tracked landmarks as data points (see the keypoint-extraction sketch below)
- Building an LSTM (long short-term memory) recurrent neural network to decipher signs (see the model sketch below)
- Using Google's Translate API to translate recognized signs into various languages (see the translation sketch below)
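As a rough illustration, landmark extraction with MediaPipe's Holistic solution might look something like the sketch below. The function names, the 30-frame window, and the choice to track only pose and hands (not the face) are illustrative assumptions, not our exact code:

```python
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic  # pose + hand tracking in one model


def extract_keypoints(results):
    """Flatten the tracked landmarks for one frame into a single feature vector."""
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    lh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, lh, rh])


def keypoints_from_webcam(num_frames=30):
    """Capture a fixed-length sequence of keypoint vectors from the webcam."""
    cap = cv2.VideoCapture(0)
    sequence = []
    with mp_holistic.Holistic(min_detection_confidence=0.5,
                              min_tracking_confidence=0.5) as holistic:
        while len(sequence) < num_frames:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV captures frames in BGR
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            sequence.append(extract_keypoints(results))
    cap.release()
    return np.array(sequence)  # shape: (num_frames, 258)
```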
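The LSTM classifier could then consume those keypoint sequences. The sketch below uses TensorFlow/Keras with placeholder layer sizes and a placeholder vocabulary; our actual architecture and framework choices may differ:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQUENCE_LENGTH = 30   # frames per sign clip (assumption)
NUM_FEATURES = 258     # keypoints per frame from the extractor above
ACTIONS = ["hello", "thanks", "help"]  # placeholder sign vocabulary

# Stacked LSTM layers read the sequence of keypoints; the final softmax
# outputs one probability per sign in the vocabulary.
model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(SEQUENCE_LENGTH, NUM_FEATURES)),
    LSTM(128, return_sequences=True),
    LSTM(64, return_sequences=False),
    Dense(64, activation="relu"),
    Dense(len(ACTIONS), activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])

# X: (num_clips, SEQUENCE_LENGTH, NUM_FEATURES), y: one-hot sign labels
# model.fit(X, y, epochs=200)
```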
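Once a sign is recognized as English text, translation is a single API call. This sketch assumes the official google-cloud-translate client (v2) with credentials already configured; the exact wrapper we use may differ:

```python
from google.cloud import translate_v2 as translate

client = translate.Client()  # requires GOOGLE_APPLICATION_CREDENTIALS to be set


def translate_sign(english_text, target="es"):
    """Translate the recognized English gloss into the target language."""
    result = client.translate(english_text, target_language=target)
    return result["translatedText"]


print(translate_sign("hello", target="fr"))  # e.g. "bonjour"
```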
How We Built It
Our Stack
- MediaPipe for hand and body tracking
- An LSTM recurrent neural network for sign classification
- Google's Translate API for multilingual output
What's Next for SignChat
- A larger dictionary of words and phrases
- A more extensively trained model built on more data points
- A more intuitive user application