Inspiration

"Giving a voice to the voiceless."

That quote is the premise of our project. Coming from immigrant families, we are no strangers to the difficulties that language barriers pose. After seeing how few options deaf users have for participating in video calls on platforms like Zoom and FaceTime, we set out to build one ourselves. Through our project, we hope to provide accessibility that was not there before.

What It Does

Sign Chat is a machine learning tool that detects a user's gestures through the camera, interprets words and phrases in American Sign Language (ASL), and converts them to speech. The model is trained to recognize ASL and can produce speech output in 50+ languages. Sign Chat plugs into any video communication application by providing its speech output directly through the microphone.

Challenges we ran into

  • A lack of a cohesive dataset: other than the WLASL dataset, there is not much ASL video data out there. We used WLASL a bit for testing but mostly recorded our own data to train the model (see the recording sketch after this list)
  • Training the model and tuning its parameters to detect signs accurately
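
Because no existing dataset covered what we needed, most of our training data came from recording ourselves. As a minimal sketch of what that kind of recording script could look like (the sign label, clip count, clip length, and output path here are placeholders, not our exact setup):

```python
import os
import cv2  # OpenCV webcam capture

LABEL = "hello"          # hypothetical sign label to record
CLIPS = 20               # hypothetical number of clips per sign
SECONDS_PER_CLIP = 2
FPS = 30

out_dir = os.path.join("data", LABEL)
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*"mp4v")

for clip in range(CLIPS):
    # Record one short clip of the signer performing LABEL.
    writer = cv2.VideoWriter(os.path.join(out_dir, f"{clip}.mp4"),
                             fourcc, FPS, (width, height))
    for _ in range(SECONDS_PER_CLIP * FPS):
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(frame)
    writer.release()

cap.release()
```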

Accomplishments that We're Proud of

  1. Building a functional ML model that successfully interprets sign language
  2. Allowing users to translate signs into 50+ languages
  3. Building our code with extensibility in mind: we provide code for users to easily train their own models, along with a script to scrape data

What We Learned

  1. Utilizing MediaPipe to track user movement and store it as data points (sketched below)
  2. Creating an LSTM (long short-term memory) recurrent neural network to decipher signs (sketched below)
  3. Using Google's Translate API to translate signs into various languages (sketched below)
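
To give a concrete picture of the MediaPipe step, here is a minimal sketch of pulling pose and hand landmarks from a webcam frame and flattening them into one feature vector per frame. MediaPipe Holistic and the helper name are our illustration of the approach, not necessarily the exact code we shipped:

```python
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    """Flatten pose and hand landmarks into one feature vector per frame."""
    pose = (np.array([[p.x, p.y, p.z] for p in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 3))
    left = (np.array([[p.x, p.y, p.z] for p in results.left_hand_landmarks.landmark]).flatten()
            if results.left_hand_landmarks else np.zeros(21 * 3))
    right = (np.array([[p.x, p.y, p.z] for p in results.right_hand_landmarks.landmark]).flatten()
             if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, left, right])   # 225 values per frame

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    ok, frame = cap.read()
    if ok:
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        keypoints = extract_keypoints(results)   # one data point in the sign sequence
cap.release()
```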
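
For the LSTM itself, below is a minimal sketch of the kind of network we mean. We are assuming TensorFlow/Keras here, and the layer sizes, sequence length, and three-sign vocabulary are placeholders:

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQUENCE_LENGTH = 30   # frames per sign clip (placeholder)
NUM_FEATURES = 225     # flattened pose + hand keypoints per frame
SIGNS = ["hello", "thanks", "help"]  # placeholder vocabulary

# Stacked LSTM that maps a sequence of keypoint vectors to one sign label.
model = Sequential([
    Input(shape=(SEQUENCE_LENGTH, NUM_FEATURES)),
    LSTM(64, return_sequences=True),
    LSTM(128),
    Dense(64, activation="relu"),
    Dense(len(SIGNS), activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# X: (clips, SEQUENCE_LENGTH, NUM_FEATURES) keypoint sequences, y: one-hot labels.
X = np.zeros((10, SEQUENCE_LENGTH, NUM_FEATURES), dtype=np.float32)  # placeholder data
y = np.eye(len(SIGNS))[np.random.randint(0, len(SIGNS), size=10)]
model.fit(X, y, epochs=5)
```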
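
And for translation, a minimal sketch of calling Google's Cloud Translation API from Python. This assumes the google-cloud-translate client library and a configured Google Cloud project; the detected phrase and target language are placeholders:

```python
from google.cloud import translate_v2 as translate  # pip install google-cloud-translate

# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service account
# for a project with the Cloud Translation API enabled.
client = translate.Client()

detected_sign = "thank you"   # phrase produced by the sign classifier (placeholder)
result = client.translate(detected_sign, target_language="es")
print(result["translatedText"])  # e.g. "gracias"
```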

How We Built It

Our Stack

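Our stack ties together MediaPipe for gesture tracking, the LSTM classifier for recognizing signs, and the Google Translate API for multi-language output. The sketch below shows one way that loop could connect end to end; the model file, sign vocabulary, confidence threshold, target language, and the pyttsx3 text-to-speech engine are all assumptions, and routing the audio into a virtual microphone for Zoom or FaceTime is OS-specific and omitted here:

```python
import collections
import cv2
import numpy as np
import mediapipe as mp
import pyttsx3                                   # offline TTS (assumption)
from tensorflow.keras.models import load_model
from google.cloud import translate_v2 as translate

SIGNS = ["hello", "thanks", "help"]              # must match training (placeholder)
SEQ_LEN = 30                                     # frames per prediction window

model = load_model("sign_chat.h5")               # hypothetical trained model file
translator = translate.Client()                  # needs Google Cloud credentials
tts = pyttsx3.init()

def keypoints(res):
    """Flatten pose + hand landmarks into one vector, zero-filled when missing."""
    def flat(lms, n):
        return (np.array([[p.x, p.y, p.z] for p in lms.landmark]).flatten()
                if lms else np.zeros(n * 3))
    return np.concatenate([flat(res.pose_landmarks, 33),
                           flat(res.left_hand_landmarks, 21),
                           flat(res.right_hand_landmarks, 21)])

window = collections.deque(maxlen=SEQ_LEN)       # sliding window of keypoint frames
cap = cv2.VideoCapture(0)
with mp.solutions.holistic.Holistic() as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        res = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        window.append(keypoints(res))
        if len(window) == SEQ_LEN:
            probs = model.predict(np.expand_dims(np.array(window), 0), verbose=0)[0]
            if probs.max() > 0.8:                # only speak confident predictions
                sign = SIGNS[int(probs.argmax())]
                text = translator.translate(sign, target_language="es")["translatedText"]
                tts.say(text)
                tts.runAndWait()
                window.clear()                   # avoid repeating the same sign
cap.release()
```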

What's Next for SignChat

  1. A larger dictionary of words and phrases
  2. A more extensively trained model built on more data points
  3. A more intuitive user application

Built With

MediaPipe, an LSTM recurrent neural network, and the Google Translate API