Inspiration
The motivation for our ASL Model project is to provide a more accessible and efficient means of communication for individuals who are deaf or hard of hearing. American Sign Language (ASL) is a complex language that uses hand gestures and facial expressions to convey meaning. However, not everyone is proficient in ASL, which can create communication barriers for those who rely on it. By developing a model that can recognize and translate signed letters into text, we hope to bridge this gap and enable smoother communication for all. Additionally, this project can have potential applications in fields such as education and healthcare, where ASL interpretation is essential but not always readily available.
What it does
Our model takes live video and uses TensorFlow to extract hand-gesture data from each frame. That data is fed into a classifier we trained on roughly 25,000 images, which predicts the signed letter with up to 75% confidence.
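The final step of a pipeline like this, turning raw model outputs into a letter plus a confidence score, can be sketched as below. This is an illustrative example, not the project's actual code: the 26-class A-Z ordering, the `decode_prediction` helper, and the confidence threshold are all assumptions.

```python
import numpy as np

# Hypothetical output classes: one per letter A-Z (26 classes).
# The real model's class ordering may differ.
LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def decode_prediction(logits, threshold=0.5):
    """Convert raw classifier logits into (letter, confidence).

    Returns (None, confidence) when the model is not confident
    enough to report a letter at all.
    """
    logits = np.asarray(logits, dtype=np.float64)
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    idx = int(probs.argmax())
    confidence = float(probs[idx])
    if confidence < threshold:
        return None, confidence
    return LETTERS[idx], confidence

# Example: a logit vector that strongly favors the first class ("A").
letter, conf = decode_prediction([5.0] + [0.0] * 25)
```

Thresholding like this lets the front end show nothing rather than a wrong guess when the hand pose is ambiguous, which matters at the ~75% confidence levels described above.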
Challenges we ran into
We started with no prior knowledge of TensorFlow or React.js, so every part of the project was a challenge.
Accomplishments that we're proud of
We built and trained our own working neural network, along with a fully functional front end to display its predictions.
What we learned
TensorFlow and React.js
What's next for ASLModel
A more accurate model