Inspiration
Around 5% of the world's population is deaf, hard of hearing, or unable to speak. From this perspective, deaf and mute people cannot enjoy everyday communication the way the rest of us do, so we thought of Hand Speaks as a way to give those people an expressive voice.
What it does
Solution:
Hand Speaks is a collection of features, each requiring different software: a feature that helps deaf and mute people chat using their sign language, and a feature that converts video content into a signing 3D avatar.
How we built it
The Hand Speaks solution is built around a 3D avatar. A deep learning model is trained on the sign language dataset we are designing and translates text into the avatar's 3D animations, so the model learns to generalize and can sign words or sentences that are new to the avatar. A rough sketch of this text-to-pose idea is shown below.
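For readers curious how such a pipeline might look in code, here is a minimal, hypothetical sketch (not our actual model): it assumes a small word vocabulary and a fixed number of tracked hand joints, and maps a tokenized sentence to a sequence of 3D keyframes that a rigged avatar could play back.

```python
# Illustrative sketch only: a tiny sequence model that maps text tokens to
# 3D keyframes that could drive a signing avatar. The vocabulary size, joint
# count, and frame count are assumptions, not the actual Hand Speaks setup.
import torch
import torch.nn as nn

VOCAB_SIZE = 1000     # assumed size of the text vocabulary
NUM_JOINTS = 21       # assumed number of tracked hand joints
FRAMES_PER_SIGN = 30  # assumed animation length per sentence

class TextToSignModel(nn.Module):
    def __init__(self, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Decoder head: predict x, y, z for every joint in every output frame.
        self.decoder = nn.Linear(hidden_dim, FRAMES_PER_SIGN * NUM_JOINTS * 3)

    def forward(self, token_ids):
        # token_ids: (batch, sentence_length) integer word indices
        embedded = self.embed(token_ids)
        _, hidden = self.encoder(embedded)       # hidden: (1, batch, hidden_dim)
        poses = self.decoder(hidden.squeeze(0))  # (batch, FRAMES * JOINTS * 3)
        # Reshape into keyframes an avatar rig could consume.
        return poses.view(-1, FRAMES_PER_SIGN, NUM_JOINTS, 3)

if __name__ == "__main__":
    model = TextToSignModel()
    sentence = torch.randint(0, VOCAB_SIZE, (1, 5))  # a dummy 5-word sentence
    keyframes = model(sentence)
    print(keyframes.shape)  # torch.Size([1, 30, 21, 3])
```

In a real system the decoder would be trained against motion-capture or keypoint data extracted from sign language videos, and the predicted keyframes would be retargeted onto the avatar's skeleton.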
Challenges we ran into
Collecting data for our sign language dataset was difficult.
What we learned
Teamwork, and new techniques in the deep learning field.
What's next for HandSpeaks
1- "Text-to-Sign" (T2S) and "Sign-to-Text" (S2T) for improved integration with hearing- and speech-impaired people
2- Integration with the best apps for visually impaired people, for seamless communication between impaired and non-impaired people
Built With
- deeplearning
- python