Inspiration
We believe that technology brings people together to create a 'global village'. We found that there was a disconnect between hearing and non-hearing people in this technological sphere; there was no easy way (save by looking up individual phrases on YouTube!) to translate from spoken English to ASL. We wanted to fix this!
What it does
- The program translates spoken words into ASL by listening to a speaker and displaying a video of the corresponding signs. For example, if a user says "Hello!" into the program, it displays a video of a person signing "Hello" in ASL.
How we built it
Through a GUI built with tkinter, the user speaks and the program:
- Uses the speech_recognition library to capture and transcribe spoken words
- Maps each word or letter to a corresponding ASL video clip
- Uses ffmpeg to stitch the clips together quickly
- Uses OpenCV to play the resulting video
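The pipeline above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the clip paths, the `ASL_CLIPS` dictionary, and the function names are assumptions, and the mapping here handles single words only (real ASL translation involves grammar, not word-for-word substitution).

```python
# Rough sketch of the Pantomime pipeline: capture speech, map words to
# pre-recorded ASL clips, stitch them with ffmpeg, play with OpenCV.
# All names and file paths below are illustrative assumptions.
import subprocess

# Hypothetical word -> clip mapping; the real project maps to its own library.
ASL_CLIPS = {
    "hello": "clips/hello.mp4",
    "world": "clips/world.mp4",
}

def words_to_clips(text):
    """Map each recognized word to an ASL clip, skipping unknown words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return [ASL_CLIPS[w] for w in words if w in ASL_CLIPS]

def stitch(clips, out="signed.mp4"):
    """Concatenate clips with ffmpeg's concat demuxer (no re-encode, so fast)."""
    with open("list.txt", "w") as f:
        f.writelines(f"file '{c}'\n" for c in clips)
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", "list.txt", "-c", "copy", out], check=True)
    return out

def listen():
    """Capture one utterance from the microphone and return a transcript."""
    import speech_recognition as sr  # requires PyAudio for the microphone
    r = sr.Recognizer()
    with sr.Microphone() as source:
        audio = r.listen(source)
    return r.recognize_google(audio)

def play(path):
    """Play the stitched video frame by frame with OpenCV."""
    import cv2
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("Pantomime", frame)
        if cv2.waitKey(30) & 0xFF == ord("q"):  # ~30 fps; 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```

Using the concat demuxer with `-c copy` avoids re-encoding, which is what makes the stitching step fast enough for interactive use, provided all clips share the same codec and resolution.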
Challenges we ran into
- Overcoming package installation troubles
- Learning new-to-us Python libraries (tkinter, PyAudio)
- Troubleshooting in complex scenarios
Accomplishments that we're proud of
- Quick stitching and output of videos
- That it works!
- The accuracy of the speech recognition
What we learned
- How to ask meaningful questions to mentors / Google
- How to use many libraries via self-teaching
- How to manage our time and our expectations
What's next for Pantomime
- Support for a wider range of words by expanding our video library
- More than one type of input (text, audio, etc.)
- More than one type of output (various regional dialects of sign language)
- A wider range of user accessibility functions (slow playback, replay)