Inspiration
Our project was inspired by our teammate, Tiffany Zhang, who is proficient in sign language. Her ability to sign quickly and fluently inspired us to create a program that makes learning and signing ASL easier for a general audience.
What it does
Our program uses machine learning, through object classification and image recognition, to analyze hand signs and compare them against a custom dataset that we built and trained ourselves. Every 1.5 seconds, the camera analyzes the sign the hands are making and outputs the detected letter as keyboard input, so you can use the program for general ASL practice. After signing out a word, you can add a space and the program reads the word aloud. After finishing a sentence, you can add two spaces and the program reads the entire sentence out loud.
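The word/sentence logic above (one space speaks the word, two spaces in a row speak the sentence) can be sketched as a small buffer, independent of the camera and classifier. This is an illustrative sketch, not our actual implementation: `SignBuffer` and its `speak` callback are hypothetical names, and the per-frame classifier that produces each symbol is assumed to exist elsewhere.

```python
# Hypothetical sketch of the buffering rules described above.
# Each classified symbol is pushed in every 1.5 s; a space finishes the
# current word, and a second consecutive space finishes the sentence.

class SignBuffer:
    def __init__(self, speak=print):
        self.speak = speak            # TTS callback (e.g. a pyttsx3 wrapper)
        self.sentence = []            # completed words so far
        self.word = []                # letters of the word in progress
        self.last_was_space = False

    def push(self, symbol):
        if symbol == " ":
            if self.last_was_space:
                # second space in a row: speak the whole sentence
                text = " ".join(self.sentence)
                if text:
                    self.speak(text)
                self.sentence = []
            else:
                # single space: speak the word just finished
                word = "".join(self.word)
                if word:
                    self.speak(word)
                    self.sentence.append(word)
                self.word = []
            self.last_was_space = True
        else:
            self.word.append(symbol)
            self.last_was_space = False
```

Signing H, I, space, M, O, M, space, space would speak "HI", then "MOM", then "HI MOM".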
How we built it
We built it using Python, OpenCV, scikit-learn, and TensorFlow, along with a lot of hard work and very little sleep. We also individually photographed roughly 30,000 photos to build our dataset and train our model.
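With that many self-captured photos, the dataset has to be split so every letter appears in both the training and validation sets. A minimal sketch of such a per-label split, assuming (purely for illustration) that photos are named `<label>_<index>.jpg` such as `A_0001.jpg` — our actual file layout may differ:

```python
# Hypothetical per-label train/validation split for the hand-sign photos.
# Grouping by label before splitting guarantees each letter is represented
# in both sets, even if some letters have fewer photos than others.
import random
from collections import defaultdict

def split_dataset(filenames, val_fraction=0.2, seed=42):
    by_label = defaultdict(list)
    for name in filenames:
        label = name.split("_", 1)[0]   # "A_0001.jpg" -> "A"
        by_label[label].append(name)

    rng = random.Random(seed)           # fixed seed: reproducible split
    train, val = [], []
    for label in sorted(by_label):
        files = sorted(by_label[label])
        rng.shuffle(files)
        n_val = max(1, int(len(files) * val_fraction))
        val.extend(files[:n_val])
        train.extend(files[n_val:])
    return train, val
```

The held-out `val` list is what lets us measure the model's letter accuracy on photos it never saw during training.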
Challenges we ran into
Debugging the machine-learning model, building and processing our dataset, and collecting the photos for it all proved challenging during this project. Still, we persevered and overcame these obstacles.
Accomplishments that we're proud of
We are proud that the model accurately recognizes which letter our hands are signing, and that it can assemble those letters into coherent sentences.
What we learned
We learned how to create a machine-learning algorithm that detects hand gestures.
What's next for Sign 2 Speech
Add basic action gesture recognition.
https://drive.google.com/file/d/1tfp-sw5ARHymCv1PNRDM8A_lMqAyx-Od/view?usp=drivesdk
Built With
- cv2
- python