This program allows those who use ASL to communicate with those who don't know ASL. It recognizes individual letters and translates them in real time.
We wanted to make communication between people who use ASL and people who don't understand ASL easier.
The program captures real-time images and, using Google's machine-learning software, recognizes the shape of the hand in front of the camera. The GUI then displays the corresponding character, translating each letter in real time.
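The core loop described above can be sketched in Python. This is a minimal, hypothetical illustration: the names `translate_frames`, `classify`, and `LETTERS` are stand-ins of ours, not from the actual project, and the real classifier was Google's machine-learning software rather than the stub shown here.

```python
LETTERS = ["A", "B", "C", "D", "E", "F", "G"]  # stand-in for the 7 supported signs

def translate_frames(frames, classify):
    """Classify each captured frame's hand shape and collect the letters shown.

    `classify` is any function that takes a frame and returns one confidence
    score per letter (as the trained model would).
    """
    letters = []
    for frame in frames:
        scores = classify(frame)          # per-letter confidence scores
        best = scores.index(max(scores))  # index of the most confident class
        letters.append(LETTERS[best])
    return letters
```

In the real program the frames would come from a live camera feed and each result would be pushed to the GUI instead of collected in a list.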
We took multiple images of ASL signs and ran them through Google's machine-learning software, then converted the results into a variable representing which letter is being shown. The GUI then displays these results to the user to make communication easier.
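One practical detail a real-time letter translator has to handle is that a held sign produces many identical predictions in a row. A hypothetical debouncing step (our assumption, not something the project description confirms) would collapse consecutive duplicates before the GUI displays the message:

```python
def accumulate(predictions):
    """Collapse consecutive duplicate letter predictions into a message string.

    A sign held for several frames yields repeated predictions; only a change
    in the predicted letter appends a new character.
    """
    message = []
    last = None
    for letter in predictions:
        if letter != last:
            message.append(letter)
        last = letter
    return "".join(message)
```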
The biggest challenge we faced was implementing the machine-learning code into our GUI. Because the programming language changed, the transfer became too difficult to accomplish in the time allotted. Another challenge was the amount of data needed to determine which ASL sign was being shown: because each sign required so much training data, we could only fully support 7 letters.
We're proud that we developed a way for ASL to be translated in real time, which could be a breakthrough in communication for deaf people. We are also proud of developing a program that takes real-time images and understands what those signs mean.
We learned the difficulties of using image-based software to solve a problem, and the complexity of porting code between programming languages. We also learned some ASL.
We hope to implement a fully functioning GUI so the program can be used properly, and to translate every letter of the alphabet instead of only 7.