Inspiration

Our team had always been interested in machine learning and AI models, but did not have as much experience as we desired. While discussing how we could apply this interest to a real-world problem, we noticed that there are not many fully interactive platforms for learning American Sign Language, as there are for many other languages. With accessibility in mind, we decided to begin with the heart of this type of application: the ML model that determines whether users are signing correctly.

What it does

The model we built over the course of the weekend interprets motion through the webcam and outputs which trained phrase is being signed.

How we built it

Working in a Jupyter notebook, we used TensorFlow and its Keras API to build a trainable model. Then, using the MediaPipe library, we mapped out keypoints of the human body and established the landmarks used as tracking data. With OpenCV, we captured frames of motion from the device webcam to gather training data for our model.
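The landmark step above can be sketched roughly as follows. This is an illustration, not our exact notebook code: the function name `extract_keypoints` is our own, and the landmark counts (33 pose points with visibility, 21 points per hand) follow MediaPipe Holistic's documented layout. Missing parts are zero-filled so every frame produces a fixed-length feature vector the model can train on.

```python
import numpy as np

# Landmark counts per MediaPipe Holistic: 33 pose points carrying
# (x, y, z, visibility) and 21 points per hand carrying (x, y, z).
POSE_DIM = 33 * 4
HAND_DIM = 21 * 3

def extract_keypoints(pose, left_hand, right_hand):
    """Flatten one frame's landmarks into a fixed-length feature vector.

    Each argument is an (N, D) array of landmark coordinates, or None
    when the detector did not find that body part in the frame; missing
    parts are zero-filled so the output length never changes.
    """
    pose_vec = np.asarray(pose).flatten() if pose is not None else np.zeros(POSE_DIM)
    lh_vec = np.asarray(left_hand).flatten() if left_hand is not None else np.zeros(HAND_DIM)
    rh_vec = np.asarray(right_hand).flatten() if right_hand is not None else np.zeros(HAND_DIM)
    return np.concatenate([pose_vec, lh_vec, rh_vec])
```

Stacking these vectors over consecutive webcam frames gives the motion sequences the model is trained on.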

Challenges we ran into

We found that training the model could prove quite challenging, since similar signs produce similar landmark data. While we have not fully resolved this issue, we mitigated it for the elementary signs included in our project by gathering more training data and raising the confidence threshold a prediction must reach before being displayed.
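The thresholding described above can be sketched like this (the 0.7 cutoff and the label list are illustrative values, not our exact configuration):

```python
import numpy as np

def predict_with_threshold(probs, labels, threshold=0.7):
    """Return the predicted label only when the model is confident enough.

    probs: the model's softmax output for one sequence, shape (len(labels),).
    Returns None when the best score falls below the threshold, so
    near-ties between look-alike signs are suppressed rather than shown.
    """
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None
    return labels[best]
```

Raising the threshold trades responsiveness for accuracy: borderline frames between similar signs simply display nothing instead of a likely-wrong guess.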

Accomplishments that we're proud of

We are proud of how we were able to adapt our model to minimize loss and successfully recognize key ASL signs.

What we learned

We learned how flexible basic machine learning principles can be in real-world applications. While we understood basic statistical analyses, we had not previously seen how they could be extended to real-time models like the one we developed this weekend. We also learned more specific functions of TensorFlow, OpenCV, and the other libraries we used.

What's next for SignScore

This model is only meant to be the centerpiece for a greater project intended to help non-ASL speakers learn how to sign. SignScore will be developed into a web application that acts similarly to services like Duolingo, containing lessons and specialized questions for users to learn about ASL. The model created this weekend will be applied to allow users to use their webcams in order to test out their signing abilities throughout lessons.

Built With

TensorFlow, Keras, MediaPipe, OpenCV, Jupyter Notebook


Updates


In order to run the script successfully, there are two steps missing from the GitHub README. Prior to "step 8", you must run the cell that initializes the `actions1` and `actions2` variables and the cell that concatenates them together. Alternatively, you may run a new cell containing `actions = np.array(['hello', 'goodbye', 'thank you', 'how', 'are you', 'take care'])`. I apologize for any confusion.
