What it does

Signify converts ASL (American Sign Language) into English text. The goal is to make it easier for deaf people to communicate with people who don't know ASL, which is most of us.

How we built it

We first trained our model on images that we captured ourselves, then used the model to detect common ASL phrases. We then used the OpenAI API to turn the detected phrases into English sentences, since ASL gloss does not follow English grammar. The frontend is built with React and Vite, and the app is hosted on AWS.
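A minimal sketch of the gloss-to-sentence step, assuming the model emits ASL gloss tokens such as `["STORE", "ME", "GO"]`. The function names, prompt wording, and model choice here are our own illustrative assumptions, not the exact code we shipped; the API call mirrors the standard OpenAI Chat Completions interface.

```python
def build_prompt(glosses: list[str]) -> str:
    """Build an instruction asking the LLM to rewrite ASL gloss as English."""
    return (
        "ASL gloss does not follow English grammar. "
        "Rewrite the following gloss tokens as one natural English sentence: "
        + " ".join(glosses)
    )


def gloss_to_english(glosses: list[str]) -> str:
    """Send the prompt to the Chat Completions API (needs OPENAI_API_KEY set)."""
    # Imported here so the prompt builder works without the SDK installed.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice for illustration
        messages=[{"role": "user", "content": build_prompt(glosses)}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(build_prompt(["STORE", "ME", "GO"]))
```

Keeping the prompt construction separate from the API call makes it easy to test the gloss handling without network access.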

Challenges we ran into

We could not find the datasets we needed online. Most public datasets included only one picture per phrase, which was not enough to train our model. For this reason, we had to capture all the images ourselves, from different angles. This was time-consuming but necessary. Another challenge was cleaning the array of detected words so that we wouldn't get duplicates and the final sentence would make sense.
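One way to handle the duplicate problem is to collapse consecutive repeats: frame-by-frame detection emits the same sign many times in a row, but a sign that reappears later after other signs should be kept. A sketch, with the function name our own:

```python
def collapse_duplicates(detections: list[str]) -> list[str]:
    """Drop consecutive duplicate detections while keeping word order."""
    cleaned: list[str] = []
    for word in detections:
        # Only append when the word differs from the previous one kept.
        if not cleaned or cleaned[-1] != word:
            cleaned.append(word)
    return cleaned


# Frame-level output with repeats:
# collapse_duplicates(["HELLO", "HELLO", "ME", "ME", "ME", "HUNGRY"])
# -> ["HELLO", "ME", "HUNGRY"]
```

This keeps intentional repetition (the same sign shown twice with other signs in between) while removing the per-frame noise before the list is handed to the sentence-generation step.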

Accomplishments that we're proud of

We're proud of what we were able to achieve in such a short amount of time. It was a challenge to do everything from scratch, but in the end we built something that has the potential to become a real application.

What we learned

We learned a lot about both coding and how ASL works. Before this hackathon, we did not know how deaf people communicate with each other, and we had never trained a model on images.

What's next for Signify

To make Signify more useful, we would need to add many more phrases. The model is currently limited because of the little time we had to train it, but with more time we could complete our dataset and make the application more powerful. We could also make Signify more convenient for users by translating the detected ASL phrases into languages other than English and by generating speech.
