Inspiration

We thought there was a lack of tools for translating sign language into English, and that computer vision and machine learning could help solve this problem. We believe there needs to be a way to bridge the communication gap between people who are hard of hearing and everyone else, and we think this tool is a step toward a solution.

What it does

It's an app that recognizes sign language letters from the camera feed and translates them into letters of the alphabet as someone signs.

How we built it

The app is built in Flutter, a framework for mobile, web, and desktop apps. The sign language recognition is done by a neural network.

We made this network by creating our own training data (recording ourselves signing letters into the camera), and then retraining a MobileNet to recognize those signs. All of the neural network training is done with the TensorFlow library.
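The retraining step is standard transfer learning: freeze a pretrained MobileNet as a feature extractor and train a small classification head on top. A minimal sketch of that idea in TensorFlow/Keras, not our exact training script (layer sizes, epochs, and the dataset pipeline are illustrative; in practice you would pass weights="imagenet" to start from pretrained features):

```python
import numpy as np
import tensorflow as tf

# Pretrained backbone; weights=None keeps this sketch offline-friendly,
# but for real transfer learning you would use weights="imagenet".
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base.trainable = False  # freeze the feature extractor

model = tf.keras.Sequential([
    # Map raw [0, 255] pixels to [-1, 1], which MobileNetV2 expects.
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(26, activation="softmax"),  # one output per letter
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would look something like this, given a directory of
# labeled images of our own signed letters:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "signs/train", image_size=(224, 224), batch_size=32)
# model.fit(train_ds, epochs=5)

preds = model(np.zeros((1, 224, 224, 3), dtype="float32"))
```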

Finally, we exported our trained TensorFlow model into our Flutter app and fed it frames from the camera's video feed.
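Getting a TensorFlow model into a mobile app typically means converting it to TensorFlow Lite and bundling the file as an app asset. A sketch of that conversion (the stand-in model and the file name are illustrative, not our actual export):

```python
import tensorflow as tf

# Stand-in for the retrained MobileNet classifier from the previous step.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1280,)),
    tf.keras.layers.Dense(26, activation="softmax"),
])

# Convert the Keras model to a TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# Write it out so the Flutter app can load it as an asset.
with open("sign_model.tflite", "wb") as f:
    f.write(tflite_bytes)
```

On the Flutter side, a plugin such as tflite_flutter can then load the bundled .tflite file and run inference on each camera frame.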

Challenges we ran into

We ran into a lot of challenges trying to train a neural network to recognize our signs. At first we used a dataset of signs we found online, but we realized it didn't work with our hands. We spent a long time experimenting with different types of neural networks, different datasets, and so on, but ultimately we made our own dataset and retrained the MobileNet object recognition network.

We also considered adding some sort of image filter to make the hands easier to isolate from the background, but we were short on time and ended up not using it.
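For the curious, the kind of filter we considered might have looked like a simple skin-color mask: keep pixels whose colors fall in a rough "skin" range and zero out everything else before classification. A sketch using a classic RGB heuristic (the thresholds are a common rule of thumb, not tuned values from our project):

```python
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Boolean mask of likely-skin pixels in an HxWx3 uint8 RGB image.

    Uses a well-known rough heuristic: skin tends to be red-dominant
    with moderate green and blue. Lighting-sensitive, so it is only a
    crude first pass at separating hands from the background.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20)
            & (r > g) & (r > b)
            & ((r - np.minimum(g, b)) > 15))

# Tiny example: one skin-toned pixel and one blue background pixel.
frame = np.array([[[200, 120, 90], [30, 30, 200]]], dtype=np.uint8)
mask = skin_mask(frame)
```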

Accomplishments that we're proud of

We are proud that we got the Flutter app running, since none of us had used Flutter before. We are also proud that we found a way to get the neural network to recognize signs, even if it still has trouble with some of the letters, because we spent a lot of time making it work.

We are also really proud that we were able to accomplish something that is not very common, and might be able to actually help people.

What we learned

We learned a lot about the details of machine learning and how training works at a very low level. We also learned a lot about image manipulation and separating background from foreground, and while making our demo video we learned a lot about 3D rendering and Blender.

What's next for SignSeer

We might expand this into a full application, but for now it is just a demo.
