Inspiration
We were inspired by a documentary we watched for history class about WWI, where we saw how people deafened by the explosions struggled to communicate. We wanted to help fix this, and thus came the idea for UniSign. After a bit of research we found that there were no translators that could translate sign language directly into another language. So, as programmers, we built one.
What it does
First, an image is captured on the frontend, encoded, and sent to the Python machine learning backend, which decodes it and feeds it into a custom *RandomForestClassifier* model. The model predicts the letter in the hand sign, and that letter is sent back to the front end and displayed. When the user selects a language from the dropdown, a useState hook is triggered, sending a request to a second Next.js backend that translates the word into the desired language using the Google Translate API.
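The decode-and-predict step on the Python side can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the image size, the helper names, and the stand-in model trained on random data are all assumptions made so the sketch runs end to end.

```python
import base64
import io

import numpy as np
from PIL import Image
from sklearn.ensemble import RandomForestClassifier

IMG_SIZE = 32  # hypothetical input resolution; the real app may differ


def decode_image(b64_string: str) -> np.ndarray:
    """Decode a base64-encoded frame into a flat grayscale feature vector."""
    img = Image.open(io.BytesIO(base64.b64decode(b64_string)))
    img = img.convert("L").resize((IMG_SIZE, IMG_SIZE))
    return np.asarray(img, dtype=np.float32).ravel() / 255.0


# Stand-in model fit on random data so the sketch is runnable;
# the real project trains on labelled hand-sign images.
rng = np.random.default_rng(0)
X_train = rng.random((52, IMG_SIZE * IMG_SIZE))
y_train = np.repeat(list("AB"), 26)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)


def predict_letter(b64_string: str) -> str:
    """Predict the signed letter from one encoded camera frame."""
    features = decode_image(b64_string).reshape(1, -1)
    return model.predict(features)[0]
```

The frontend would then display the returned letter, and pass it on to the translation backend when a target language is chosen.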
How we built it
We built it in three parts: scikit-learn, NumPy, pandas, and Pillow for image processing; a RandomForestClassifier for the machine learning model; and Next.js, React, Tailwind, react-camera, and the Google Translate API for the JavaScript front end and backend.
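Putting the scikit-learn, NumPy, and pandas pieces together, the training side might look something like this. The dataset layout (a "label" column plus flattened pixel columns) and the toy random data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical layout: one row per image, a "label" column holding the
# letter, and the remaining columns holding flattened pixel values.
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.random((100, 16)))  # toy stand-in for the real dataset
df["label"] = np.repeat(list("ABCD"), 25)

X = df.drop(columns="label").to_numpy()
y = df["label"].to_numpy()
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Fit the classifier and report held-out accuracy.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

On random data the accuracy is near chance; the real model, trained on labelled hand-sign images, is what reaches the score reported below.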
Challenges we ran into
Training the model was a challenge, since we had to tweak all the parameters, which was a time-consuming process. The hardest challenge was creating an API for the app to talk to the model, since we didn't have experience with this. We also ran into multiple issues encoding and decoding files over HTTP because the CORS policies did not match up.
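The parameter tweaking we did by hand could also be automated with a grid search; this is a sketch of that approach rather than what we actually ran, and the toy data and parameter values here are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Toy data standing in for the hand-sign feature matrix.
rng = np.random.default_rng(2)
X = rng.random((80, 20))
y = np.repeat(list("AB"), 40)

# Cross-validated search over the kinds of parameters we tuned manually.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [None, 10]},
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_)  # the winning combination on this data
```

Each grid point is scored by cross-validation, so the "best" parameters are chosen on held-out folds instead of by eyeballing one train/test split.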
Accomplishments that we're proud of
We were able to make a functional project that can actually help people with a disability. It would have been even better if we could have deployed it, but we ran into some challenges there. We are also proud that we trained our own model instead of using a prebuilt one, and even got a good accuracy score on it: 92%. And lastly, we were able to do something unique that has not been done before: there are many text-to-sign-language translators but very few sign-language-to-text ones, and none that go beyond English. And we can assure you there are a lot of people who speak sign language but not English.
What we learned
We learned a lot, most of it related to training an AI model: where to find the data, how to sort and clean it up, which parameters to adjust to get the best results, and lastly, how to encode and decode images.
What's next for UniSign
There are a few things we would have liked to add but ran out of time for. First is real-time translation, using an ML model to detect changes between frames; second is support for more languages, as we currently only have the seven most used; and lastly, maybe a better UI.

