Inspiration
I love working with machine learning and computer vision, and a sign language interpreter seemed like the perfect way to combine the two fields and make a real impact!
What it does
Signify is a project that uses your camera and machine learning to read sign language!
How we built it
Signify uses the MediaPipe library to mark landmarks on your hand; those landmark coordinates are then fed into a TensorFlow machine learning model trained to recognize specific sign language symbols!
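A minimal sketch of that pipeline, assuming a webcam feed via OpenCV: MediaPipe Hands extracts 21 (x, y, z) landmarks per hand, which are flattened into a feature vector for a small Keras classifier. The model architecture and the NUM_SIGNS label count are illustrative assumptions; the write-up only says hand landmarks feed a TensorFlow model.

```python
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

mp_hands = mp.solutions.hands

# Hypothetical classifier: 21 landmarks x 3 coords -> one of NUM_SIGNS symbols.
# In practice this model would first be trained on collected landmark data.
NUM_SIGNS = 5  # assumed label count, not from the original write-up
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(63,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),
])

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            # Flatten the 21 (x, y, z) landmarks into one feature vector.
            features = np.array([[p.x, p.y, p.z] for p in lm]).flatten()
            probs = model.predict(features[np.newaxis, :], verbose=0)
            print("predicted sign index:", int(np.argmax(probs)))
cap.release()
```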
Challenges we ran into
Figuring out how to track motion over time instead of just static symbols
Getting everything set up just right so the pipeline ran smoothly, read data correctly, and didn't crash (it crashed a lot during development)
Devising a way to create training data for the machine learning model
Tracking not just one hand but two (see the sketch after this list)
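One plausible way to tackle the last three challenges together is to record fixed-length sequences of two-hand landmarks as labeled training samples. This sketch is an assumption about how such data creation could work: the CSV storage, the "hello" label, and the 30-frame sequence length are all hypothetical, and MediaPipe's max_num_hands=2 handles the two-hand tracking.

```python
import csv
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

LABEL = "hello"    # hypothetical sign label for this recording session
SEQUENCE_LEN = 30  # assumed frames per motion sample, not from the write-up

cap = cv2.VideoCapture(0)
frames = []
with mp_hands.Hands(max_num_hands=2,  # track both hands, not just one
                    min_detection_confidence=0.7) as hands:
    while len(frames) < SEQUENCE_LEN:
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        # Fixed-size row: 2 hands x 21 landmarks x (x, y, z); zero-pad missing
        # hands so every frame has the same feature length.
        row = [0.0] * (2 * 21 * 3)
        if results.multi_hand_landmarks:
            for h, hand in enumerate(results.multi_hand_landmarks[:2]):
                for i, p in enumerate(hand.landmark):
                    base = h * 63 + i * 3
                    row[base:base + 3] = [p.x, p.y, p.z]
        frames.append(row)
cap.release()

# Append the whole motion sample as one labeled block of rows, so a
# sequence model can learn the motion rather than a single static pose.
with open(f"{LABEL}.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for row in frames:
        writer.writerow([LABEL] + row)
```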