Inspiration

  • I love working with machine learning and computer vision, so a sign language interpreter seemed like the perfect way to combine the two and make a real impact!

What it does

  • Signify is a project that uses your camera and machine learning to read sign language!

How we built it

  • Signify uses the MediaPipe library to mark landmarks on your hand; those landmarks are then fed into a TensorFlow machine learning model to learn and predict specific sign language symbols!
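The landmark-to-feature step can be sketched roughly like this (a minimal illustration, not the project's actual code; the function name and zero-padding scheme are assumptions): MediaPipe's hand model emits 21 landmarks per hand, each with x, y, z coordinates, so flattening up to two hands gives a fixed 126-value vector a classifier can consume.

```python
import numpy as np

NUM_LANDMARKS = 21   # MediaPipe hands emit 21 landmarks per hand
COORDS = 3           # each landmark carries x, y, z coordinates
HAND_SIZE = NUM_LANDMARKS * COORDS

def landmarks_to_features(hands):
    """Flatten up to two hands' landmarks into one fixed-size vector.

    `hands` is a list of per-hand arrays shaped (21, 3); a missing
    second hand is zero-padded so the model always sees 126 inputs.
    (Illustrative helper, not from the project source.)
    """
    feats = np.zeros(2 * HAND_SIZE, dtype=np.float32)
    for i, hand in enumerate(hands[:2]):
        flat = np.asarray(hand, dtype=np.float32).reshape(-1)
        feats[i * HAND_SIZE:(i + 1) * HAND_SIZE] = flat
    return feats

# One detected hand: the vector's second half stays zero.
vec = landmarks_to_features([np.random.rand(NUM_LANDMARKS, COORDS)])
print(vec.shape)  # (126,)
```

A vector like this can then go straight into a small Keras classifier via `model.predict`.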

Challenges we ran into

  • Figuring out how to track motion instead of static symbols
  • Everything had to be set up perfectly for it to run smoothly, read data correctly, and not crash (it crashed a lot during development)
  • Figuring out a data creation method for the machine learning algorithm
  • Tracking not only one hand, but two
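One common way to handle motion rather than single static symbols (a hedged sketch; the window size and buffering approach here are assumptions, not necessarily the team's method) is to buffer a sliding window of per-frame landmark features and classify the whole sequence:

```python
from collections import deque
import numpy as np

WINDOW = 30     # frames per motion sample (illustrative choice)
FEATURES = 126  # flattened landmark vector size for two hands

frames = deque(maxlen=WINDOW)

def push_frame(feature_vec):
    """Add one frame's landmark features to the sliding window.

    Returns a (WINDOW, FEATURES) array once enough frames have
    accumulated, else None. A sequence model (e.g. an LSTM) can
    then classify the motion from the stacked window.
    """
    frames.append(np.asarray(feature_vec, dtype=np.float32))
    if len(frames) == WINDOW:
        return np.stack(frames)
    return None

seq = None
for _ in range(WINDOW):  # random stand-ins for real landmark frames
    seq = push_frame(np.random.rand(FEATURES))
print(seq.shape)  # (30, 126)
```

Because the deque has a fixed maximum length, every new frame automatically evicts the oldest one, so predictions can run on every frame once the buffer fills.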

Accomplishments that we're proud of

What we learned

  • How to use machine learning in general, and how to go even further by updating the model live and getting results quickly

What's next for Signify

  • Recognize words more quickly
  • String words into full sentences
  • Teach the model even more words so it can say even more!

Built With

  • mediapipe
  • opencv2
  • python
  • tensorflow