Inspiration

We were inspired by the idea of using computer vision to convert hand gestures into language. Although we originally wanted to tackle sign language, we realized its scope was too large for twenty-four hours, so we decided to focus on hand gestures that trace letters of the Latin alphabet. We were also inspired by the blackboard-themed tables at BostonHacks.

What it does

This is an online blackboard. Using a Microsoft Kinect, the application tracks a person's hand movements. It records the hand's coordinates, renders them as an image, and then converts that image to text using computer vision (OCR). It displays this text and allows the user to save it, along with a variety of other options.
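The coordinate-to-text pipeline can be sketched roughly as follows. This is a minimal sketch, not the project's actual code: `pointsToSegments` is a hypothetical helper, and the drawing and OCR steps (GraphicsMagick and Tesseract in the real project) are only outlined in comments.

```javascript
// Convert a tracked hand path into line segments suitable for drawing.
// Each consecutive pair of coordinates becomes one segment.
function pointsToSegments(points) {
  return points.slice(1).map((p, i) => ({
    x0: points[i].x, y0: points[i].y,
    x1: p.x, y1: p.y,
  }));
}

// Hedged outline of the rest of the pipeline (assumed shape, not runnable as-is):
// const gm = require('gm');              // GraphicsMagick binding
// let img = gm(640, 480, '#ffffff');     // blank canvas
// for (const s of pointsToSegments(track)) {
//   img = img.stroke('#000').drawLine(s.x0, s.y0, s.x1, s.y1);
// }
// img.write('board.png', () => { /* hand board.png to Tesseract for OCR */ });
```

For example, `pointsToSegments([{x: 0, y: 0}, {x: 5, y: 5}])` yields a single segment from the origin to (5, 5).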

How I built it

We used C# to interface with the Kinect; Node.js on the server; JavaScript, HTML, and CSS on the front end; Tesseract for the computer-vision OCR; and GraphicsMagick to render the image from the coordinates.

Challenges I ran into

We originally wanted to do sign language, but this proved difficult to accomplish. We found that the Kinect was not always accurate when drawing letters and would sometimes jump around, and that Tesseract wasn't 100% accurate either. We tried to train Tesseract with machine learning so that its image recognition would improve, and we reduced CPU usage and improved our algorithm so that the Kinect could track hands more reliably.
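One simple way to damp the kind of jitter described above is a moving-average filter over the tracked coordinates. This is a sketch of the general technique, not the exact algorithm the project used:

```javascript
// Moving-average filter to damp hand-position jitter from the Kinect.
// Each output point is the mean of up to the last `win` raw samples,
// which smooths sudden jumps at the cost of a small lag.
function smoothTrack(points, win = 3) {
  return points.map((_, i) => {
    const slice = points.slice(Math.max(0, i - win + 1), i + 1);
    const n = slice.length;
    return {
      x: slice.reduce((s, p) => s + p.x, 0) / n,
      y: slice.reduce((s, p) => s + p.y, 0) / n,
    };
  });
}
```

A larger window smooths more aggressively but makes the drawn stroke trail further behind the hand, so the window size is a trade-off between stability and responsiveness.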

Accomplishments that I'm proud of

We think this is a good project to have built in twenty-four hours, given that we had no prior experience with C#, computer vision, the Kinect, or machine learning.

What I learned

We learned a great deal about working with hardware, computer vision, and machine learning.

What's next for KinectYourBlackboard

We would like to pursue this design further by adding fingertip depth recognition, so that writing becomes quicker and more efficient.
