Inspiration
The inspiration for this project came from one of our team members, who occasionally wants or needs to create digital art but doesn't have an iPad or drawing tablet. As a result, they're forced to draw using a clunky mouse. TableTablet aims to solve this problem by providing an extremely affordable alternative to a drawing tablet, requiring only a webcam. In addition, we realized that we could extend this idea to allow people to turn any surface into a touchscreen for their computer, almost like magic!
What it does
The Python program first uses OpenCV to capture a webcam stream and prompts the user to select the four corners of a rectangular surface. Then, whenever the user's fingers pass over that surface, an AI model detects the key landmarks of the hand and translates the position of the index finger into actions on the user's computer (e.g. clicks or mouse movements). To translate accurately, the program smooths the inputs (reducing jitter and making it easier to draw curves and straight lines) and applies a perspective transform to ensure a uniform mapping from the surface to the screen.
How we built it
TableTablet was built in Python, using OpenCV to capture the camera feed and perform perspective correction. Hand detection was done with Google's MediaPipe library, and the results were translated into computer instructions using various libraries.
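MediaPipe Hands returns 21 landmarks per hand in normalized image coordinates (landmark 4 is the thumb tip and 8 is the index fingertip). The post doesn't specify how clicks are recognized, but one common approach is a thumb-index "pinch"; the sketch below assumes that gesture and a made-up distance threshold.

```python
import math

# MediaPipe Hands landmark indices: 4 = thumb tip, 8 = index fingertip
THUMB_TIP, INDEX_TIP = 4, 8

def is_pinch(landmarks, threshold=0.05):
    """Register a 'click' when the thumb and index fingertips come close.
    `landmarks` is a list of (x, y) tuples in MediaPipe's normalized
    [0, 1] image coordinates; `threshold` is an assumed tuning value."""
    tx, ty = landmarks[THUMB_TIP]
    ix, iy = landmarks[INDEX_TIP]
    return math.hypot(tx - ix, ty - iy) < threshold

# 21 dummy landmarks with thumb and index tips nearly touching
pts = [(0.5, 0.5)] * 21
pts[THUMB_TIP] = (0.40, 0.40)
pts[INDEX_TIP] = (0.41, 0.41)
print(is_pinch(pts))  # True
```

In a real pipeline this predicate would gate calls to a mouse-control library, issuing a click on the rising edge of the pinch rather than on every frame it is held.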
Challenges we ran into
One of our biggest challenges was that the detected hand positions "jitter" slightly from frame to frame. To solve this, we developed an algorithm that smooths the returned hand movements. Detecting clicks was also frustrating because of the angled perspective from which the webcam views the surface.
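One simple way to implement the kind of smoothing described above is an exponential moving average over the cursor position. This is only a sketch of the general idea, not the project's algorithm; `alpha` is an assumed tuning parameter (lower values are smoother but laggier).

```python
class Smoother:
    """Exponential moving average over 2D cursor positions."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha  # assumed tuning parameter, 0 < alpha <= 1
        self.x = None
        self.y = None

    def update(self, x, y):
        if self.x is None:
            # First sample: nothing to average with yet
            self.x, self.y = float(x), float(y)
        else:
            # Blend the new reading with the running average
            self.x = self.alpha * x + (1 - self.alpha) * self.x
            self.y = self.alpha * y + (1 - self.alpha) * self.y
        return self.x, self.y

s = Smoother(alpha=0.5)
s.update(0, 0)
print(s.update(10, 10))  # (5.0, 5.0) — a jump is halved, damping jitter
```

A single-frame detection spike is heavily damped, while a sustained movement still converges quickly to the true position.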
Accomplishments that we're proud of
We're proud that we were actually able to accurately detect the position of the user's fingers in three dimensions and translate that into clicks, drags, and movements on the computer. We're also proud of our perspective correction and smoothing algorithms that allow for a better user experience.
What we learned
As this was pretty much our entire team's first experience with computer vision, we learned a lot about the technologies used in that field, such as OpenCV, perspective correction, and working with AI. We also learned the importance of effective communication and collaboration.
What's next for TableTablet
In future versions of this project, we hope to keep improving hand and click detection so the program can be smooth enough to compete with a real tablet. We would also like to use edge detection to calibrate the perspective automatically rather than prompting the user for it, and to add support for using multiple fingers or hands at once.