Inspiration

Sometimes in a lecture you need to point at something tiny on the presentation, but no one really knows what you're pointing to. So we decided to build something that reads where you are pointing using a camera and aims a laser in that direction, which makes engagement in lectures and presentations much more accessible. We also realized that this idea branches off into a lot of other potential accessibility applications; it enables robotic control with pure human motion as input. For example, it could help artists paint large canvases by painting where they point, or even serve as a new type of remote control if we replaced the laser with an RF signal.

What it does

It tracks the user's forearm using a fully custom-built computer vision object detection program. All dimensions of the forearm are approximated from a single camera, and the system generates a pair of projected XY coordinates on the presentation surface for the laser to point to, corresponding to where the user is pointing.
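One way the projection step could be sketched: treat the forearm as a ray and intersect it with the presentation plane. This is a minimal illustration, not our exact math; the elbow/wrist 3D estimates, the metre units, and the plane distance `screen_z` are all assumptions for the example.

```python
import numpy as np

def project_point(elbow, wrist, screen_z=2.0):
    """Extend the elbow->wrist ray until it hits the plane z = screen_z.

    `elbow` and `wrist` are hypothetical (x, y, z) estimates in metres;
    returns the (x, y) hit point on the presentation plane.
    """
    elbow, wrist = np.asarray(elbow, float), np.asarray(wrist, float)
    direction = wrist - elbow
    if direction[2] <= 0:  # forearm not pointing toward the screen
        raise ValueError("forearm does not point at the plane")
    t = (screen_z - wrist[2]) / direction[2]  # ray parameter to reach the plane
    hit = wrist + t * direction
    return hit[0], hit[1]
```

For example, an arm angled 45 degrees sideways lands twice as far off-centre on a plane twice as far away, which is why small pointing errors grow with distance.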

How we built it

We used OpenCV in Python to build the entire computer vision framework, which was tied to a USB webcam. Generated projection points were sent over Wi-Fi to an ESP32, which fed separate coordinates to a dual-servo motor system that moves the laser pointer to the correct spot. The ESP32 side was programmed with the Arduino framework.
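The Python-to-ESP32 hop could look something like the sketch below: pack the projected (x, y) target into a small binary payload and fire it off as a UDP datagram. The address, port, and payload format here are assumptions for illustration, not our actual protocol.

```python
import socket
import struct

ESP32_ADDR = ("192.168.4.1", 4210)  # hypothetical ESP32 AP address and port

def send_target(x: float, y: float, sock=None) -> bytes:
    """Pack a projected (x, y) target as two little-endian float32s and,
    if a socket is supplied, send it as one UDP datagram to the ESP32."""
    payload = struct.pack("<ff", x, y)
    if sock is not None:
        sock.sendto(payload, ESP32_ADDR)
    return payload

# Typical use: one datagram per processed video frame.
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_target(0.5, 0.25, sock)
```

UDP suits this kind of loop: a dropped frame just gets replaced by the next one a few milliseconds later, so there's no point paying for retransmission.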

Challenges we ran into

First of all, none of us had actually used OpenCV on a project of this size, especially not for object tracking, so there was a lot of learning on the spot through online tutorials and experimenting. There were also plenty of challenges revolving around the robustness of the system. Sometimes the contour detection would find multiple contours for a single arm, so we had to find a way to join them so the tracking wouldn't break. The projection system was quite far off at the start, and a lot of manual tuning was needed to fix it. The Wi-Fi data transmission also took a long time to figure out, since none of us had ever worked with it before.

Accomplishments that we're proud of

We're quite proud of the fact that we were able to build a fully functional object tracking system without any premade online code in such a short amount of time, and how robust it was in action. It was also quite cool to see the motors react in real time to user input.

What we learned

We learned some pretty advanced image processing and video capture techniques in OpenCV, and how to use the ESP32 to receive data over Wi-Fi and drive hardware.

What's next for Laser Larry

The biggest step is making the projection system more accurate, which will take a lot more tuning. A second camera also wouldn't hurt for more accurate depth readings, and it would be cool to expand the idea into the other accessibility applications discussed above.
