Magic Yahoos

Jin Lin, Emmanuel Obe, Joseph Rodelas, Anton Vladimirov

Inspiration

Since the theme involved magic, our team was inspired by the Interactive Wands at Universal's Wizarding World of Harry Potter. These wands have highly reflective tips that bounce infrared light back to a sensor module, which actuates the "magical event".

We wanted to make this concept more interactive, directly translating the user's wand movements into a drawing magically recreated in front of their eyes with the help of a 3D printer.

What it Does

The Magic Yahoo incorporates an RGB LED magic wand, a computer with a camera module, and a 3D printer. The magician aims the magic wand at the camera and holds down the wand's button to draw. After the drawing is finished, double-pressing the button changes the signal so that the drawn image is sent to the 3D printer.

The program overlays the captured drawing on a grid, converting each movement into a vector. These vectors are then converted into 3D printer instructions known as "G-code". Using the 3D printer, we magically recreate your drawing in real time, the same way you drew it.
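The vector-to-G-code step can be sketched as below. The function name, feed rate, and Z heights here are illustrative assumptions, not the exact values from our project:

```python
# Sketch of the vector-to-G-code conversion. Feed rate and Z heights
# are illustrative assumptions, not our calibrated settings.

Z_UP = 5.0      # pen lifted off the plate (mm)
Z_DOWN = 0.0    # pen touching the plate (mm)
FEED = 1500     # drawing feed rate (mm/min)

def segments_to_gcode(segments):
    """Convert a list of ((x1, y1), (x2, y2)) segments into G-code lines."""
    lines = ["G21 ; millimeters", "G90 ; absolute positioning",
             f"G0 Z{Z_UP}"]
    pos = None
    for (x1, y1), (x2, y2) in segments:
        if pos != (x1, y1):                      # gap: lift, travel, lower
            lines.append(f"G0 Z{Z_UP}")
            lines.append(f"G0 X{x1} Y{y1}")
            lines.append(f"G1 Z{Z_DOWN} F{FEED}")
        lines.append(f"G1 X{x2} Y{y2} F{FEED}")  # pen-down draw move
        pos = (x2, y2)
    lines.append(f"G0 Z{Z_UP}")
    return lines
```

Contiguous segments share one pen-down stroke; only a gap between segments triggers the lift-travel-lower sequence.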

Design Methodology & Process

Our design revolved around two main deliverables:

1. Capture the wand motions into a time-stamped animation.
2. Send custom G-code to the 3D printer to draw without printing.

Choosing the Wand

We aimed to have our wand operate in three states:

State 1: Default. In this state, the computer vision pipeline should not do anything.
State 2: Ending. In this state, stop drawing and send the G-code script.
State 3: Drawing. In this state, actively parse the LED position and store it.

To make each state distinct enough to be easily parsed by the computer vision software, we decided to represent each state with a color. The simplest single-component way to represent multiple colors was an RGB LED, which can switch between all three colors in one package. Green was assigned to the default state, red to the ending state, and blue to the drawing state.
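The color-to-state mapping can be sketched as a simple hue classifier over OpenCV's HSV space. The hue thresholds here are illustrative assumptions, not our tuned values:

```python
# Minimal sketch of mapping an HSV sample to a wand state. The hue
# ranges are illustrative assumptions, not our calibrated thresholds.

def classify_state(h, s, v):
    """Classify an OpenCV HSV pixel (H in 0-179) into a wand state."""
    if s < 80 or v < 80:
        return "none"            # too dull/dark to be the LED
    if 35 <= h <= 85:
        return "default"         # green LED
    if 100 <= h <= 130:
        return "drawing"         # blue LED
    if h <= 10 or h >= 170:
        return "ending"          # red LED (hue wraps around 0)
    return "none"
```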

Capturing the Magic

To keep the scope manageable, we wrote several demo scripts to test the sub-functions that went into capturing the wand movements.

Pre-Wand

While Emmanuel handled building the LED magic wand hardware, Jin focused on translating finger/LED movement into trackable data in software. Using the Python library OpenCV (cv2), we capture webcam frames with cv2.VideoCapture, convert them to HSV with cv2.cvtColor, and isolate the LED color using cv2.inRange. The mask is cleaned with cv2.erode and cv2.dilate, and the LED position is detected by finding contours (cv2.findContours) and computing the blob center with cv2.moments.

Next, we turned live tracking into a repeatable animation by storing each detected center point in a list over time. The path is re‑played on a blank whiteboard window using cv2.line and cv2.circle, which makes it easy to confirm the drawing before exporting. Our final pre‑wand step was overlaying a grid on the mask to convert the wand movement into quantifiable steps. We draw the grid and labels with cv2.line and cv2.putText, then snap each detected LED point to the nearest grid cell. Those snapped points become movement vectors (line segments), which are saved as CSV and later converted into pen‑safe G‑code.
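The grid-snapping step above can be sketched as below; the cell size is an illustrative assumption:

```python
# Sketch of snapping tracked points to grid cells and emitting movement
# vectors (line segments). The cell size is an illustrative assumption.

CELL = 20  # grid cell size in pixels

def snap(point, cell=CELL):
    """Snap an (x, y) pixel to the center of its grid cell."""
    x, y = point
    return (x // cell) * cell + cell // 2, (y // cell) * cell + cell // 2

def points_to_segments(points, cell=CELL):
    """Turn a tracked point path into de-duplicated grid line segments."""
    snapped = [snap(p, cell) for p in points]
    segments = []
    for a, b in zip(snapped, snapped[1:]):
        if a != b:               # skip points that stayed in one cell
            segments.append((a, b))
    return segments
```

Segments like these are what get saved to CSV and later translated into pen-safe G-code.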

With Wand

With the RGB wand built, we ran the camera with the existing mask to verify the LED's visibility. We made an unexpected discovery: the LED was too highly saturated to be captured properly on camera.

This insight led us to account for high saturation when tracking the drawing light, accommodating both the nature of the mask and the predetermined logic for the wand-PC-printer system.

Sending the Spell

We initially decided to use a Bambu Lab A1 to draw out the magician's spell. With the enumerated and vectorized drawing data converted to G-code over the grid of the build plate, we needed to verify that the G-code was functional. Using an ".nc" file viewer, we verified that the drawing traces of a circle worked within the same space as a verified 3D print trace.

Challenges

The largest challenge proved to be turning the vectorized drawing into motor motions without manually shepherding the data through. The source of this issue was Bambu Studio. While this software streamlines the 3D printing process from a STEP file straight to the machine, Bambu Lab machines cannot take instructions from anywhere outside this slicer, so our program had to work through it. After scanning the settings, we found several options that allowed the printer not only to receive instructions over Wi-Fi but also to automatically run the G-code sent to it. However, upon sending the instructions, several errors occurred while communicating the files to the printer.

Since the A1 has its own on-board computer, the machine has several connection settings and protocols in place to ensure safe prints. Because we were sending a simple set of instructions without any model file to derive them from, the A1 had trouble automatically running our directions due to these safeguards. We addressed this by sending the files over rather than telling the printer to print them. While this added a manual step to the process, we decided to focus on function first and calibrate this system later.

This process worked successfully over a home network: the printer accepted the received instructions and carried them out exactly, without the need for a model file. With the press of a button, the exact motions we captured could be sent and executed. We planned to test the next day whether the same process would work over the UGA network.

On campus, the secure PAWS Wi-Fi is heavily restricted at login, leaving us few options for communicating with the printer. We attempted to use a personal hotspot to connect the devices, but the data's path from the Mac to the hotspot to the Bambu cloud and back to the printer proved too long to reliably reach the printer. Next, we attempted a "LAN only" mode, using a separate laptop as a dedicated Wi-Fi access point, but this method had several complications, including failing to properly discover the printer. After dedicating several hours to learning how to communicate over the UGA network, we decided it was time to retire the Bambu Lab A1 for this purpose.

For the new printer, we went with the Ender 3 Pro, because it accepts lower-level print instructions and offers a direct micro-USB serial connection for running our code.
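Streaming G-code over that serial link can be sketched as below. The command framing and "ok" handshake follow Marlin-style firmware conventions; with pyserial, the port object would be something like serial.Serial("/dev/ttyUSB0", 115200), where the port name and baud rate are typical assumptions, not verified settings:

```python
# Sketch of streaming G-code to the Ender 3 Pro over USB serial.
# Framing follows Marlin-style conventions: one command per line,
# wait for the firmware's "ok" before sending the next.

def frame_gcode(lines):
    """Strip comments/blank lines and terminate each command with newline."""
    out = []
    for line in lines:
        cmd = line.split(";")[0].strip()   # drop inline comments
        if cmd:
            out.append((cmd + "\n").encode("ascii"))
    return out

def stream(port_obj, lines):
    """Send commands one at a time, waiting for the firmware to acknowledge."""
    for cmd in frame_gcode(lines):
        port_obj.write(cmd)
        while b"ok" not in port_obj.readline():
            pass                           # block until "ok" arrives
```

Here `port_obj` is anything with write/readline, which also makes the logic testable without hardware.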

Final Build

In the final iteration of our project, we incorporated a star-shaped tip onto the wand to enhance the "magical" aesthetic for the user. A precision-drilled aperture at the star’s apex houses the LEDs, ensuring the camera can clearly track the various RGB colors. Regarding the hardware modifications for the Ender 3 Pro, we mounted a custom pen holder to the print head and implemented a sliding mechanism to bypass the bed height sensor. This modification allows the pen to sit closer to the nozzle’s original position, resulting in significantly more accurate pen placement on the build plate.

On the software side, we deployed an OctoPrint server accessible via a web browser. This server receives the raw G-code generated from the user's drawings and automatically uploads it to start the printing process. Notably, we moved away from the Bambu Lab A1 printer in the final build; university Wi-Fi restrictions prevented a stable connection between our laptops and the printer, necessitating this shift in hardware.
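The upload step can be sketched against OctoPrint's documented REST API (POST to /api/files/local with an X-Api-Key header, passing print=true to start immediately). The host, API key, and filename below are placeholders, not our actual values:

```python
# Sketch of an OctoPrint G-code upload using its documented REST API:
# POST /api/files/local with an X-Api-Key header; print=true starts the
# job right after upload. Host, key, and filename are placeholders.

def build_octoprint_upload(host, api_key, gcode_text, filename="spell.gcode"):
    """Return the pieces of a requests.post call for an OctoPrint upload."""
    return {
        "url": f"http://{host}/api/files/local",
        "headers": {"X-Api-Key": api_key},
        "files": {"file": (filename, gcode_text)},
        "data": {"print": "true"},   # start printing immediately
    }
```

Usage would look like `requests.post(**build_octoprint_upload("octopi.local", "KEY", gcode))`, where "octopi.local" is the common default hostname.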

On the electrical side, the breadboard was removed in favor of soldering the circuits directly together. The RGB LED was positioned at the top of the wand for ease of visibility and design preference. The embedded software remained the same, although we ended up using an Adafruit QT Py instead of an ESP32 for a smaller mechanical profile. For power, we used a regular phone with a USB-C port to supply the necessary current.

Framework / API

We built the project using public, open‑source tooling: OpenCV for real‑time computer vision and LED tracking, NumPy for image/array processing, and OctoPrint for printer control and G‑code upload/streaming, along with Python’s standard library for file I/O and networking. No proprietary frameworks were required; the stack is fully based on publicly available software.
