Inspiration

Our inspiration came from observing how many gooners attended this hackathon. As responsible humans, we realized that one day these gooners would eventually goon into the wrong hole. With that in mind, we hope to prepare gooners for parenthood. This inspired us to design Goon Baby, an AI-powered robotic “child” that mimics unpredictable, curiosity-driven behavior. The idea is to create a platform that helps future parents or caregivers learn to manage spontaneous and sometimes chaotic situations through hands-on interaction.

What it does

Goon Baby is an AI-controlled robot powered by a locally hosted large language model. Using a Raspberry Pi camera, OpenCV, and YOLO, the robot detects and interprets objects in its environment. The system converts visual input into text-based descriptions and sends them to the local LLM, which decides how the robot should respond — for example, moving toward objects of interest. This process allows the robot to act autonomously, simulating a curious child exploring its surroundings.
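One tick of that loop can be sketched roughly as follows. This is an illustrative mock-up, not the actual project code: `describe_scene` and `parse_command` are hypothetical helpers standing in for the real YOLO output handling and LLM reply parsing, and the command vocabulary is assumed.

```python
# Hypothetical sketch of Goon Baby's perceive-describe-act loop.
# Detection and LLM calls are mocked; names and formats are illustrative.

VALID_COMMANDS = {"forward", "backward", "left", "right", "stop"}

def describe_scene(detections):
    """Turn YOLO-style (label, side) detections into a text description."""
    if not detections:
        return "The robot sees nothing of interest."
    parts = [f"a {label} on the {side}" for label, side in detections]
    return "The robot sees " + ", ".join(parts) + "."

def parse_command(llm_reply):
    """Extract the first recognized movement command from the LLM's reply."""
    for word in llm_reply.lower().split():
        word = word.strip(".,!?")
        if word in VALID_COMMANDS:
            return word
    return "stop"  # fail-safe: don't move unless the model clearly asked to

# One mocked tick of the loop:
detections = [("ball", "left"), ("person", "right")]
prompt = describe_scene(detections)
mock_reply = "I am curious about the ball. Turn left toward it."
command = parse_command(mock_reply)
print(prompt)   # The robot sees a ball on the left, a person on the right.
print(command)  # left
```

The fail-safe default matters in practice: an LLM occasionally rambles, and defaulting to `stop` keeps an unparseable reply from moving the robot.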

How we built it

We used Ollama to host Gemma 3 (12B parameters) locally, allowing for on-device inference without relying on cloud services. The Raspberry Pi handles live video streaming from the Pi Cam, which is processed with OpenCV and YOLO for object detection. The interpreted frames are described in natural language and passed to the LLM, which then sends movement commands back to the robot. This approach is unique because the robot’s decision-making is driven entirely by AI reasoning rather than hardcoded instructions.
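The LLM leg of that pipeline can be sketched against Ollama's standard local REST endpoint. The endpoint and the `gemma3:12b` model tag follow Ollama's conventions, but the prompt wording and command vocabulary here are assumptions, not the project's actual prompts:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(scene_description):
    """Assemble a generation request for the locally hosted Gemma 3 model."""
    instructions = (
        "You are a curious robot child. Based on the scene, reply with "
        "exactly one of: forward, backward, left, right, stop."
    )
    return {
        "model": "gemma3:12b",  # tag as pulled with `ollama pull gemma3:12b`
        "prompt": f"{instructions}\n\nScene: {scene_description}\nAction:",
        "stream": False,        # ask for one complete JSON response
    }

def ask_gemma(scene_description):
    """POST the prompt to Ollama and return the model's text reply."""
    data = json.dumps(build_payload(scene_description)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = build_payload("A ball is in the left zone.")
print(payload["model"])  # gemma3:12b
```

With `stream: False`, Ollama returns a single JSON object whose `response` field holds the full completion, which is the simplest shape for a decide-then-move loop.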

Challenges we ran into

One of the biggest challenges was enabling the vision system to translate object positions into meaningful descriptions that the LLM could understand. To solve this, we divided the frame into coordinate zones corresponding to spatial regions and generated structured textual prompts for Gemma. Achieving reliable communication between the AI model, the Pi, and the motors also required careful synchronization and prompt tuning.
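The zone idea can be illustrated with a small sketch. The 3×3 grid, the zone names, and the default 640×480 frame size here are illustrative assumptions; the project's actual zone layout may differ:

```python
def zone_for(cx, cy, frame_w=640, frame_h=480):
    """Map a detection's center point to one of nine named zones."""
    cols = ["left", "center", "right"]
    rows = ["top", "middle", "bottom"]
    col = cols[min(int(cx / frame_w * 3), 2)]  # clamp edge pixels into the grid
    row = rows[min(int(cy / frame_h * 3), 2)]
    return f"{row}-{col}"

def detections_to_prompt(detections):
    """Detections are (label, cx, cy) tuples from the object detector."""
    lines = [f"- {label} in the {zone_for(cx, cy)} of the frame"
             for label, cx, cy in detections]
    return "Objects visible:\n" + "\n".join(lines)

print(zone_for(50, 240))                          # middle-left
print(detections_to_prompt([("ball", 600, 60)]))  # ball in the top-right
```

Discretizing pixel coordinates into a handful of named zones keeps the prompt short and gives the model spatial vocabulary it can reason about reliably, which raw pixel numbers do not.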

Accomplishments that we're proud of

We’re incredibly proud of achieving full AI-driven control of our robot. Every movement and decision originates from the LLM, which interprets visual data and determines appropriate actions in real time. This demonstrates an exciting new approach to robotics where models like Gemma can directly guide physical devices without intermediary logic layers.

What’s next for Goon Baby

Our next goal is to make Goon Baby more interactive and adaptive. We plan to integrate voice communication so users can verbally guide or correct the robot’s behavior. Over time, the system would learn from reinforcement — for example, remembering to avoid certain objects after repeated corrections. This would make Goon Baby a continuously learning, personality-driven AI companion that evolves through human interaction.
