Inspiration
Long checkout lines, crowded stores, and the lack of tech integration in physical retail inspired us to create a smarter, smoother shopping experience—one where the cart does the work for you.
What it does
ShopShadow is a smart shopping cart that pairs with a tap of the user's phone and then follows them, scans items with an onboard smart camera, and bills them through a connected mobile app for a fully contactless experience.
How we built it
We mounted four DC motors and ultrasonic sensors on an Arduino Uno-driven chassis to simulate the cart. We originally planned to use GPS, but switched to an HM-10 Bluetooth module and triangulated RSSI signals to track the user's phone. For item detection, we used a Luxonis 12 MP camera with an onboard neural processor (1.2 TFLOPS) and trained a custom YOLOv8 nano PyTorch model on 17,000 grocery images spanning 200+ classes, reaching a 99% detection (return) rate at an 86% confidence threshold. The camera streams detections wirelessly to a Swift app with a Flask backend, which identifies the item and bills the user in real time.
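To make the RSSI tracking concrete, here is a minimal sketch of how readings from known anchor points can be turned into a position estimate. The path-loss exponent, reference RSSI, and anchor layout are illustrative calibration guesses, not the values we used on the cart.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_n=2.0):
    """Estimate distance (m) from an RSSI reading with the log-distance
    path-loss model. tx_power_dbm is the expected RSSI at 1 m and
    path_loss_n the environment exponent; both need calibration."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_n))

def trilaterate(anchors, distances):
    """Solve for (x, y) from three anchor positions and their ranged
    distances by subtracting circle equations to get a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    # Subtracting circle 1 from circles 2 and 3 removes the quadratic terms.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        raise ValueError("anchors are collinear")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

In practice raw RSSI is noisy, so a moving average or Kalman-style filter on the distance estimates helps before trilaterating.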
Challenges we ran into
As with any hackathon, we faced a multitude of challenges. The toughest was getting the ultrasonic object avoidance system to work in sync with the Bluetooth-based tracking. Just building a functional robot car—soldering, wiring, and writing stable C++ code for the Arduino—was a challenge on its own. We had to write gating logic for the ultrasonic sensors and develop a triangulation algorithm using Bluetooth RSSI values. Balancing navigation was tricky: we built a weighted bias system that prioritizes obstacle avoidance while still tracking the user's direction during detours.
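The weighted bias idea can be sketched as blending two steering inputs, with the avoidance term gaining weight as an obstacle gets closer. The cart's firmware is Arduino C++, but the logic is shown here in Python for clarity; the 60 cm threshold and the weighting curve are illustrative assumptions, not the tuned values from the cart.

```python
def blend_heading(user_heading, obstacle_cm, clear_cm=60.0):
    """Blend the 'follow the user' heading with an obstacle-avoidance turn.

    user_heading: desired turn toward the phone, in [-1, 1] (left..right).
    obstacle_cm:  (left, right) ultrasonic distances in cm.
    Returns a steering command in [-1, 1]. The closer the nearest obstacle,
    the more weight its avoidance turn gets.
    """
    left, right = obstacle_cm
    nearest = min(left, right)
    if nearest >= clear_cm:              # path clear: pure user tracking
        return max(-1.0, min(1.0, user_heading))
    # Turn away from the closer side; weight grows linearly toward contact.
    avoid_turn = 1.0 if left < right else -1.0
    w = 1.0 - nearest / clear_cm         # 0 at threshold, 1 at contact
    cmd = (1.0 - w) * user_heading + w * avoid_turn
    return max(-1.0, min(1.0, cmd))
```

Because the user-tracking term never fully drops out until contact, the cart keeps drifting back toward the phone's direction as it clears an obstacle.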
On the vision side, formatting models for the Luxonis camera proved difficult. Its VPU runs models compiled to Luxonis's BLOB format, so we had to convert our PyTorch model to ONNX, compile that to a BLOB, and wrap it in the YOLOv8 nano decoding structure to optimize layer handling. Integrating this with the backend app added another layer of complexity due to data formatting and real-time transfer issues.
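The conversion route looks roughly like the sketch below. `torch.onnx.export` and Luxonis's `blobconverter.from_onnx` are real tools, but the wrapper function, file name, input shape, and shave count here are illustrative assumptions rather than our actual build script.

```python
def convert_for_oak(model, onnx_path="yolov8n_grocery.onnx",
                    input_shape=(1, 3, 416, 416), shaves=6):
    """Sketch of the PyTorch -> ONNX -> MyriadX BLOB route for an OAK camera.

    `model` is a loaded torch.nn.Module; the file name, 416x416 input, and
    shave count are illustrative. Returns the path of the compiled .blob.
    """
    import torch          # imported here so the sketch stays self-contained
    import blobconverter  # Luxonis converter client (pip install blobconverter)

    model.eval()
    dummy = torch.zeros(*input_shape)
    # Freeze the graph into ONNX; opset 12+ covers YOLOv8's operators.
    torch.onnx.export(model, dummy, onnx_path, opset_version=12)
    # Compile the ONNX graph (via OpenVINO) into a .blob the VPU can run.
    return blobconverter.from_onnx(model=onnx_path, shaves=shaves,
                                   data_type="FP16")
```

The YOLO decoding wrapper then interprets the raw output tensors on-device so only class, box, and confidence data go over the wireless link.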
Accomplishments that we're proud of
We’re incredibly proud that we were able to build a fully functional smart shopping cart prototype within a limited hackathon timeframe. Despite multiple hardware and software roadblocks, we successfully integrated object detection, autonomous following, and mobile payment into one cohesive system. Getting the robot car to move reliably using 4 DC motors and ultrasonic sensors took hours of soldering, wiring, and testing. But seeing it finally respond to logic gates and real-world obstacles was a huge milestone.
We’re also proud of our custom YOLOv8 nano model, which we trained on 17,000 images of grocery items across 200+ classes. Achieving a 99% detection (return) rate at an 86% confidence threshold, on a model optimized to run efficiently on the Luxonis camera's 1.2 TFLOPS hardware, was no small feat. Converting the model from PyTorch to ONNX to BLOB while keeping its performance proved our attention to optimization really paid off.
Finally, we managed to tie everything together using a Swift + Flask backend app, with wireless data streaming from the Luxonis camera to the phone. It was a true full-stack integration of robotics, AI, and user experience—and seeing it all work in harmony was a proud moment for the entire team.
What we learned
We learned a lot about system integration, especially when juggling low-level robotics and high-level app development. On the hardware side, we deepened our understanding of how to control DC motors and ultrasonic sensors through Arduino C++, and how to build logic gates and bias functions to prioritize obstacle avoidance over target tracking without completely sacrificing directionality. The process of triangulating Bluetooth RSSI signals for user tracking—especially without access to a GPS module—taught us creative problem-solving under pressure.
On the AI front, we gained valuable experience in curating and training a large image dataset, optimizing a neural network model, and deploying it to a specialized edge device. Working with the Luxonis camera’s BLOB format and finding ways to convert and wrap our model so that it retained both accuracy and speed pushed our understanding of machine learning deployment in real-world systems.
Finally, we learned how to coordinate hardware, AI, and app communication over a wireless network in real time—a challenging but incredibly rewarding experience that made us better engineers and collaborators.
What's next for ShopShadow
Moving forward, we plan to add a weight sensor system to validate the detected grocery items, minimizing false positives and improving billing accuracy. We also want to improve the pathfinding logic by integrating smarter obstacle detour algorithms that still bias toward the direction of the user’s phone. Our current weighted bias system works well, but with more time, we’d like to make it adaptive based on environmental complexity.
For the camera, we’re exploring more efficient ways to reduce backend latency—possibly through local edge processing for basic classification, with final billing verification handled in the app. On the user experience side, we aim to expand the app’s capabilities to include store maps, item recommendations, and voice-based commands so the cart can respond more intuitively in-store.
Ultimately, we’d love to pilot ShopShadow in a controlled retail environment—whether that’s a small local store or a campus pop-up—to gather real-world feedback and test how our system scales with more users, more items, and real shopping chaos.