Inspiration
As Earth's limited mineral reserves continue to be depleted, researchers at institutions like MIT have found promise in asteroid mining to harness the resources contained in over 8% of the asteroid belt. Meanwhile, with the revolutionary discovery of ice deposits on Mars, the lava tube caves trapping that water will be a difficult obstacle to overcome. These are just two examples of a rising theme in the future of space exploration: cave navigation. Our robot, the Cosmic Collector, is inspired by the idea of resources trapped underground, and seeks to map and track cave data.
Asteroid Mining Article: link
Martian Ice Caves Article: link
What it does
Utilizing a mecanum drive chassis, an IMU, a camera, a LIDAR sensor, and a flagging system for retrieval, the Cosmic Collector is designed to be deployed in space caves and autonomously navigate around their walls. At the base of the bot is a mecanum drivetrain powered by L298N dual H-bridge motor drivers, delivering holonomic movement for nonlinear cave environments. Atop the chassis lies the electronic heart of the Collector: the breadboard, Raspberry Pi, and camera sit at the front. The camera scans the color of the rocks ahead, and any discrepancies are marked with the physical "cones" stored behind the Raspberry Pi. This unique method uses a servo to incrementally drop cones, which act as breadcrumbs for future robots or astronauts tracking ores. On top of the robot is the LIDAR sensor, the main guiding sensor of the robot, steering it away from cave walls, as shown in our videos.
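The holonomic movement described above comes from standard mecanum kinematics: forward, strafe, and rotation commands mix into four wheel speeds. The sketch below is an illustrative version of that textbook mixing, not our actual motor firmware; the function name and normalization choice are our own.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Standard mecanum mixing: body-frame velocities (vx forward, vy strafe,
// omega rotation) map to four wheel speeds (FL, FR, RL, RR).
// Illustrative sketch only; the real ESP32 firmware differs.
std::array<double, 4> mecanumMix(double vx, double vy, double omega) {
    std::array<double, 4> w = {
        vx - vy - omega,  // front-left
        vx + vy + omega,  // front-right
        vx + vy - omega,  // rear-left
        vx - vy + omega   // rear-right
    };
    // Scale down uniformly so no wheel command leaves [-1, 1].
    double peak = 0.0;
    for (double s : w) peak = std::max(peak, std::fabs(s));
    if (peak > 1.0)
        for (double& s : w) s /= peak;
    return w;
}
```

For example, a pure strafe (`vx = 0, vy = 1, omega = 0`) spins the two wheel diagonals in opposite directions, which is what lets the chassis slide sideways in tight cave passages.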
How we built it
Our team is uniquely composed of two CS majors and two ME majors. To divvy up tasks for maximum efficiency, we adopted a "waterfall" working method (the MEs would prioritize parts at the software team's request). We went from a simple chassis, to installing the breadboard, to finalizing external mechanisms like the LIDAR and marker dropper. Realizing that this step-by-step process could lead us to regret early decisions, we ensured modularity through parametric CAD design and by splitting large prints into decomposable pieces.
Challenges we ran into
At the end of our first workday (Friday), the main body 3D print failed. This put a knife in our timeline, as about 10 hours of printing time were lost. However, we chose to adjust rather than panic, optimizing parts through clever pocketing and cutting print time by about 2 hours. Roughly 8 hours before the hackathon deadline, our ESP32 overheated, and with no replacements on hand, our robot was out of commission for the remainder of the hackathon. Undeterred, we decided to persevere without a second thought, filming the video and replicating the robot's functions as much as possible.
Accomplishments that we're proud of
We wrote custom firmware for our ESP32, offloading hardware control from our Raspberry Pi. The real-time operating system on the ESP32 ensured deterministic execution of time-critical operations. On the Raspberry Pi, we ran a custom pathing algorithm that finds the local maxima of the LIDAR data (corresponding to a lack of obstacles), adjusts our heading to face a selected path, and updates dynamically in real time.
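The core of that pathing idea can be sketched as follows: scan the range array for local maxima that are far enough away to count as open paths, then steer toward the best one. This is a minimal sketch of the concept under our own assumptions (function name, clearance threshold, and beam-to-bearing mapping are illustrative), not the actual node we ran.

```cpp
#include <cmath>
#include <vector>

// Pick a heading (degrees) from a 360-degree LIDAR scan by finding the
// best local maximum in the range data. A local maximum whose distance
// exceeds `clearance` is treated as an open path. Illustrative sketch.
double pickHeadingDeg(const std::vector<double>& ranges, double clearance) {
    int n = static_cast<int>(ranges.size());
    int best = -1;
    for (int i = 0; i < n; ++i) {
        double prev = ranges[(i + n - 1) % n];  // scan wraps around 360 degrees
        double next = ranges[(i + 1) % n];
        if (ranges[i] >= prev && ranges[i] >= next && ranges[i] >= clearance) {
            if (best < 0 || ranges[i] > ranges[best]) best = i;
        }
    }
    if (best < 0) return NAN;         // no open path found: caller should stop
    return best * 360.0 / n;          // beam index -> bearing in degrees
}
```

Re-running this on every incoming scan is what makes the behavior reactive: the selected heading shifts as soon as a wall closes off the previously chosen gap.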
Additionally, we created an adapter between the chassis kit and our electronics board, making our design easy to reproduce. By modifying just a few values in our CAD document, users can create a custom adapter for their own chassis kit.
What we learned
On the software side, we gained deep insights into reactive navigation architectures within ROS 2. While our chassis is holonomic, we learned that a purely reactive decision tree—prioritizing immediate obstacle avoidance over complex global path planning—was significantly more robust for tight cave environments. By implementing our custom scan_to_nav node in C++, we learned how to mathematically partition 360-degree LIDAR data into relative angular sectors to make real-time steering decisions.
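One common way to do the sector partitioning described above is to bucket the raw beams into a small number of angular sectors and keep only the closest return per sector, so steering logic works on a handful of numbers instead of hundreds. This is a hedged sketch of that reduction, not the code of our actual scan_to_nav node.

```cpp
#include <limits>
#include <vector>

// Reduce a full-circle range scan to `sectors` angular buckets, keeping the
// minimum (closest obstacle) per bucket. Illustrative sketch; sector count
// and aggregation choice are assumptions, not the real node's parameters.
std::vector<double> toSectors(const std::vector<double>& ranges, int sectors) {
    std::vector<double> out(sectors, std::numeric_limits<double>::infinity());
    int n = static_cast<int>(ranges.size());
    for (int i = 0; i < n; ++i) {
        int s = i * sectors / n;               // beam index -> sector index
        out[s] = std::min(out[s], ranges[i]);  // closest return wins
    }
    return out;
}
```

Taking the minimum per sector is the conservative choice for obstacle avoidance: a sector is only as open as its nearest return.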
We also learned the importance of safety-critical programming and input sanitization. We had to implement a software watchdog to automatically halt motors if sensor data proved stale, and we developed robust filtering functions to handle NaN and infinite values from the LIDAR, ensuring the robot didn't behave erratically when facing open spaces.
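The NaN/infinity filtering mentioned above can be sketched like this: LIDAR drivers typically report NaN for dropped returns and +inf when nothing reflects within range, and both must be clamped before any distance math. The function name, the max-range value, and the choice to treat NaN as "blocked" are our illustrative assumptions.

```cpp
#include <cmath>
#include <vector>

// Sanitize a raw range scan before navigation math. NaN (dropped return)
// is treated as zero distance -- the conservative "assume blocked" choice --
// while +inf or out-of-range values clamp to the sensor's max usable range.
std::vector<double> sanitize(std::vector<double> ranges, double maxRange) {
    for (double& r : ranges) {
        if (std::isnan(r)) r = 0.0;                             // blocked
        else if (std::isinf(r) || r > maxRange) r = maxRange;   // open space
    }
    return ranges;
}
```

Clamping infinities is exactly what keeps the robot from behaving erratically in open spaces: an unbounded "distance" would otherwise dominate any path-selection score.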
Losing our main chassis print with only one day left taught us the critical value of modular design principles. We pivoted from a single-body design to a multi-part assembly, which allowed us to isolate failures. Instead of risking another 10-hour print, we could rapidly test and reprint individual sections in parallel. This modular approach not only saved our timeline but made the robot easier to repair and upgrade on the fly.
What's next for Cosmic Collector
If more time were allotted, the 845 Cosmic Collective Upstate team would further develop the cone-dropping mechanism. Rather than dropping cones at a set interval, we originally sought to drop cones whenever the Collector hit a corner, minimizing the number of cones dropped. As a cherry on top, we would add AprilTags to the cones, giving each one a compass pointing to the next marker for easy navigation.