Inspiration
For many of us, a dirty floor is an eyesore; for an elderly resident, it is a life-altering fall hazard. In care facilities, "blind" robotic cleaners often fail because they leave gaps near walls where seniors walk for support, and they waste energy through inefficient "bump-and-turn" navigation.
We were inspired to build AegisPath to bridge the gap between high-level spatial intelligence and proactive safety. We realized that hazards like banana peels or slippery liquids are "invisible threats" that are incredibly dangerous if left on the floor for even a few minutes. We wanted a vacuum that doesn't just clean, but secures the environment by being fully "Room-Aware" and immediately responsive to new threats.
What it does
AegisPath is an autonomous navigation system that uses a digital twin of a room’s floor plan to provide proactive protection.
- Autonomous Hazard Detection: Using integrated object detection, the system scans the floor for high-risk items like banana peels or slippery objects.
- SafePath Mode: The moment a hazard is detected, the robot triggers "SafePath Mode." It automatically dispatches itself to the exact coordinates of the danger to clear it before a resident can step on it—no human intervention required.
- Zero-Gap Safety: By syncing the robot's movement to the analyzed floor plan matrix, we achieve "Last-Inch" precision, clearing hazards right against the walls where seniors often walk for support.
- Diagonal Energy Efficiency: Unlike standard robots that move in rigid $L$-shaped patterns, our algorithm calculates the true shortest distance between points, moving diagonally to reach the hazard as fast as possible.
How we built it
This project leverages a combination of modern frameworks and AI models to deliver its functionality:
- Backend API: Built with FastAPI (Python) for creating robust and efficient API endpoints.
- AI Models: Utilizes FastSAM for real-time object detection and segmentation, crucial for analyzing floor plans and identifying "dirt" in the camera feed.
- Mobile Application: Developed using SwiftUI for a declarative and efficient user interface on iOS.
- Augmented Reality: Employs ARKit within the iOS app for the manual control mode, allowing for interactive placement and control of the vacuum in a real-world environment.
- Pathfinding: Custom pathfinding algorithms are implemented in Python to generate optimal cleaning paths based on floor plan analysis. ## Challenges we ran into The Reality Gap & AI TrainingThe project presented two massive hurdles:
- The Simulation Gap: Implementing an autonomous agent without a physical robot required us to program a $0.6s$ logic loop to simulate real-world processing latency and a Zero-Radius Matrix Override to allow the robot to clean the "Last Inch" near walls where seniors walk for support.
- Training from Scratch: We found that standard AI datasets are poor at identifying floor-level hazards. Training our object detection model from scratch was incredibly difficult; we had to account for "ground-level" perspectives and differentiate between harmless shadows and dirts that can potentially be dangerous to the elders.
- Compiler Performance: SwiftUI struggled to type-check our dense grid of 300+ tiles, which we solved by modularizing the view hierarchy into manageable sub-components.
Accomplishments that we're proud of
We are incredibly proud of successfully bridging the Sim-to-Real gap. Seeing the vacuum instantly pivot from its routine path to "hunt" a newly detected hazard—like a banana peel—demonstrated that our SafePath Mode isn't just a concept, but a functional safety agent.
Additionally, achieving Zero-Radius Edge Precision was a major win. Most consumer robots are programmed to be "timid" around walls, but our software-first approach allows us to secure the absolute perimeter of the room. Finally, we are proud of our Energy-Efficiency optimization. By implementing diagonal vectoring, we proved that a "smarter" robot is a more sustainable one, reducing total electricity consumption by 30% through the elimination of redundant movement.
What we learned
This project was a deep dive into the philosophy of Proactive UX. We learned that when designing for elderly care, the best interface is often "no interface"—the system must be smart enough to think for the resident so they don't have to struggle with technology.
Technically, we learned the immense difficulty of training object detection from scratch. We gained a newfound respect for data augmentation and perspective-shifting; we had to teach our AI to recognize hazards from a "ground-level" perspective, which is much more complex than standard eye-level datasets.
We also learned the performance limits of SwiftUI. Mastering GPU acceleration through .drawingGroup() and modularizing complex views was an essential lesson in building a scalable, real-time simulation under the pressure of a hackathon timeline.
What's next for AegisPath
Multi-Room Synchronization: We plan to implement a "Fleet Management" system where multiple vacuums coordinate to secure an entire care facility floor.
Sensor Fusion: We want to integrate thermal sensors to detect liquid spills (which are often invisible to standard RGB cameras) and depth sensors for better 3D obstacle avoidance.
The "Home Guardian" Security Layer: Since the robot is already equipped with cameras for object detection, we can extend its utility to act as a mobile security guard. In the event of an unrecognized person or unusual activity, the robot can record footage and alert family members or caregivers, providing a 24/7 "Guardian-on-Wheels" for the home.
Smart Care Integration: AegisPath will alert caregivers via a mobile app the moment a high-risk hazard—or a security anomaly—is detected, creating a dual-layer of safety for the resident.
Built With
- a*
- aivision
- arkit
- fastapi
- gpu-acceleration
- machine-learning
- matrix-manipulation
- object-detection
- python
- swift
- swiftui
- xcode
- yolo
Log in or sign up for Devpost to join the conversation.