Inspiration
Over 2.2 billion people worldwide live with some form of vision impairment, and roughly 43 million are blind (WHO, 2023). Yet most still rely on traditional aids such as white canes or guide dogs, which cannot detect overhead hazards or fast-moving dangers. Seeing these daily struggles inspired us to design VisionPath: a solution that merges everyday smartphones with AI and advanced sensors to give visually impaired individuals greater safety, confidence, and freedom.
What it does
VisionPath transforms a smartphone into a smart mobility assistant. Using the phone's camera and LiDAR, it detects and classifies obstacles, measures their distance, and tracks moving hazards in real time. It then provides concise audio and haptic feedback to help users navigate safely. Beyond avoidance, VisionPath builds a dynamic 3D map of the environment, predicts collision risks, and guides users with contextual cues, enabling safer, more confident mobility indoors and outdoors.
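As a rough illustration of that feedback logic, here is a minimal Swift sketch of how alert urgency could scale with distance and motion. The thresholds and tier names are illustrative placeholders, not VisionPath's actual tuning.

```swift
// Minimal sketch: mapping obstacle distance and motion to feedback urgency.
// All thresholds here are illustrative assumptions, not tuned values.
enum FeedbackTier {
    case info      // calm voice cue, e.g. "bench ahead, four meters"
    case warning   // shorter phrase plus a light haptic tap
    case urgent    // repeated haptic pulses and a terse phrase: "stop"
}

func tier(forDistance meters: Double, isMoving: Bool) -> FeedbackTier {
    // Moving hazards are promoted one tier because closing speed
    // shrinks the user's reaction window.
    switch meters {
    case ..<1.0: return .urgent
    case ..<3.0: return isMoving ? .urgent : .warning
    default:     return isMoving ? .warning : .info
    }
}
```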
How we built it
We combined computer vision, LiDAR depth sensing, and on-device AI into a pipeline that continuously scans and interprets the environment. We trained object detection models to recognize common obstacles, integrated LiDAR for distance mapping, and fused the two streams to track motion. We implemented a lightweight audio feedback system following accessibility guidelines, with optional support for haptic devices such as wristbands. The prototype runs entirely on smartphones, making it affordable and scalable.
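The sketch below shows the rough shape of such a pipeline on a LiDAR-equipped iPhone, pairing ARKit scene depth with a Vision object-detection request. `ObstacleDetector` is a placeholder name for a trained Core ML model, and the depth sampling is deliberately stubbed out; this is an outline under those assumptions, not our exact implementation.

```swift
import ARKit
import AVFoundation
import Vision

// Outline of a camera + LiDAR fusion loop on an iPhone with a LiDAR sensor.
// `ObstacleDetector` is a placeholder for a trained Core ML detection model.
final class ObstaclePipeline: NSObject, ARSessionDelegate {
    private let session = ARSession()
    private let speech = AVSpeechSynthesizer()
    private lazy var request = VNCoreMLRequest(
        model: try! VNCoreMLModel(for: ObstacleDetector().model))

    func start() {
        let config = ARWorldTrackingConfiguration()
        // Per-pixel depth is only available on LiDAR-equipped devices.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            config.frameSemantics.insert(.sceneDepth)
        }
        session.delegate = self
        session.run(config)
    }

    // Called for every camera frame; runs detection, then attaches a
    // LiDAR-derived distance to each detected obstacle.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage)
        try? handler.perform([request])

        for case let obs as VNRecognizedObjectObservation in request.results ?? [] {
            let meters = medianDepth(in: obs.boundingBox, of: depthMap)
            if meters < 3.0 {
                announce(obs.labels.first?.identifier ?? "obstacle", at: meters)
            }
        }
    }

    private func announce(_ label: String, at meters: Float) {
        guard !speech.isSpeaking else { return } // don't stack alerts
        speech.speak(AVSpeechUtterance(string: "\(label), \(Int(meters)) meters"))
    }

    // Elided: a real version reads float32 depth values from the
    // CVPixelBuffer inside the detection's bounding box and takes a median.
    private func medianDepth(in box: CGRect, of depth: CVPixelBuffer) -> Float { 2.5 }
}
```

In practice the Vision request would run on a background queue so frame delivery stays smooth.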
Challenges we ran into
One of the hardest challenges was getting reliable performance out of smartphone hardware. While LiDAR and cameras are powerful, they have limited range and can struggle in low light or bright sunlight. Making the system robust across different environments (rain, crowded streets, cluttered rooms) was far harder than expected.
Another challenge was balancing false positives against missed detections. If VisionPath over-alerts, users get annoyed and stop trusting it; if it misses a moving cyclist, safety is at risk. Striking that balance required tough trade-offs.
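A common way to attack that trade-off in code is hysteresis plus a cooldown: only alert once a detection persists across several consecutive frames, and rate-limit repeats for the same object. The sketch below uses guessed, untuned constants.

```swift
import Foundation

// Debounces alerts: an object must be seen for several consecutive frames
// before it triggers, and the same object stays quiet during a cooldown.
// The streak length and cooldown below are guesses, not tuned values.
final class AlertGate {
    private var streaks: [String: Int] = [:]     // consecutive frames detected
    private var lastAlert: [String: Date] = [:]  // last alert per object
    private let requiredStreak = 4               // ~130 ms at 30 fps
    private let cooldown: TimeInterval = 5       // seconds between repeats

    /// Returns true only when an alert should actually reach the user.
    func shouldAlert(objectID: String, detectedThisFrame: Bool) -> Bool {
        streaks[objectID] = detectedThisFrame ? (streaks[objectID] ?? 0) + 1 : 0
        guard streaks[objectID, default: 0] >= requiredStreak else { return false }
        if let last = lastAlert[objectID], Date().timeIntervalSince(last) < cooldown {
            return false // alerted on this object too recently
        }
        lastAlert[objectID] = Date()
        return true
    }
}
```

Raising the streak length suppresses flicker from noisy detections at the cost of a slightly later first alert.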
Accomplishments that we're proud of
We are proud to have created a working prototype that integrates LiDAR, computer vision, and predictive modeling into a single app. VisionPath successfully detects both static and moving obstacles, issues timely alerts, and demonstrates the potential of smartphones as advanced assistive tools. Beyond the technical achievements, we’re proud of designing an inclusive solution that could meaningfully improve independence and quality of life for millions of visually impaired individuals worldwide.
What we learned
We learned that building effective assistive technology requires more than just technical accuracy — it demands empathy, simplicity, and trustworthiness. Through research and testing, we discovered the importance of multimodal feedback, the challenges of minimizing false alarms, and the value of predictive alerts over reactive warnings. Most importantly, we learned that scalable innovation in accessibility is possible using existing consumer hardware like smartphones.
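To make the predictive-versus-reactive point concrete: instead of alerting purely on proximity, one can estimate time-to-collision from how quickly the measured distance shrinks between frames. A minimal sketch follows; the two-second warning horizon is an assumption.

```swift
import Foundation

// Estimates seconds until contact from two consecutive distance readings.
// Returns nil when the object is not approaching.
func timeToCollision(previousDistance: Double, currentDistance: Double,
                     frameInterval: Double) -> Double? {
    let closingSpeed = (previousDistance - currentDistance) / frameInterval
    guard closingSpeed > 0 else { return nil }
    return currentDistance / closingSpeed
}

// Example: closing from 4.0 m to 3.8 m in one 1/30 s frame means a 6 m/s
// approach, so contact in roughly 0.63 s: warn now, even though 3.8 m
// still sounds like a safe distance.
let ttc = timeToCollision(previousDistance: 4.0, currentDistance: 3.8,
                          frameInterval: 1.0 / 30.0)
let shouldWarn = (ttc ?? .infinity) < 2.0
```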
What's next for VisionPath
Next, we aim to extend VisionPath with indoor navigation using Bluetooth beacons and QR-code markers, integrate wearable biosensors to adapt alerts based on user stress, and develop more advanced haptic feedback options for silent guidance. We also plan to partner with NGOs and accessibility organizations to bring VisionPath to communities globally, especially in regions where access to guide dogs or expensive devices is limited. Our long-term vision is to make VisionPath a comprehensive mobility platform, bridging smartphones, smart cities, and wearable technology.