Blindness affects over 43 million people globally. Safe navigation is a constant challenge for the visually impaired, yet most solutions today are bulky, expensive, or inaccessible to those who need them most. We were inspired by a simple question: how can we create an affordable, intelligent, and lightweight navigation aid that fits into a visually impaired person's daily life? That question led us to build Smart Cane. The goal behind Smart Cane is to provide real-time obstacle detection, safe route guidance, and emergency support, all for under $40, making advanced mobility assistance truly accessible.
Smart Cane transforms a traditional cane into an intelligent navigation system powered by AI and Bluetooth. It detects obstacles within a five-foot radius using a lightweight model and determines whether the user should move left, right, or forward based on the clearest available path. This information is sent via Bluetooth directly to the user's smartphone, where it is spoken aloud through voice guidance. In case of emergencies like falls, the companion app automatically alerts emergency contacts and sends the user’s real-time location. The app also displays the user's position on a map, helping responders quickly locate them if needed. Together, these features offer affordable, real-time mobility support for the visually impaired.
For the hardware, we used a Raspberry Pi 3B connected to a basic webcam to continuously scan the user's environment. We integrated a TensorFlow Lite object detection model optimized for lightweight devices and processed the video feed in grayscale to reduce computational load. The frame was divided into left, center, and right zones, and the device calculated the obstacle density in each zone to decide on the safest path. Bluetooth communication was established between the Pi and the smartphone, allowing real-time transmission of navigation guidance.
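The zone-based decision step can be sketched in a few lines. This is a minimal illustration rather than the project's actual code: `choose_direction`, its `(xmin, ymin, xmax, ymax)` box format, and the tie-breaking rule are our assumptions about one reasonable way to score the three zones from the detector's bounding boxes.

```python
def choose_direction(boxes, frame_width, frame_height):
    """Pick the clearest path from object-detection results.

    `boxes` holds (xmin, ymin, xmax, ymax) pixel boxes reported by the
    detector. The frame is split into left / center / right thirds,
    each zone is scored by how much box area falls inside it, and the
    least-obstructed direction wins, preferring "forward" on ties.
    """
    third = frame_width / 3
    zones = {
        "left": (0.0, third),
        "forward": (third, 2 * third),
        "right": (2 * third, frame_width),
    }
    coverage = dict.fromkeys(zones, 0.0)
    for xmin, ymin, xmax, ymax in boxes:
        height = max(0.0, ymax - ymin)
        for name, (z0, z1) in zones.items():
            # Horizontal overlap of this box with the zone.
            overlap = max(0.0, min(xmax, z1) - max(xmin, z0))
            coverage[name] += overlap * height
    zone_area = third * frame_height
    density = {name: c / zone_area for name, c in coverage.items()}
    # Lowest obstacle density wins; "forward" breaks ties.
    return min(density, key=lambda n: (density[n], n != "forward"))
```

With a large box covering the left third of the frame, this returns "forward"; with the left clear and the center and right blocked, it returns "left". Preferring "forward" on ties keeps the spoken guidance from flip-flopping when the path ahead is clear.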
On the mobile side, we built the app with React Native and Expo. We incorporated Firebase Authentication to handle login, signup, and password recovery, and designed an interface that lets users add emergency contacts easily. We used the Geolocation API and react-native-maps to track and display the user's live position, and Twilio to send SMS alerts. The entire app was designed to be lightweight and accessible to visually impaired users, who rely on clarity and a simple interface.
One of the major challenges we faced was optimizing the Raspberry Pi's performance. Running real-time object detection on a low-power device caused significant lag when using full-color video streams. To solve this, we converted frames to grayscale, reduced the camera resolution, and carefully controlled the frequency of AI inferences to stay responsive while still detecting obstacles accurately. Another challenge was setting up reliable Bluetooth communication between the Raspberry Pi and the mobile device: managing connections, handling disconnects, and ensuring low-latency message delivery required extensive troubleshooting and testing. On the app development side, building a smooth and intuitive user experience within Expo's limitations involved a steep learning curve. We had to integrate real-time location tracking, secure user authentication, and emergency contact management, and still deliver a clean design, all under tight time constraints. Finally, coordinating the hardware, the AI model, and the mobile software into one system was a complex task, requiring heavy debugging and fast iteration.
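The inference-throttling idea above can be sketched as follows. This is an illustrative sketch, not our production camera loop; `ThrottledDetector` and its parameters are hypothetical names, and `detect_fn` stands in for whatever wraps the TensorFlow Lite invoke call.

```python
import time


class ThrottledDetector:
    """Cap how often an expensive detector runs, reusing the previous
    result for skipped frames so the camera loop stays responsive."""

    def __init__(self, detect_fn, max_fps=4.0):
        self.detect_fn = detect_fn        # e.g. a TFLite inference wrapper
        self.min_interval = 1.0 / max_fps
        self.last_time = float("-inf")    # force inference on the first frame
        self.last_result = []

    def process(self, frame):
        now = time.monotonic()
        if now - self.last_time >= self.min_interval:
            self.last_time = now
            self.last_result = self.detect_fn(frame)
        return self.last_result           # cached result for skipped frames
```

Capping inference at a few frames per second, combined with grayscale frames and reduced resolution, is what kept the Pi's capture loop running smoothly between detections.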
We are proud that we successfully built a functional AI-powered navigation device for under $40. Achieving real-time obstacle detection, decision-making, and Bluetooth transmission on a low-resource device like the Raspberry Pi was a major accomplishment. We are also proud that we built a fully working mobile application that supports user authentication, emergency contact management, real-time location tracking, and a user-friendly interface. Bringing together the hardware and mobile sides into a complete system that could truly help visually impaired users was an accomplishment that made all the obstacles worth it.
Throughout this project, we learned how to deploy and optimize TensorFlow Lite models for real-time inference on limited hardware like the Raspberry Pi. We gained experience in setting up reliable Bluetooth communication between embedded systems and smartphones. On the app side, we became much more familiar with React Native, Expo, Firebase Authentication, and real-time location tracking.
We plan to improve the AI obstacle detection by training a custom model focused on common hazards like stairs, curbs, and potholes. We also aim to conduct broader user testing with visually impaired individuals and refine the Smart Cane experience based on real-world feedback. Our ultimate goal in building this technology remains helping communities and individuals in need.