Inspiration

Navigating everyday spaces is still difficult for people with visual impairments. Simple things like unexpected obstacles, stairs, or objects left in the way can become safety risks. We wanted to build something practical that works in real time and uses a device people already carry (their phone) to make movement safer and more confident.

What it does

Navecho uses the phone's camera to detect obstacles in front of the user and describes them through audio feedback. It helps users understand what's ahead, such as people, walls, steps, or objects, so they can move forward with more awareness and fewer surprises.
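To make this concrete, the step from "detections" to "one short spoken cue" can be sketched as a small prioritization function. This is an illustrative sketch only: the `Detection` shape and the phrase format are assumptions, not Navecho's actual API.

```typescript
// Hypothetical detection result shape (assumed for illustration).
interface Detection {
  label: string;      // e.g. "person", "stairs", "wall"
  distanceM: number;  // estimated distance in meters
}

// Pick the nearest obstacle and phrase it briefly, so the user hears
// one concise cue instead of a list of everything in the frame.
function toAudioCue(detections: Detection[]): string | null {
  if (detections.length === 0) return null;
  const nearest = detections.reduce((a, b) =>
    a.distanceM <= b.distanceM ? a : b
  );
  // Keep the phrase short: label plus a rounded distance.
  return `${nearest.label}, ${Math.round(nearest.distanceM)} meters ahead`;
}
```

Reporting only the nearest obstacle is one possible policy; a real system might also weight by obstacle type (stairs over a distant wall, for instance).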

How we built it

We built Navecho as a mobile app, using React Native for the main interface and logic. On the Android side, we used native components to access the camera and process frames efficiently. Computer vision and on-device inference identify obstacles, and the results are sent back to the app, where they are converted into clear, simple audio cues.
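One piece of that conversion on the JavaScript side is turning a bounding box from the native detector into a coarse direction word. The sketch below assumes the native layer reports pixel-space boxes; the field names are illustrative, not Navecho's real bridge format.

```typescript
// Assumed shape of a bounding box coming back over the bridge.
interface Box {
  x: number;      // left edge of the box, in pixels
  width: number;  // box width, in pixels
}

// Map the box's horizontal center into one of three zones of the frame,
// so the audio cue can say "left", "ahead", or "right".
function directionOf(box: Box, frameWidth: number): "left" | "ahead" | "right" {
  const center = box.x + box.width / 2;
  const third = frameWidth / 3;
  if (center < third) return "left";
  if (center > 2 * third) return "right";
  return "ahead";
}
```

Three zones is a deliberate simplification: finer angular buckets are possible, but coarse directions are easier to absorb while walking.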

Challenges we ran into

Real-time detection was the biggest challenge: processing camera frames fast enough without draining the battery or overheating the phone took a lot of tuning. Another challenge was deciding what information actually matters to the user; too much detail can be overwhelming, so we had to keep feedback short and useful.
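"Keeping feedback short and useful" often comes down to suppressing repeats: the same obstacle detected on every frame should not be announced on every frame. A minimal sketch of that idea, with illustrative names rather than Navecho's actual implementation:

```typescript
// Suppress a cue unless the message changed or enough time has passed.
class CueThrottle {
  private lastMessage: string | null = null;
  private lastTimeMs = -Infinity;

  constructor(private minIntervalMs: number) {}

  // Returns true if the cue should be spoken now, and records it if so.
  shouldSpeak(message: string, nowMs: number): boolean {
    const changed = message !== this.lastMessage;
    const stale = nowMs - this.lastTimeMs >= this.minIntervalMs;
    if (changed || stale) {
      this.lastMessage = message;
      this.lastTimeMs = nowMs;
      return true;
    }
    return false;
  }
}
```

The same pattern extends to frame processing itself, e.g. skipping inference on frames that arrive while the previous one is still being processed.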

Accomplishments that we're proud of

We're proud that Navecho works in real time and runs directly on a mobile device without any external hardware. We also designed it accessibility-first, focusing on audio feedback and simple interactions rather than visuals.

What we learned

We learned that accessibility is not about adding features; it's about removing friction. Small decisions, like the wording of audio prompts or the timing of alerts, make a huge difference. We also gained hands-on experience bridging React Native with native Android code for performance-critical tasks.

What's next for Navecho

Next, we want to improve detection accuracy in complex environments and add better distance and direction awareness. We also plan to test with real users, refine the feedback based on their needs, and expand support to more platforms and use cases.
