Inspiration

We were inspired by the challenges visually impaired individuals face when navigating everyday environments safely and independently. We wanted to build a tool that uses AI to give them real-time awareness of their surroundings through sound.

What it does

Blino detects people and objects through the phone's camera and instantly describes them with voice feedback, telling the user what is in front of them. It helps users understand their surroundings without relying on sight.

How we built it

We built Blino with React Native and TensorFlow.js, using COCO-SSD for real-time object detection and Expo Speech for voice feedback. The app runs smoothly on mobile devices and responds with low latency.
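As a minimal sketch of how these pieces fit together (assuming the standard @tensorflow-models/coco-ssd and expo-speech APIs; the function names, the lite_mobilenet_v2 base, and the 0.5 score threshold are illustrative choices, not exact details of our app):

```typescript
import * as tf from '@tensorflow/tfjs';
import * as cocoSsd from '@tensorflow-models/coco-ssd';
import * as Speech from 'expo-speech';

let model: cocoSsd.ObjectDetection | null = null;

// Load COCO-SSD once at startup. The lite_mobilenet_v2 base trades a
// little accuracy for the speed needed on mobile hardware.
export async function loadModel(): Promise<void> {
  await tf.ready();
  model = await cocoSsd.load({ base: 'lite_mobilenet_v2' });
}

// Detect objects in one camera frame and speak a short description.
// `frame` would come from the tfjs-react-native camera adapter.
export async function describeFrame(frame: tf.Tensor3D): Promise<void> {
  if (!model) return;
  const predictions = await model.detect(frame);
  const confident = predictions.filter((p) => p.score > 0.5);
  if (confident.length === 0) return;
  // e.g. "person ahead, chair ahead"
  Speech.speak(confident.map((p) => `${p.class} ahead`).join(', '));
}
```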

Challenges we ran into

We ran into accuracy problems with small or overlapping objects and had to balance detection speed against precision on mobile hardware. Timing the voice feedback so announcements didn't overlap or repeat was also tricky.
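One way to stabilize the feedback timing, sketched here with an assumed 2.5-second cooldown and a hypothetical `announce` helper, is to drop rapid repeats and never start a new utterance while one is still playing:

```typescript
import * as Speech from 'expo-speech';

let lastSpoken = '';
let lastSpokenAt = 0;
const MIN_INTERVAL_MS = 2500; // assumed cooldown, tuned by ear

// Announce a description unless it repeats the last one too soon,
// and never talk over an utterance that is still playing.
export async function announce(description: string): Promise<void> {
  const now = Date.now();
  if (description === lastSpoken && now - lastSpokenAt < MIN_INTERVAL_MS) {
    return; // skip near-duplicate announcements
  }
  if (await Speech.isSpeakingAsync()) {
    return; // wait for the current utterance to finish
  }
  lastSpoken = description;
  lastSpokenAt = now;
  Speech.speak(description);
}
```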

Accomplishments that we're proud of

We built a working prototype that detects people and objects in real time and turns the detections into clear audio feedback. We're proud that Blino can make daily life more accessible for visually impaired users.

What we learned

We learned how to optimize TensorFlow.js models for mobile, manage camera permissions cleanly, and improve the user experience through accessibility-focused design and speech integration.
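On the permissions side, a minimal sketch of the up-front check (the `ensureCameraAccess` name is ours; newer Expo SDKs also expose an equivalent `useCameraPermissions` hook):

```typescript
import { Camera } from 'expo-camera';

// Request camera access before starting detection; returns true only
// when the user has granted permission.
export async function ensureCameraAccess(): Promise<boolean> {
  const { status } = await Camera.requestCameraPermissionsAsync();
  return status === 'granted';
}
```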

What's next for Blino

We plan to expand Blino with GPS integration, obstacle distance estimation, and multilingual support. Our goal is to develop a fully deployable app that empowers visually impaired individuals worldwide.

Built With

coco-ssd, expo-speech, react-native, tensorflow.js