Inspiration

Visually impaired students navigate spaces differently: through landmarks, sound cues, and spatial awareness rather than visual signs or maps. Yet most campus navigation tools are built assuming users can see. We were inspired to rethink navigation from an accessibility-first perspective. Instead of adapting sight-based systems, we wanted to design a solution grounded in how visually impaired users actually move through buildings, prioritizing independence, safety, and reliability across campus.

What it does

AccessU is a hybrid indoor–outdoor navigation system designed for visually impaired students. It allows users to:

- Navigate between campus buildings
- Travel through indoor pedways
- Transition outdoors when necessary
- Receive full voice-based navigation
- Detect obstacles in real time using the camera

The system intelligently switches between indoor landmark routing and outdoor GPS guidance.
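The indoor/outdoor hand-off above can be sketched as a small decision function. This is a minimal illustration, not the app's actual code: the names (`NavMode`, `RouteContext`) and the 25 m accuracy cutoff are assumptions.

```kotlin
// Illustrative sketch of the hybrid routing decision. NavMode, RouteContext,
// and the 25 m GPS-accuracy cutoff are hypothetical, not AccessU internals.
enum class NavMode { INDOOR_LANDMARKS, OUTDOOR_GPS }

data class RouteContext(
    val insidePedway: Boolean,       // are we in the indoor pedway network?
    val gpsAccuracyMetres: Double?,  // null when no fix is available
)

// Prefer landmark routing inside pedways, or whenever GPS is missing or
// too inaccurate to trust; otherwise fall back to outdoor GPS guidance.
fun chooseMode(ctx: RouteContext): NavMode =
    if (ctx.insidePedway || ctx.gpsAccuracyMetres == null || ctx.gpsAccuracyMetres > 25.0)
        NavMode.INDOOR_LANDMARKS
    else
        NavMode.OUTDOOR_GPS
```

A real implementation would also need hysteresis so the mode doesn't flap at building entrances.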

How we built it

AccessU is a voice-first Android app for blind and low-vision users at the University of Alberta. Its core features:

- Voice navigation – speak current location and destination (e.g. “CCIS to SUB”) using Android SpeechRecognizer.
- Obstacle detection – the camera with ML Kit Object Detection and TTS gives “Obstacle ahead, move left/right” or “Clear path ahead.”
- Accessible UI – Jetpack Compose screens with high contrast (gold/green) and large text.
- Campus locations – support for buildings like CCIS, ETLC, SUB, CAB, Tory, and bus stops, with variants for STT mishearings.
- Flow control – a listen–beep–record pattern and automatic retries with “Sorry, I couldn’t catch you” when speech fails.

The stack includes Kotlin, Jetpack Compose, CameraX, ML Kit, Android TTS/STT, and Kotlin Coroutines.
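The location matching with mishearing variants could look roughly like the sketch below. The variant lists and phrase patterns here are illustrative examples, not the app's full tables.

```kotlin
// Sketch of speech-to-location matching: strip wrapper phrases, then match
// against per-building variant lists (spelled-out letters, mishearings).
// These tables are illustrative, not AccessU's real data.
val locations = mapOf(
    "CCIS" to listOf("ccis", "c c i s", "see see i s"),
    "SUB"  to listOf("sub", "s u b", "student union building"),
    "ETLC" to listOf("etlc", "e t l c"),
)

fun parseLocation(utterance: String): String? {
    // Remove wrapper phrases like "I want to go to CCIS".
    val cleaned = utterance.lowercase()
        .replace(Regex("\\b(i want to go to|take me to|navigate to|go to)\\b"), "")
        .trim()
    return locations.entries.firstOrNull { (_, variants) ->
        variants.any { cleaned == it || cleaned.contains(it) }
    }?.key
}
```

Substring matching is forgiving of filler words but needs care with short variants; a production version would score candidates instead of taking the first hit.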

Challenges we ran into

- ML model integration: we originally planned TFLite depth estimation (MiDaS) or object detection (YOLOv8), but switched to ML Kit Object Detection because it was easier to integrate and met our needs. The tradeoff is that we still don’t have distance (e.g. “2 metres ahead”).
- Obstacle detection tuning: we had to tune confidence thresholds and center-path zones so alerts weren’t too noisy or too sparse, and we added throttling so TTS doesn’t repeat too often.
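The confidence filtering and TTS throttling described above can be sketched as a small gate in front of the speech queue. The threshold and cooldown values are illustrative, not the app's tuned numbers.

```kotlin
// Sketch of alert gating: drop low-confidence detections, and rate-limit
// repeats of the same message. 0.6 and 3000 ms are illustrative values.
class AlertThrottle(
    private val minConfidence: Float = 0.6f,
    private val cooldownMs: Long = 3000,
) {
    private var lastMessage: String? = null
    private var lastSpokenAt: Long = 0

    /** Returns true if this detection should be spoken now. */
    fun shouldSpeak(message: String, confidence: Float, nowMs: Long): Boolean {
        if (confidence < minConfidence) return false
        // Always announce a changed message; repeat the same message only
        // after the cooldown has elapsed.
        val due = message != lastMessage || nowMs - lastSpokenAt >= cooldownMs
        if (due) {
            lastMessage = message
            lastSpokenAt = nowMs
        }
        return due
    }
}
```

Passing the clock in (`nowMs`) rather than reading it inside keeps the logic testable off-device.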

Accomplishments that we're proud of

- Voice-first design: the app can be used without looking at the screen. Location and destination are chosen by voice, and TTS guides users through navigation and obstacle warnings.
- Real-time obstacle detection: camera + ML Kit detect people, chairs, furniture, plants, etc., and speak directional guidance (“Obstacle ahead, move left/right”) so users can avoid obstacles while walking.
- Robust location parsing: campus locations (CCIS, ETLC, NREF, CAB, SAB, SUB, Tory, bus stops, University Commons) work with spelled abbreviations (“c c i s”), common mishearings, and wrapper phrases (“I want to go to CCIS”).
- Context-aware modes: Walking vs. On-bus mode controls when the camera runs and when obstacle messages are spoken, so the camera and TTS stop when the user is on the bus.
- End-to-end integration: campus routing, obstacle detection, AudioGuide TTS, and navigation flow are wired together and working on device.
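The directional guidance above can be derived from where a detection's bounding box sits relative to the camera frame. This is a minimal sketch; the 40/60 % zone split is an assumption, and the real tuning lives in the app.

```kotlin
// Sketch: map a detection's horizontal box centre to a spoken direction.
// An obstacle on the user's left means "move right", and vice versa.
// The 0.4/0.6 zone boundaries are illustrative, not AccessU's tuned values.
fun guidance(boxCentreX: Float, frameWidth: Float): String {
    val ratio = boxCentreX / frameWidth
    return when {
        ratio < 0.4f -> "Obstacle left, move right"
        ratio > 0.6f -> "Obstacle right, move left"
        else -> "Obstacle ahead, move left or right"
    }
}
```

With ML Kit, `boxCentreX` would come from the detected object's `boundingBox.centerX()` on each analyzed frame.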

What we learned

- Pragmatic tradeoffs: ML Kit was faster to ship than TFLite depth models, but we lost distance estimation. We learned to choose tools that get a working baseline in time.
- Accessibility is integration: it’s about how voice I/O, obstacle detection, and modes work together, not a single feature.
- Clear interfaces help: using shared APIs like AudioGuide.speak() and ObstacleIntegration.isActive made it easier to integrate each person’s work.
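The shared seams named above (`AudioGuide.speak()`, `ObstacleIntegration.isActive`) could be shaped roughly like this. The bodies here are stubs for illustration; in the app they would wrap Android TTS and the CameraX/ML Kit pipeline.

```kotlin
// Stub sketch of the two shared APIs named in the writeup. Only the names
// AudioGuide.speak and ObstacleIntegration.isActive come from the project;
// everything else here is a hypothetical stand-in.
object ObstacleIntegration {
    var isActive: Boolean = false
        private set

    fun start() { isActive = true }   // would start CameraX + ML Kit
    fun stop() { isActive = false }   // e.g. when the user boards the bus
}

object AudioGuide {
    val spoken = mutableListOf<String>()  // stand-in for real TTS output

    fun speak(text: String) {
        spoken += text  // the app would queue this through Android TTS
    }
}
```

Keeping these two singletons as the only cross-team touchpoints lets navigation, obstacle detection, and UI work evolve independently.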

What's next for RouteCause

- Distance estimation: add depth (e.g. MiDaS) or another model to output “Obstacle 2 metres ahead, move right” instead of only “Obstacle ahead, move right.”
- ETS transit integration: use ETS GTFS and real-time APIs for bus routes and “Your stop is next” announcements when the user is on the bus.
- Weather-aware guidance: integrate a weather API and warn about ice or adverse conditions on walking routes.
- Expanded coverage: add more U of A buildings, indoor wayfinding, and broader Edmonton transit coverage.
- User testing with blind/low-vision users: validate and refine the experience with real users and accessibility groups.
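Once a depth model supplies a distance, the planned message upgrade is mostly string composition. A sketch under that assumption, with `distanceMetres` standing in for a future MiDaS-style estimate:

```kotlin
// Sketch of the planned distance-aware alert. distanceMetres is assumed to
// come from a future depth model (e.g. MiDaS); null means "no estimate yet",
// which falls back to today's message.
fun obstacleMessage(direction: String, distanceMetres: Double?): String =
    if (distanceMetres != null)
        "Obstacle ${"%.0f".format(distanceMetres)} metres ahead, $direction"
    else
        "Obstacle ahead, $direction"
```

The null fallback means the existing pipeline keeps working while depth support is rolled out incrementally.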

Built With

- Kotlin
- Jetpack Compose
- CameraX
- ML Kit
- Android TTS/STT
- Kotlin Coroutines
