Inspiration
Navigator addresses challenges posed by neurodegenerative conditions such as Alzheimer's disease and other dementias. These conditions affect millions globally, including ~6.9 million Americans age 65+, a figure projected to potentially double by 2060. Difficulty recognizing faces and objects causes confusion, anxiety, reduced independence, and significant caregiver burden. Over 11 million unpaid caregivers provided ~18.4 billion hours of care in 2023 (valued at ~$346 billion). Alzheimer's is progressive and a leading cause of death for older adults. Navigator aims to meet the need for tools supporting daily living, safety, and independence.
What it does
Navigator targets the difficulty individuals with cognitive decline face in recognizing familiar faces and objects. It uses augmented reality (AR) and object/facial recognition to potentially improve quality of life. The app uses the smartphone camera to identify pre-registered faces/objects, overlaying clear text labels (e.g., "Grant - Son", "Kettle") onto the live view. This aims to reduce cognitive load, provide immediate context, and support social connection. Furthermore, by providing cognitive stimulation through interaction with the technology, it may help maintain cognitive function.
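The recognition-to-label step described above can be sketched as a nearest-neighbor lookup: a face-recognition library (such as face-api.js) reduces the face in view to a numeric descriptor, which is compared against pre-registered entries. This is a minimal illustration, not Navigator's actual implementation; the registry values and the 0.6 threshold are invented for the example.

```javascript
// Pre-registered entries: a caregiver enrolls each face/object once,
// storing a descriptor alongside the label to overlay.
// (Descriptors here are tiny toy vectors; real ones are e.g. 128-dim.)
const registry = [
  { label: "Grant - Son", descriptor: [0.1, 0.9, 0.3] },
  { label: "Kettle",      descriptor: [0.8, 0.2, 0.5] },
];

// Euclidean distance between two descriptors of equal length.
function euclidean(a, b) {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

// Return the registered label closest to the live descriptor,
// or null if nothing is within the (illustrative) match threshold.
function matchLabel(descriptor, threshold = 0.6) {
  let best = null;
  let bestDist = Infinity;
  for (const entry of registry) {
    const d = euclidean(descriptor, entry.descriptor);
    if (d < bestDist) {
      bestDist = d;
      best = entry.label;
    }
  }
  return bestDist <= threshold ? best : null;
}
```

Returning null for weak matches matters here: showing a wrong name to a user with dementia is worse than showing nothing.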
How we built it
The hackathon Minimum Viable Product (MVP) was built as a progressive web app in JavaScript, enabling rapid development. It combines image/facial recognition with an AR overlay of the resulting labels. User experience (UX) design was critical given the target users' cognitive impairment. Key principles included a minimalist interface, high-contrast/large fonts for readability, predictable layout consistency, and passive information display upon recognition. These choices aimed for maximum accessibility and demonstrated feasibility within the hackathon context.
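The overlay step relies on one small piece of coordinate math: detectors report bounding boxes in the video's native resolution, which must be scaled to the on-screen element before a label can be drawn over the live view. A minimal sketch, with illustrative function and parameter names (not Navigator's actual code):

```javascript
// Scale a detection box from the camera's native pixel space
// (video.width x video.height) to the on-screen display size.
function toScreenBox(box, video, display) {
  const sx = display.width / video.width;
  const sy = display.height / video.height;
  return {
    x: box.x * sx,
    y: box.y * sy,
    width: box.width * sx,
    height: box.height * sy,
  };
}

// Place the text label just above the scaled box (clamped to the
// top edge), matching the high-contrast, large-font overlay style.
function labelPosition(screenBox) {
  return { x: screenBox.x, y: Math.max(0, screenBox.y - 24) };
}
```

In the app this position would feed an absolutely positioned DOM element or a canvas draw call layered over the `<video>` element.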
Challenges we ran into
Current limitations include being phone-based (not hands-free), having a restricted recognition library, and requiring a controlled demo environment. Refining recognition accuracy and performance is an ongoing challenge. There was a desire to integrate more sophisticated machine learning for broader default object recognition. Significant ethical considerations regarding user autonomy, privacy, and preventing distress also required careful thought.
Accomplishments that we're proud of
We successfully built a functional JavaScript MVP demonstrating Navigator's core concept: real-time camera feed, recognition, and AR overlays. We implemented a user-centric, accessible design and applied AR innovatively for cognitive assistance. The project addresses a significant societal need. Research suggests AR tools providing cognitive engagement may help maintain function or slow decline, adding another layer of potential impact.
What we learned
We gained practical experience implementing basic AR and real-time image/facial recognition within a mobile web framework, navigating the available tools and libraries. We learned the necessity of proactively addressing ethical dimensions like user autonomy, data privacy, and potential distress when creating assistive technologies.
What's next for Navigator
The long-term vision involves transitioning to AR glasses (leveraging our Blender designs) for a hands-free, natural information overlay. Future development includes:
- Expanding ML libraries for baseline object/location recognition.
- Offering Navigator to institutions (e.g., nursing homes) with staff databases.
- Adding time-based medication reminders with simple head-gesture interaction (nod/shake).
- Developing a wearer database for caregivers to track routine information.
- Conducting thorough user testing with people with dementia and caregivers.
- Integrating simple cognitive exercises or puzzles.
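The planned medication reminders could reduce to a simple schedule check plus an acknowledgement step (the nod/shake gesture would mark a reminder as handled). A hypothetical sketch under those assumptions; names and the minutes-since-midnight time format are illustrative:

```javascript
// Return the messages of reminders that are due (time reached) and
// not yet acknowledged. Times are minutes since midnight.
function dueReminders(schedule, now) {
  return schedule
    .filter((r) => !r.acknowledged && now >= r.time)
    .map((r) => r.message);
}

// Mark a reminder as acknowledged, e.g. after a head-nod gesture is
// detected. Returns a new schedule; the original is left unchanged.
function acknowledge(schedule, message) {
  return schedule.map((r) =>
    r.message === message ? { ...r, acknowledged: true } : r
  );
}
```

Keeping this logic pure (no timers or I/O inside) would make it easy to unit-test before wiring it to real clock events and gesture input.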
Built With
- api
- blender
- figma
- javascript