Inspiration

Memora began with one of our team members watching her grandmother navigate life with dementia, and realizing that what she was losing wasn't just memory. It was routine, identity, and the feeling of being safe in her own world. That personal experience opened a door into a much larger crisis in BC's nonprofit senior homes, and we built Memora to give back to the sector that needs it most.

What it does

Memora is an AI-powered AR companion that guides dementia patients through their daily routines by voice: it reminds them where they are, who their family members are, and what comes next in their day. Caregivers handle all setup through a simple calendar, so patients keep their identity, families get peace of mind, and caregivers can focus on the moments only they can provide.

How we built it

Memora AI was built as a full-stack, voice-first application using a React (Vite) frontend and a FastAPI backend. The frontend handles user interaction, including voice input through the browser’s Web Speech API and location data via the Geolocation API, then sends structured requests to the backend.

On the backend, we used LangGraph to orchestrate intent-based workflows, allowing the system to classify user input (e.g., routine, memory recall, weather, calming support) and route it to the appropriate service. LLM responses are generated through the OpenAI SDK via OpenRouter, while personalized memory recall is powered by FAISS vector search combined with sentence-transformers embeddings. For real-world context, we integrated external APIs like OpenStreetMap Nominatim for reverse geocoding and Open-Meteo for live weather data.

Voice responses are generated using ElevenLabs text-to-speech and streamed back to the frontend for playback. Data such as tasks, routines, and known people are stored in lightweight JSON files, enabling quick iteration without a full database, while a FAISS index supports efficient semantic retrieval for memory features.
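The heart of the backend is the intent-routing step. In the real app LangGraph orchestrates this; the sketch below is a deliberately simplified, dependency-free illustration of the same idea, and every name in it (the intents, keywords, and handler messages) is illustrative rather than taken from our codebase.

```python
# Simplified sketch of Memora-style intent routing. The production app uses
# LangGraph; here the "graph" is just a keyword classifier plus a dispatch
# table, to show how one spoken utterance fans out to different services.

INTENT_KEYWORDS = {
    "memory":  ("who is", "remember", "recognize"),
    "weather": ("weather", "rain", "cold", "sunny"),
    "routine": ("what's next", "schedule", "next", "today"),
    "calming": ("scared", "lost", "anxious", "help"),
}

def classify_intent(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return "smalltalk"  # fall through to a general LLM response

HANDLERS = {
    "memory":    lambda t: "Searching the FAISS memory index.",
    "weather":   lambda t: "Calling the live weather service.",
    "routine":   lambda t: "Looking up the caregiver's calendar.",
    "calming":   lambda t: "Switching to a slow, reassuring script.",
    "smalltalk": lambda t: "Answering with the general LLM.",
}

def route(utterance: str) -> str:
    """Dispatch the utterance to the service matching its intent."""
    return HANDLERS[classify_intent(utterance)](utterance)
```

A real classifier would use the LLM rather than keywords, but the shape is the same: classify once, then route to a handler with its own logic, data sources, and response structure.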

Challenges we ran into

One of the most grounding moments of this project came when we spoke to a real nurse who works with dementia patients daily. She immediately pushed back on our early designs, pointing out that no matter how intuitive we thought the interface was, anything requiring a tap, a swipe, or a login would be a barrier for this population. That single conversation fundamentally reshaped how we thought about Memora. It's why the patient interface became entirely voice-driven, and why all complexity was moved to the caregiver backend. Building for the most vulnerable user in the room forced us to simplify in ways we hadn't anticipated.

Accomplishments that we're proud of

On the technical side, we're proud of what we were able to pull together under real time pressure. One of our biggest achievements was integrating vastly different backend routes into one seamless experience: asking Memora "what's the weather today?" and asking "who is this person?" are completely different operations under the hood, requiring different logic, different data sources, and different response structures. Getting those to work together in a single, natural conversation flow, without ever exposing the backend complexity to the user, is something we are very proud of.

We also integrated an LLM to generate natural, contextually aware responses, and paired it with ElevenLabs for voice output, which gave Memora something most care tools completely overlook: a voice that actually sounds warm and human. For a product designed for dementia patients who rely on familiarity and calm, that wasn't a nice-to-have. It was essential.

Overall, we’re extremely proud of what our technology could mean. This started with a grandmother. And somewhere in BC right now, there are 85,000 people just like her, losing their routines, their confidence, and their sense of self, and 50,000 caregivers and family members absorbing that loss alongside them. Building something that could genuinely reach those people, in their own language, feels very special to us.

What we learned

The biggest thing we learned is that the best technology is seamless: it disappears. Once we chose the segment we were building for, we realized that what we had to build was something invisible, something that slips into a dementia patient's morning so naturally that they don't experience it as technology at all. They just experience their day. Getting there required us to unlearn a lot of our instincts as builders, because every time we added a feature that felt impressive, we had to ask ourselves: would an 80-year-old with dementia experience this as helpful, or as noise? That filter made us better designers, and it kept us focused on actually alleviating the burden on caregivers.

What's next for Memora

Our immediate next step is a pilot with a small, real group of early-stage dementia patients at the Arbutus Shaughnessy Kerrisdale Friendship Society for Seniors, a nonprofit senior home right here in the community we built this for. We want to measure what actually changes. Does routine adherence improve? Does caregiver-reported stress go down?

On the product side, we have two clear near-term priorities. First, a full UI upgrade. Our current prototype is functional, but we know the interface that dementia patients and caregivers interact with daily needs to be warmer, simpler, and more intuitive. We've already designed what that next version looks like, and we're excited to show you where it's going. The goal is an interface so calm and familiar that patients don't experience it as technology at all. They just experience their day.

Second, we're building toward a full AR tracker integration. Using device-based GPS and AR overlays, Memora will be able to detect when a patient has drifted from their expected routine location and gently guide them back with simple visual pathway prompts directly on their screen. Not surveillance, but orientation. The difference is that Memora doesn't alert a caregiver every time a patient moves. It quietly helps the patient find their way first, preserving their independence before escalating to human support.
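The drift check itself reduces to a distance calculation between two GPS fixes. Here is a minimal sketch using the haversine formula; the threshold value and coordinates are illustrative placeholders, not settings from our prototype.

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two GPS fixes."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative threshold; a real deployment would tune this per care plan.
DRIFT_THRESHOLD_M = 150

def has_drifted(current: tuple, expected: tuple) -> bool:
    """True if the patient is farther than the threshold from the expected spot."""
    return haversine_m(*current, *expected) > DRIFT_THRESHOLD_M
```

In the app, `has_drifted` returning true would first trigger the gentle on-screen pathway prompts; only sustained drift would escalate to a caregiver.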

But our north star metric isn't a number. It's our team member's grandmother. Everything we build next will be measured against one question: can Memora help her hold onto her routine, her independence, and her sense of self a little longer? If the answer is yes, we know we're building the right thing.
