Inspiration
Every 3 seconds, someone in the world develops dementia. For the 55 million people living with it today, memories of the people they love, the lives they built, and the moments that defined them slowly fade — and there's no getting them back.
But reminiscence therapy — the practice of guided, structured conversations around personal memories — has been shown to improve mood, cognitive engagement, and quality of life in dementia patients. The problem? It relies entirely on trained therapists or overburdened caregivers delivering it manually, inconsistently, and without any way to track what's working.
I asked myself: what if I could deliver personalized reminiscence therapy on demand, adapt it in real time to a patient's emotional state, and give clinicians actual data on cognitive trends over time? That's Resona.
What it does
Resona AI is an adaptive memory reinforcement platform for dementia patients, built on three layers:
For Caregivers — Structured Memory Profiles
Caregivers build a patient's "memory graph" — organized across life domains like childhood, family, career, and milestones. They tag memories by emotional significance and time period, and mark sensitive topics (deceased loved ones, traumatic events) as off-limits. This gives the AI a rich, safe knowledge base to draw from.
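To make this concrete, here's a minimal sketch of how a memory profile could be modeled. The field names, domains, and weighting scale are illustrative, not the production schema:

```python
from dataclasses import dataclass, field
from enum import Enum


class Domain(Enum):
    CHILDHOOD = "childhood"
    FAMILY = "family"
    CAREER = "career"
    MILESTONES = "milestones"


@dataclass
class Memory:
    """One entry in a patient's memory graph (illustrative schema)."""
    text: str                  # caregiver-written description of the memory
    domain: Domain
    era: str                   # rough time period, e.g. "1980s"
    emotional_weight: int      # 1 (neutral) .. 5 (deeply significant)
    sensitive: bool = False    # off-limits topics are never surfaced
    safe_anchor: bool = False  # pre-tagged re-grounding memory


@dataclass
class MemoryProfile:
    patient_name: str
    memories: list[Memory] = field(default_factory=list)

    def prompt_candidates(self) -> list[Memory]:
        """Memories the AI may draw on, most significant first."""
        allowed = [m for m in self.memories if not m.sensitive]
        return sorted(allowed, key=lambda m: m.emotional_weight, reverse=True)

    def safe_anchors(self) -> list[Memory]:
        """Fallback memories used to re-ground a distressed patient."""
        return [m for m in self.memories if m.safe_anchor and not m.sensitive]
```

The key property is that sensitive entries are filtered out before anything reaches the conversation engine, so an off-limits topic can never be surfaced by accident.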
For Patients — Guided Therapy Sessions
Patients interact with a conversational AI memory companion that guides them through personalized reminiscence prompts grounded in their memory graph. The companion doesn't impersonate anyone — it speaks warmly and naturally while weaving in specific, personal memory cues ("You've mentioned that summer at Lake Tahoe with David in '87 — what do you remember about that trip?").
Behind the scenes, two systems run simultaneously:
- Real-time emotion detection via webcam tracks facial expressions throughout the session. If the patient shows signs of distress or confusion, the AI pivots to a pre-tagged "safe" anchor memory to re-ground them.
- A spaced repetition engine schedules which memories to reintroduce across sessions at increasing intervals — the same cognitive science behind flashcard apps like Anki, applied to personal memory reinforcement.
For Clinicians — Cognitive Analytics Dashboard
After each session, Resona generates a detailed report: emotional response timelines, which memory domains the patient engaged with vs. went blank on, distress events and how the AI handled them, and most importantly — longitudinal cognitive trend tracking across sessions. Clinicians get exportable, actionable data they can bring into care planning.
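As a rough sketch of the per-session aggregation step (the event schema here is assumed for illustration: each logged event carries a `domain`, an `engaged` flag, and an optional `distress` flag):

```python
from collections import Counter


def session_report(events: list[dict]) -> dict:
    """Aggregate one session's event log into a clinician-facing
    summary. Illustrative only; the real report also includes
    emotional response timelines and cross-session trends."""
    engaged = Counter(e["domain"] for e in events if e["engaged"])
    blank = Counter(e["domain"] for e in events if not e["engaged"])
    distress = [e for e in events if e.get("distress")]
    return {
        "domains_engaged": dict(engaged),
        "domains_blank": dict(blank),
        "distress_count": len(distress),
        "engagement_rate": sum(engaged.values()) / len(events) if events else 0.0,
    }
```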
How I built it
- Frontend: Next.js and Tailwind CSS for a clean, accessible interface across the caregiver setup, patient session, and clinician dashboard views
- Conversational AI: Claude API (Anthropic) with a deeply engineered system prompt that ingests the patient's memory graph, respects sensitive topic boundaries, follows spaced repetition scheduling, and includes strict behavioral guardrails for safe interaction with vulnerable users
- Emotion Detection: face-api.js running client-side for real-time facial expression classification via webcam, feeding back into the conversation engine to trigger topic pivots when distress is detected
- Speech Pipeline: Deepgram for real-time speech-to-text transcription, ElevenLabs for natural text-to-speech output
- Backend: FastAPI (Python) orchestrating session logic, the spaced repetition scheduler, memory graph storage, and analytics generation
- Database: Supabase for authentication, patient profile storage, session logs, and analytics data
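To give a flavor of how the memory graph feeds the Claude system prompt, here's a simplified sketch of the assembly step. The actual prompt went through dozens of iterations and carries far more guardrails than shown; the structure and rule wording below are illustrative:

```python
def build_system_prompt(patient_name: str, memories: list[dict],
                        safe_anchors: list[dict]) -> str:
    """Compose a session system prompt from the memory graph.
    Sensitive memories are filtered out before they ever reach
    the model's context."""
    memory_lines = "\n".join(
        f"- [{m['domain']}, {m['era']}] {m['text']}"
        for m in memories if not m["sensitive"]
    )
    anchor_lines = "\n".join(f"- {m['text']}" for m in safe_anchors)
    return f"""You are a warm, patient memory companion for {patient_name}.

Known memories (the ONLY personal facts you may reference):
{memory_lines}

Safe anchor memories (pivot here on any sign of distress):
{anchor_lines}

Rules:
- Never invent or embellish personal details beyond the list above.
- Never impersonate a family member or claim to be human.
- If {patient_name} mentions something not in the list, respond warmly
  but do not confirm or add details; gently steer back to a known memory.
- Never reference a deceased loved one in the present tense."""
```

The point of this structure is that hallucination resistance comes from the data path, not just the instructions: anything the model isn't explicitly given, it's told it cannot assert.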
Challenges I ran into
Prompt engineering for a vulnerable population was far harder than expected. A normal chatbot can afford to hallucinate or go off-script — mine can't. Telling a dementia patient something false about their own life, or accidentally referencing a deceased spouse in present tense, could cause real distress. I went through dozens of iterations on my system prompt to build in hard guardrails: strict adherence to the memory graph, graceful deflection when the patient mentions something outside the knowledge base, and immediate pivots when emotion detection flags distress.
Calibrating the emotion detection sensitivity was another balancing act. Too sensitive and the AI interrupts natural conversation every time a patient pauses or looks confused (which is frequent). Too loose and it misses genuine distress. I landed on a threshold system that looks for sustained negative expression patterns over a window rather than reacting to single frames.
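The windowing idea is simple to state in code. The window length, threshold, and fraction below are placeholder values, not the calibrated ones (and the real pipeline consumes face-api.js expression scores client-side), but the logic is the same:

```python
from collections import deque


class DistressDetector:
    """Flag distress only when negative-expression scores stay high
    across a sustained window of frames, never on a single spike."""

    def __init__(self, window_frames: int = 30, threshold: float = 0.6,
                 min_fraction: float = 0.8):
        self.scores = deque(maxlen=window_frames)
        self.threshold = threshold      # per-frame "negative" cutoff
        self.min_fraction = min_fraction  # share of window that must exceed it

    def update(self, negative_score: float) -> bool:
        """Feed one frame's negative-emotion score (0..1); returns True
        when the session should pivot to a safe anchor memory."""
        self.scores.append(negative_score)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough evidence yet
        high = sum(s >= self.threshold for s in self.scores)
        return high / len(self.scores) >= self.min_fraction
```

A brief confused glance fills only a few frames of the window, so nothing fires; sustained distress saturates it and triggers the pivot.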
Finally, making spaced repetition feel natural in conversation rather than like a clinical drill took real design work. Nobody wants to feel like they're being tested.
Accomplishments that I'm proud of
I'm proud that this isn't a toy. Reminiscence therapy and spaced repetition are both evidence-backed interventions — I didn't invent a new therapy, I made an existing one scalable, adaptive, and measurable. The clinician dashboard alone provides something that doesn't really exist in current dementia care tooling: longitudinal, per-patient cognitive engagement data generated passively from therapy sessions.
I'm also proud of the safety architecture. The sensitive topic avoidance system, the distress-triggered pivots, the explicit choice to not impersonate loved ones — these are design decisions that show I understand who I'm building for.
What I learned
Building for vulnerable populations changes everything about how you approach AI product design. Every default assumption about chatbot behavior — be engaging, ask follow-up questions, be curious — has to be re-examined when your user might not understand that they're talking to an AI, or might become distressed by a memory they can't fully access.
I also learned that the most impressive technical feature isn't always the most impactful one. Voice cloning would have been a flashy demo — but the spaced repetition engine and the clinician dashboard are what make this clinically useful. I chose substance over spectacle.
What's next for Resona AI
- Familiar voice integration: Allowing caregivers to upload voice samples from loved ones so the memory companion can speak in a voice the patient recognizes — with full consent workflows and ethical safeguards
- Multi-language support: Dementia patients often revert to their first language as the condition progresses. Supporting multilingual sessions is critical for real-world adoption
- EHR integration: Connecting the clinician dashboard to electronic health record systems so cognitive trend data flows directly into a patient's medical file
- Caregiver wellness layer: Caregivers burn out. I want to add check-ins and resource surfacing for the people behind the scenes who are doing the hardest work
- Clinical validation: Partnering with memory care facilities to run pilot studies measuring Resona's impact on cognitive engagement and quality of life outcomes against standard reminiscence therapy protocols
Built With
- claude
- face-api.js
- fastapi
- next.js
- react.js
- vercel