Inspiration
My grandmother used to hide her jewelry in the freezer — not because she was eccentric, but because Alzheimer's had rewritten the logic of her world. She'd lose her glasses, then lose the morning, then lose the day. The hardest part wasn't the forgetting. It was watching her know she was forgetting.
We built Vigil for her, and for the 5.8 million Americans living with Alzheimer's who deserve more than a reminder app.
What It Does
Vigil is an AI caretaker that sees the world through the patient's camera and acts as their second memory:
- "Where are my keys?" (\rightarrow) Vigil saw them on the kitchen counter (10) minutes ago
- Medication overdue? (\rightarrow) A gentle nudge, not a lecture
- Doctor at (2\text{:}00) PM? (\rightarrow) Reminds them to get ready at (1\text{:}30)
- Patient falls? (\rightarrow) Calls their daughter within seconds
It speaks in short, warm sentences — like a friend who never tires of the same question.
How We Built It
A single agentic loop orchestrated by Railtracks, with 8 tool nodes:
Speech → classify → Intent → gather → Context → reason → Action → TTS → Response
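In plain Python, the loop looks roughly like this. It is a minimal sketch with hypothetical helper names, not the actual Railtracks API; the real classify and reason steps are LLM calls:

```python
# Illustrative sketch of the single agentic loop. Names are hypothetical,
# not the actual Railtracks API.
from dataclasses import dataclass, field

@dataclass
class Turn:
    transcript: str                       # text from speech-to-text
    intent: str = "chat"                  # e.g. "find_object", "emergency", "chat"
    context: dict = field(default_factory=dict)
    reply: str = ""                       # short, warm sentence to speak (empty = stay silent)

def classify_intent(text: str) -> str:
    # In Vigil this is an LLM call; a trivial keyword stand-in here.
    lowered = text.lower()
    if "where" in lowered:
        return "find_object"
    if "help" in lowered or "fell" in lowered:
        return "emergency"
    return "chat"

def gather_context(intent: str) -> dict:
    # The real version fans out to object memory, calendar, medication log, weather, ...
    if intent == "find_object":
        return {"keys": ("kitchen counter", "10 minutes ago")}
    return {}

def reason(intent: str, context: dict) -> str:
    # The real version is an LLM call constrained by the warm-tone system prompt.
    if intent == "find_object" and "keys" in context:
        place, when = context["keys"]
        return f"Your keys are on the {place}. I saw them there {when}."
    if intent == "emergency":
        return "I'm calling your daughter right now. Help is on the way."
    return ""

def handle_turn(transcript: str) -> str:
    turn = Turn(transcript=transcript)
    turn.intent = classify_intent(turn.transcript)
    turn.context = gather_context(turn.intent)
    turn.reply = reason(turn.intent, turn.context)
    return turn.reply   # the TTS node speaks this if it is non-empty

print(handle_turn("Where are my keys?"))
```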
- Vision: Claude analyzes camera frames in 5-second windows, tracking objects with confidence scores
- Tools: Calendar, medication tracking, routine logging, weather, object memory, emergency calls (Twilio), speech (TTS)
- State: In-memory context for real-time queries, plus SQLite for persistent medication and routine data
- Data: Nexla pipelines sync care data for family visibility
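A rough sketch of that state split, object sightings kept in memory for instant answers and medication events persisted in SQLite. The table and field names are our illustration, not Vigil's exact schema:

```python
# Sketch of the state layer: in-memory object sightings + SQLite medication log.
# Schema and names are illustrative, not Vigil's exact implementation.
import sqlite3
import time

# In-memory object memory: latest confident sighting per object from the 5-second vision windows.
object_memory: dict[str, dict] = {}

def record_sighting(label: str, location: str, confidence: float) -> None:
    # Keep only the most recent sighting that clears a confidence threshold.
    if confidence >= 0.6:
        object_memory[label] = {"location": location, "confidence": confidence, "ts": time.time()}

def last_seen(label: str) -> str:
    sighting = object_memory.get(label)
    if not sighting:
        return f"I haven't spotted your {label} recently."
    minutes = int((time.time() - sighting["ts"]) // 60)
    return f"I saw your {label} on the {sighting['location']} about {minutes} minutes ago."

# SQLite for data that must survive a restart: medications and routines.
db = sqlite3.connect("vigil.db")
db.execute("CREATE TABLE IF NOT EXISTS med_events (med TEXT, scheduled_at TEXT, taken_at TEXT)")

def log_medication(med: str, scheduled_at: str, taken_at: str | None) -> None:
    db.execute("INSERT INTO med_events VALUES (?, ?, ?)", (med, scheduled_at, taken_at))
    db.commit()

record_sighting("keys", "kitchen counter", 0.91)
print(last_seen("keys"))
log_medication("donepezil", "2024-05-01T08:00", None)   # taken_at None means still overdue
```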
Challenges
The hardest problem was tonal. An Alzheimer's patient may ask the same question five times in an hour. The system can never sound annoyed — never say "I already told you." We spent more time on the system prompt than on any single tool.
The second hardest: teaching the agent when not to speak. Someone peacefully watching TV doesn't need an AI interrupting. Silence is often the right answer.
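One way to express that gate, as a sketch of the idea rather than Vigil's actual rules (the event and state names here are our illustration):

```python
# Sketch of the "should the agent speak at all?" gate. Event and state names are illustrative.
SPEAK_TRIGGERS = {"direct_question", "medication_overdue", "appointment_soon", "fall_detected"}

def should_speak(event_type: str, patient_state: str) -> bool:
    # Safety events always get a voice, even if the patient is resting.
    if event_type == "fall_detected":
        return True
    # While the patient is calmly occupied, only interrupt for things they need right now.
    if patient_state in {"watching_tv", "resting"} and event_type != "direct_question":
        return event_type in {"medication_overdue", "appointment_soon"}
    return event_type in SPEAK_TRIGGERS

print(should_speak("ambient_motion", "watching_tv"))      # False: silence is the right answer
print(should_speak("medication_overdue", "watching_tv"))  # True: a gentle nudge
```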
What We Learned
Building for vulnerable users changes every decision. A 500 error can't simply surface as a crash — it has to become "I'm sorry, could you try again?" spoken in a warm voice. The "agent" paradigm fits perfectly: not a chatbot, but a presence that watches, waits, and steps in only when needed. That's what a great caretaker does.
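In practice that meant wrapping every tool call so failures are spoken, not shown. A minimal sketch of the idea, with illustrative helper names:

```python
# Sketch of wrapping tool failures in a warm spoken response instead of surfacing an error.
# speak_warmly() stands in for the TTS node; names are illustrative.
def speak_warmly(text: str) -> None:
    print(f"[TTS] {text}")

def answer_safely(tool, *args):
    try:
        return tool(*args)
    except Exception:
        # Never expose a stack trace or status code to the patient.
        speak_warmly("I'm sorry, I didn't quite catch that. Could you try again?")
        return None

def flaky_calendar_lookup():
    raise TimeoutError("calendar service returned 500")

answer_safely(flaky_calendar_lookup)
```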