NeuroVault helps people with memory loss: caregivers upload photos with context, the system auto-indexes them, and the patient can snap a photo to identify who or what they are looking at and instantly get the matching memory explained in simple language.
Inspiration
Memory loss can turn ordinary moments into stressful ones.
We wanted something that helps people recognize familiar faces and objects quickly, without making them navigate a maze of menus.
Caregivers already "hold the context," so we built a system that captures that context once and then makes it instantly retrievable for the patient.
What it does
NeuroVault is a caregiver assisted memory system for people with memory loss.
Caregiver side: Upload a photo and a short explanation (who it is and why it matters). It gets stored in a database and automatically indexed into a semantic "brain" so it is searchable in natural language
Patient side: Take a photo. NeuroVault identifies who or what they are looking at using a vision service, instantly pulls the matching memory, and explains it in simple language
The goal is fast, private recall with almost no steps for the patient
How we built it
Patient UI: Next.js and React for camera capture and an accessible one-tap "Who is this?" flow
Caregiver portal: Next.js page that uploads text and an optional image
Storage: MongoDB as the source of truth for memories and metadata
Vision service: FastAPI and DeepFace with Facenet512 embeddings, plus a labels file that maps reference images to memory IDs; hosted on Hugging Face Spaces
Reference images: Stored in a Hugging Face dataset and pulled at runtime with snapshot_download
Indexing: An ingest script that batches pending MongoDB memories and uploads them into the semantic-search namespace for retrieval
Automation: A cron-style runner on a GitHub Actions schedule that periodically ingests pending memories
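The indexing and automation steps above amount to a small batch loop: fetch pending memories, upsert them into the search namespace, and mark them done. A minimal sketch in Python, using in-memory stand-ins for MongoDB and the search index (all names here are illustrative, not the actual NeuroVault code):

```python
from datetime import datetime, timezone

# In-memory stand-ins for the MongoDB collection and the search namespace.
memories = [
    {"_id": "mem-1", "text": "This is Anna, your granddaughter.", "status": "pending"},
    {"_id": "mem-2", "text": "Your old house on Elm Street.", "status": "pending"},
    {"_id": "mem-3", "text": "Dr. Lee, your physician.", "status": "indexed"},
]
search_index = {}  # memory_id -> document, simulating the semantic-search namespace

def fetch_pending(batch_size=100):
    """Return up to batch_size memories still awaiting indexing."""
    return [m for m in memories if m["status"] == "pending"][:batch_size]

def index_batch(batch):
    """Upsert a batch into the search namespace, then mark each memory indexed."""
    for mem in batch:
        search_index[mem["_id"]] = {"text": mem["text"]}  # upsert by ID = safe to re-run
        mem["status"] = "indexed"
        mem["indexed_at"] = datetime.now(timezone.utc).isoformat()

def run_ingest():
    """One cron tick: drain pending memories in batches."""
    while batch := fetch_pending():
        index_batch(batch)

run_ingest()
print(len(search_index))  # only the two pending memories get (re)indexed
```

Because the upsert is keyed by memory ID and the status flips to "indexed", a scheduled re-run that overlaps a previous one does no harm.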
Challenges we ran into
Hugging Face Spaces rejected pushes of binary image files, so we moved reference images to a Hugging Face dataset and download them at runtime
Repo structure changes broke relative paths and caused missing-image errors until we normalized paths and validated the snapshot directory
Calls from Vercel to the vision API needed careful routing and headers to avoid CORS and proxy issues
MongoDB Atlas connectivity caused 500 errors due to IP allowlisting and connection settings
Datetime and metadata formatting mattered a lot for ingestion reliability and retries
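The datetime issue in particular came down to normalizing every timestamp to one canonical form before ingestion, so retries compare and sort reliably. A hedged sketch (the accepted input shapes here are assumptions, not the exact formats we handled):

```python
from datetime import datetime, timezone

def normalize_timestamp(value):
    """Coerce common timestamp shapes into one canonical ISO 8601 UTC string.
    Illustrative sketch; the set of accepted shapes is an assumption."""
    if isinstance(value, datetime):
        dt = value
    elif isinstance(value, (int, float)):          # Unix epoch seconds
        dt = datetime.fromtimestamp(value, tz=timezone.utc)
    elif isinstance(value, str):                   # ISO 8601, tolerating a trailing "Z"
        dt = datetime.fromisoformat(value.replace("Z", "+00:00"))
    else:
        raise TypeError(f"unsupported timestamp: {value!r}")
    if dt.tzinfo is None:                          # treat naive values as UTC
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc).isoformat()

print(normalize_timestamp("2025-01-05T10:00:00Z"))  # -> 2025-01-05T10:00:00+00:00
```

Storing only the normalized string means a retried ingest sees the same value as the first attempt instead of a formatting variant.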
Accomplishments that we're proud of
Built a working end-to-end flow from caregiver upload to database to auto-indexing to patient capture to identification to memory recall
Deployed a vision microservice on Hugging Face Spaces that bootstraps itself by downloading reference data on startup
Improved the patient UX by removing extra steps and auto triggering identification after capture
Designed a simple ingestion pipeline with status tracking and retries so it can run unattended
What we learned
Deploying ML is as much about packaging, storage, and environment variables as it is about the model
Having a single source of truth in MongoDB made debugging and automation much easier
Small UX changes, like one tap instead of multiple buttons, make a big difference for accessibility
Cron-based ingestion works well if you build in idempotency and clear retry behavior
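The idempotency-plus-retries point can be made concrete with a tiny status machine; `index_fn` and the status names here are hypothetical, not the real pipeline:

```python
def ingest_with_retry(memory, index_fn, max_attempts=3):
    """Try to index one memory with bounded retries, recording a status
    that a later unattended cron run can act on. Illustrative sketch only."""
    for attempt in range(1, max_attempts + 1):
        try:
            index_fn(memory)
            memory["status"] = "indexed"  # terminal success state
            return True
        except Exception:
            # Leave a status the next cron tick can pick up.
            memory["status"] = "retrying" if attempt < max_attempts else "failed"
    return False

# Simulate an index call that fails twice, then succeeds.
calls = {"n": 0}
def flaky_index(memory):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient search-index error")

memory = {"_id": "mem-1", "status": "pending"}
ok = ingest_with_retry(memory, flaky_index)
print(ok, memory["status"])  # True indexed
```

Because every outcome lands in an explicit status ("indexed", "retrying", or "failed"), a scheduled run never has to guess what the previous run left behind.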
What's next for NeuroVault
Add authentication and tighter privacy controls, including per-patient access and encryption at rest
Improve recognition accuracy with better reference collection, calibration, and smarter thresholds
Expand beyond faces to more object memories and multi-photo profiles per person
Move from scheduled ingestion to near-real-time ingestion using a queue and worker
Add caregiver tools to edit memories, merge duplicates, and view an audit history