Auris

AI-powered early detection for Alzheimer's, combining voice, brain imaging, and continuous patient monitoring into one clinical platform.

The problem

80% of Alzheimer's patients miss the only treatment window that actually matters. That window is two years: the two years right after early onset, when intervention can still slow the disease. Miss it, and the options narrow dramatically.

This isn't a funding problem. The tools doctors have are just not built for early detection. Yearly checkups don't have enough resolution. MRIs don't get approved until decline is already visible. By the time a patient presents with noticeable symptoms, the window is usually already gone.

We think continuous AI monitoring can fix this. Not as a replacement for doctors, but as the layer between appointments that has never existed before.

What it does

Auris monitors patients across three independent data sources: voice, brain scans, and conversation history. All of it feeds into a clinical dashboard where doctors can actually understand what's happening, not just what happened at the last visit.

01 Voice analysis
CNN + TCN model trained on DementiaBank. Scores every 30-second audio batch for dementia probability in real time.

02 MRI brain imaging
AssemblyNet segments 133 brain regions. Volumetry compared against 3,000+ healthy subjects. Interactive 3D viewer.

03 Clinical AI agent
Ask anything in plain language. Chain-of-thought reasoning across voice, MRI, and session history. Alerts fire automatically.

How it works

01 Patient talks to an AI companion
Real-time video sessions via Beyond Presence + LiveKit. Feels like a friendly check-in. Runs from home, as often as needed.

02 Audio is captured and scored
30-second WAV batches go to our custom voice model. MFCC features extracted. Dementia probability score returned per session, building a longitudinal baseline unique to that patient.
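As a rough sketch of that batching-and-scoring loop: the real pipeline extracts MFCC features and runs the CNN + TCN model, but the overall shape (chop a session into 30-second windows, score each, accumulate a per-patient baseline) looks like this. `score_batch` is a stand-in, not the actual model, and every name here is illustrative.

```python
import numpy as np

SAMPLE_RATE = 16_000   # Hz; typical for speech models (assumed, not from the source)
BATCH_SECONDS = 30     # Auris scores 30-second windows

def split_into_batches(audio: np.ndarray, sr: int = SAMPLE_RATE,
                       seconds: int = BATCH_SECONDS) -> list[np.ndarray]:
    """Chop a mono waveform into fixed-length scoring windows."""
    step = sr * seconds
    return [audio[i:i + step] for i in range(0, len(audio) - step + 1, step)]

def score_batch(batch: np.ndarray) -> float:
    """Placeholder for the CNN + TCN model: returns a dementia probability.

    The real model consumes MFCC features; here we just squash a summary
    statistic through a sigmoid so the pipeline is runnable end to end."""
    energy = float(np.mean(batch ** 2))
    return 1.0 / (1.0 + np.exp(-energy))

def score_session(audio: np.ndarray) -> list[float]:
    """One probability per 30-second batch; appended to the patient baseline."""
    return [score_batch(b) for b in split_into_batches(audio)]

# 90 seconds of synthetic audio -> three 30-second scores
session = np.random.default_rng(0).normal(size=SAMPLE_RATE * 90)
scores = score_session(session)
```

Each session's scores extend the patient's longitudinal record, which is what makes trend-based alerting possible later.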

03 MRI scans are segmented on AWS EC2
Doctor uploads a .nii.gz file. AssemblyNet (VolBrain, University of Bordeaux) segments 133 brain regions, measures each volume, and flags anything outside normative bounds. Renders as an interactive 3D point cloud in Three.js.
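The "flag anything outside normative bounds" step can be sketched as a simple range check per region. The region names and bounds below are invented for illustration; the real normative ranges come from AssemblyNet's healthy-subject cohort.

```python
from dataclasses import dataclass

@dataclass
class NormativeRange:
    """Illustrative normative bounds for one brain region's volume
    (e.g. as a percentage of intracranial volume)."""
    low: float
    high: float

# Hypothetical values -- real bounds are derived from the 3,000+ subject cohort.
NORMATIVE = {
    "hippocampus_left": NormativeRange(0.30, 0.55),
    "hippocampus_right": NormativeRange(0.30, 0.55),
}

def flag_regions(volumes: dict[str, float]) -> list[str]:
    """Return the names of regions whose measured volume is out of bounds."""
    flagged = []
    for region, vol in volumes.items():
        bounds = NORMATIVE.get(region)
        if bounds and not (bounds.low <= vol <= bounds.high):
            flagged.append(region)
    return flagged

# A shrunken left hippocampus gets flagged; the right is within range.
flags = flag_regions({"hippocampus_left": 0.22, "hippocampus_right": 0.41})
```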

04 The AI agent connects everything
Built on Anthropic Claude, orchestrated via Daedalus Labs. Semantic search over pgvector embeddings surfaces relevant data before the heavy reasoning runs. Ask it anything about any patient.

05 Alerts fire when it matters
When cognitive decline crosses a threshold, the system flags it immediately. Doctors don't wait for the next appointment to find out something changed.
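A minimal sketch of an alert rule like this, assuming a per-session dementia-probability score: fire on an absolute threshold, or on a sharp jump relative to the patient's own baseline. The threshold values are illustrative, not clinically tuned.

```python
THRESHOLD = 0.7   # illustrative absolute cutoff
JUMP = 0.15       # illustrative rise over the patient's own baseline

def should_alert(history: list[float], latest: float,
                 threshold: float = THRESHOLD, jump: float = JUMP) -> bool:
    """Decide whether the latest score warrants an immediate doctor alert.

    Fires when the score crosses the absolute threshold, or when it rises
    sharply above the mean of this patient's previous sessions."""
    if latest >= threshold:
        return True
    if history and latest - (sum(history) / len(history)) >= jump:
        return True
    return False
```

Comparing against the patient's own baseline is what the longitudinal per-session scoring buys: a score that would be unremarkable for one patient can be an alarming jump for another.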

What we're proud of

A voice dementia detection model running inference on real patient audio, not just a wrapper around an existing API

A 3D interactive brain viewer rendered directly from MRI segmentation output in Three.js/WebGL

A natural language clinical assistant that answers specific questions about specific patients with actual reasoning

One end-to-end pipeline connecting patient conversation all the way to a doctor alert — three independent AI systems, one coherent product

The hard part

Getting three completely independent AI systems to communicate meaningfully was the core challenge. A custom PyTorch voice model, a Docker-containerized MRI segmentation model on EC2, and a multi-agent LLM pipeline are not naturally designed to talk to each other.

The vector database was the piece that made it work. Every session generates embeddings stored in Supabase with pgvector. When the agent gets queried, semantic search surfaces the relevant data first before the heavier models run, keeping the whole thing fast and cost-effective.
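What the pgvector lookup does can be written out in plain Python: pgvector's `<=>` operator is cosine distance, and the query is effectively a top-k sort of stored embeddings by that distance. The rows and vectors below are toy stand-ins for real session embeddings.

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """pgvector's `<=>` operator: 1 - cosine similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def top_k(query: list[float],
          rows: list[tuple[str, list[float]]], k: int = 3):
    """Roughly what `SELECT content FROM sessions ORDER BY embedding <=> :q
    LIMIT :k` computes inside Postgres, spelled out in Python."""
    return sorted(rows, key=lambda r: cosine_distance(query, r[1]))[:k]

# Toy 2-D embeddings; real ones come from an embedding model.
rows = [
    ("session 3: word-finding pauses", [0.9, 0.1]),
    ("session 1: normal fluency", [0.1, 0.9]),
]
nearest = top_k([1.0, 0.0], rows, k=1)
```

Only the rows that survive this cheap retrieval step are handed to Claude, which is what keeps the agent fast and cost-effective.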

What's next

Real-time caregiver portal: a separate view for family members to log observations and receive non-clinical updates

Proactive alerting: push notifications to doctors the moment a patient's risk score crosses a threshold, not just when they log in

HIPAA compliance: integrating with compliant infrastructure to move toward real-world pilot testing with clinicians

📚 Research Foundation

Our voice analysis approach is grounded in published clinical research on acoustic biomarkers for dementia detection. AssemblyNet's normative bounds are derived from 3,000+ cognitively healthy subjects across validated neuroimaging datasets. https://doi.org/10.1186/s13195-021-00888-3
