Inspiration
Dementia often goes undetected until significant cognitive decline has occurred. Early screening through everyday interactions — speech, behavior, and emotional cues — could help identify at-risk individuals sooner, when intervention is most effective. We built Lifeline to make passive, non-invasive cognitive monitoring accessible.
What it does
Lifeline is a multimodal health screening platform that analyzes speech, video, and emotional signals to assess dementia risk. It processes audio through Modulate AI for transcription and sentiment analysis, uses Reka for visual and behavioral observation from video, and feeds the combined data into a GLiNER-2 classifier, fine-tuned on Pioneer AI, that detects cognitive markers like word-finding pauses, reduced vocabulary, repetitive patterns, and simplified syntax.
How we built it
We built the backend in Python with an async architecture using httpx for non-blocking API calls. The pipeline chains three AI services: Modulate AI handles speech-to-text and emotion detection, Reka analyzes video for behavioral cues (posture, activity, environment, concerns), and Pioneer AI's custom fine-tuned model (dementia-risk-clean-v2) performs the final risk classification. We used structlog for structured logging and designed the system with graceful fallbacks — mock responses for development and null returns on API failures.
Challenges we ran into
Integrating three separate AI APIs with different response formats and latency profiles required careful orchestration. Fine-tuning the Pioneer GLiNER-2 model to reliably distinguish dementia-risk speech from normal speech proved difficult: a binary classifier loses nuance without confidence scores. Constructing input text that gives the model enough signal without overwhelming it with irrelevant context was an iterative process.
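The input-construction problem boiled down to deciding what to include and how much. A minimal sketch of the idea, with assumed field names and an illustrative length cap (not our exact prompt format):

```python
# Hypothetical helper for composing the classifier's input text.
# Field labels and the length cap are illustrative assumptions.
def build_classifier_input(transcript, emotions=None, behavior=None, max_chars=2000):
    """Combine multimodal signals into one text, keeping the transcript dominant."""
    parts = [f"Transcript: {transcript}"]  # speech carries most of the diagnostic signal
    if emotions:
        parts.append(f"Detected emotions: {', '.join(emotions)}")
    if behavior:
        parts.append(f"Behavioral observations: {behavior}")
    text = "\n".join(parts)
    # Truncate so contextual metadata never drowns out the speech signal.
    return text[:max_chars]
```

Iterating on which context fields to include, and in what order, was where most of the tuning time went.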
Accomplishments that we're proud of
We successfully fine-tuned and deployed a custom dementia risk classifier on Pioneer AI's platform. The multimodal approach — combining linguistic analysis with behavioral and emotional context — gives a more complete picture than any single signal alone. The system is designed to degrade gracefully: if any service is unavailable, the pipeline continues with whatever data it has.
What we learned
Speech transcripts carry far more diagnostic signal for cognitive screening than we initially expected. Binary classification is a reasonable starting point but insufficient for clinical nuance — confidence scores and multi-tier risk levels matter. Building reliable systems on top of multiple third-party AI APIs requires defensive programming at every integration point.
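The multi-tier idea could be as simple as thresholding a classifier confidence score. The cutoffs below are illustrative only, not clinically validated values:

```python
# Sketch of mapping a classifier confidence score to a risk tier.
# Thresholds are placeholder assumptions, not clinical guidance.
def risk_tier(confidence: float) -> str:
    if confidence >= 0.8:
        return "high"
    if confidence >= 0.5:
        return "moderate"
    if confidence >= 0.2:
        return "low"
    return "minimal"
```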
What's next for Lifeline
Adding longitudinal tracking to monitor cognitive trends over time rather than single-session snapshots. Incorporating confidence scores from the classifier for more granular risk levels. Expanding the model to detect other cognitive conditions beyond dementia. Working toward clinical validation with healthcare professionals and exploring integration with telehealth platforms for seamless referrals.
Built With
- fastino
- modulate
- neo4j
- reka