Inspiration
In Roman mythology, Salus was the goddess not just of healing, but of public health and safety. We chose this name because we believe that medicine needs to be accessible and vigilant, provided not just for those who can afford it, but for collective welfare.
This mission is personal. We have seen loved ones - friends with chronic conditions and family members with preventable diseases - navigate a healthcare system that is often reactive rather than proactive. That urgency hit home when one of our own team members spent hours in the Stanford ER for a standard ECG and X-ray, a visit that produced no actionable insights but did produce a $4,000 bill: a stark sign of the prohibitive cost of a system that only reacts once a patient is in crisis. A patient might appear to be “fine” during a five-minute clinic visit, but their biosignals, such as the rhythm of their heart or the oxygen in their blood, might be signaling a trajectory of deterioration that no human eye could possibly catch.
Currently, a patient’s medical story is scattered across different platforms: an ECG result, a chest X-ray, and pages of scribbled notes. These modalities are not unified in real time. Our vision is a single, fully integrated system in which these signals come together. Salus is an always-on guardian built around a three-tier monitoring system, using simple wearable data to trigger clinical-grade reasoning when a patient needs it most. Salus doesn’t just flag unusual data; it compares live biometrics to the patient’s unique history and predicts potential issues based on other, similar patient cases. Most importantly, Salus is designed to explain its thinking: it translates complex data into a clear differential diagnosis, allowing patients to understand their alerts without a clinic visit.
What it does
Salus, our 24/7 medical assistant, is designed to give patients deeper insight into their own health and to deliver advanced, low-cost medical reasoning regardless of socioeconomic background. At its core, our system provides end-to-end anomaly detection, diagnosis across >65,000 diseases, and analysis of personal medical data.
Our solution offers the following features:
- Multimodal Input: We process a wide range of clinical data including, but not limited to, CT scans, MRI images, ultrasounds, chest X-rays, SOAP notes, and ECG, PPG, and SpO2 data.
- Computer Vision/NLP Pipeline: We either train or leverage open-source models to process each individual modality, creating robust embedding representations.
- Medical Reasoning Engine: Patient-specific insights on healthcare data (can answer a wide variety of questions about image- and text-based data).
- Orchestrator: An agent with access to several tools that converses with the medical reasoning engine, ultimately diagnosing conditions or providing insights to the patient.
- “Always On” Classifier: A decision tree runs on features extracted from 10-second ECG/PPG “bursts” sampled at 30-second intervals throughout the day.
- Phone Calling: Upon anomaly detection, Salus automatically calls the patient to discuss their condition. Patients can chat with it continuously, as they would with a clinician.
How we built it
In building Salus, we focused on integrating an end-to-end clinical intelligence loop. Our solution is built on the following technological framework:
- Multimodal Data Ingestion
- Accepts clinical images and documents, including CT, MRI, ultrasound, chest X-ray, SOAP notes, ECG/PPG, and SpO2 streams.
- Uploaded records are normalized and stored in a patient-centric data layer in FHIR format.
- Text and image artifacts are indexed into embeddings for downstream retrieval and context building.
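As an illustration of the ingestion step above, here is a minimal sketch of wrapping one raw vitals reading as a FHIR R4 Observation before it enters the patient-centric data layer. The function name, field choices, and patient ID are illustrative, not our exact schema; 59408-5 is the LOINC code for pulse-oximetry oxygen saturation.

```python
from datetime import datetime, timezone

def to_fhir_observation(patient_id: str, loinc_code: str, display: str,
                        value: float, unit: str) -> dict:
    # Wrap one raw vitals reading as a minimal FHIR R4 Observation
    # before storing it in the patient-centric data layer.
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": loinc_code, "display": display}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": datetime.now(timezone.utc).isoformat(),
        "valueQuantity": {"value": value, "unit": unit},
    }

# Example: a single SpO2 reading for a hypothetical patient.
obs = to_fhir_observation("pt-001", "59408-5", "Oxygen saturation", 97.0, "%")
```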
- Computer Vision + NLP Representation Layer
- Uses open-source medical encoders to convert image/text records into robust embedding representations.
- Handles long-form clinical text via chunking + aggregation to preserve signal from full records.
- Supports similarity search over a patient’s historical context for fast retrieval of relevant evidence.
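The chunk-then-aggregate retrieval described above can be sketched as follows. The hash-seeded `embed` function is only a deterministic stand-in for a real open-source medical encoder, and the chunk size, embedding dimension, and function names are illustrative.

```python
import numpy as np

def chunk(text: str, size: int = 512) -> list[str]:
    # Split long-form clinical text into fixed-size chunks.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(piece: str, dim: int = 64) -> np.ndarray:
    # Stand-in encoder: a hash-seeded unit vector (deterministic within
    # one process) in place of a real medical text encoder.
    rng = np.random.default_rng(abs(hash(piece)) % (2 ** 32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def embed_record(text: str) -> np.ndarray:
    # Mean-pool chunk embeddings so signal from the full record survives.
    pooled = np.stack([embed(c) for c in chunk(text)]).mean(axis=0)
    return pooled / np.linalg.norm(pooled)

def top_k(query: np.ndarray, corpus: dict[str, np.ndarray], k: int = 3) -> list[str]:
    # All vectors are unit-norm, so cosine similarity is a dot product.
    scores = {rid: float(query @ vec) for rid, vec in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```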
- Medical Reasoning Engine
- A multimodal clinical model (MedGemma) performs patient-specific reasoning over retrieved records (image + text data) and vitals context.
- The system uses raw clinical content for interpretation while embeddings drive retrieval and ranking.
- Outputs include explanatory insights, differential-style reasoning, and next-step suggestions.
- Agent Orchestrator
- A tool-using agent manages when to query patient history, retrieve past processed patients with similar demographics, retrieve external evidence (for example, similar-case literature), and proactively “ask” the medical reasoning engine for context.
- This structure enables broad disease reasoning (targeting >65,000 conditions) while remaining grounded in the patient’s own data.
- Patients can always chat with this agent to learn more about their data, ask about potential conditions, etc.
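A minimal sketch of the tool-using pattern behind the orchestrator. The tool names, return values, and dispatch format here are hypothetical stand-ins, not our exact implementation.

```python
TOOLS = {}

def tool(fn):
    # Register a function so the agent can invoke it by name.
    TOOLS[fn.__name__] = fn
    return fn

@tool
def query_patient_history(patient_id: str) -> list[str]:
    # Hypothetical tool; the real version would query the FHIR data layer.
    return [f"record for {patient_id}"]

@tool
def find_similar_cases(age: int, symptom: str) -> list[str]:
    # Hypothetical tool for demographic / similar-case retrieval.
    return [f"case: {symptom}, age ~{age}"]

def dispatch(tool_call: dict):
    # Execute a model-emitted call of the form {"name": ..., "args": {...}}.
    return TOOLS[tool_call["name"]](**tool_call["args"])
```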
- Always-On Anomaly Detection
- A lightweight decision tree classifier monitors extracted ECG/PPG/SpO2 features in recurring windows (10-second bursts sampled within 30-second intervals).
- When a persistent anomaly pattern is detected, Salus escalates automatically.
- For text chat, anomaly context is prefilled for user approval before sending; for voice, escalation can continue immediately.
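The always-on loop above can be sketched in three steps: feature extraction over a 10-second burst, a hand-written threshold rule standing in for the trained decision tree, and the persistence check that gates escalation. All thresholds and feature names here are illustrative, not clinical values.

```python
import statistics

def extract_features(burst: list[float]) -> dict:
    # Summary features over one 10-second burst of ECG/PPG samples.
    diffs = [b - a for a, b in zip(burst, burst[1:])]
    return {
        "mean": statistics.fmean(burst),
        "stdev": statistics.pstdev(burst),
        "max_jump": max(abs(d) for d in diffs),
    }

def is_anomalous(f: dict) -> bool:
    # Threshold rules standing in for the trained decision tree;
    # the cut-offs are illustrative only.
    return f["stdev"] > 0.5 or f["max_jump"] > 1.0

def should_escalate(window_flags: list[bool], needed: int = 3) -> bool:
    # Escalate only on a persistent pattern: N consecutive anomalous windows.
    run = 0
    for flag in window_flags:
        run = run + 1 if flag else 0
        if run >= needed:
            return True
    return False
```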
- Proactive Voice Outreach
- On anomaly trigger, Salus can place an automated phone call via Twilio and begin a real-time clinical conversation.
- Voice interactions are tool-aware and patient-context aware, enabling follow-up questions like a live triage assistant.
- This creates a 24/7 safety net for users who need immediate guidance without waiting for a clinical workflow (especially helpful for patients in remote regions!).
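For context, an outbound Twilio call is placed with `twilio.rest.Client(sid, token).calls.create(...)`, which fetches TwiML from a webhook once the patient answers. Below is a minimal sketch of the call parameters and the kind of TwiML such a webhook might return; the function names, greeting text, and placeholder numbers are assumptions, not our production code.

```python
def anomaly_call_params(to_number: str, from_number: str, webhook_url: str) -> dict:
    # Keyword arguments for twilio.rest.Client(sid, token).calls.create(...);
    # `url` tells Twilio where to fetch TwiML for the live conversation.
    return {"to": to_number, "from_": from_number, "url": webhook_url}

def anomaly_twiml(patient_name: str, summary: str) -> str:
    # Static TwiML a webhook could return to open the conversation
    # (real responses would be generated from patient context).
    return (
        "<Response>"
        f"<Say>Hello {patient_name}, this is Salus. {summary} "
        "Would you like to talk through your readings?</Say>"
        "</Response>"
    )

params = anomaly_call_params("+15550100", "+15550199",
                             "https://example.com/voice-webhook")
```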
Overall, our approach is low cost and high coverage. With a continuous care loop (detect -> reason -> explain -> escalate), patients don’t have to pay thousands of dollars to receive high-quality personalized medicine anywhere, anytime.
Challenges we ran into
Our journey building Salus came with major technical and product challenges. One of the toughest was data fragmentation across modalities. Biosignals, radiology images, and clinical text all arrive in different formats and standards, so we had to build dedicated preprocessing pipelines and modality-specific model paths before we could normalize everything into a unified patient context. Getting those components to interoperate reliably in real time took multiple rounds of schema design, edge-case handling, and debugging across both frontend and backend systems.
Another key challenge was getting wearable signal data into our web workflow. Apple does not natively support direct streaming from Apple Watch data into a browser app, so we implemented a custom API layer to bridge watch-originated signal payloads into Salus. That required careful work on ingestion, timing, and reliability so the downstream anomaly detector could operate on consistent windows instead of sporadic updates.
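A simplified sketch of that bridging logic: parsing a watch-originated JSON payload and bucketing sporadic samples into consistent 30-second windows for the downstream detector. The payload schema and window size here are illustrative assumptions, not Apple's or our exact format.

```python
import json

def parse_payload(raw: str) -> list[tuple[float, float]]:
    # Assumed watch-originated payload shape:
    # {"samples": [{"t": epoch_seconds, "v": value}, ...]}
    data = json.loads(raw)
    return [(s["t"], s["v"]) for s in data["samples"]]

def bucket_windows(samples: list[tuple[float, float]],
                   window_s: float = 30.0) -> dict[int, list[float]]:
    # Group sporadic samples into fixed windows so the downstream
    # anomaly detector always sees consistent intervals.
    windows: dict[int, list[float]] = {}
    for t, v in samples:
        windows.setdefault(int(t // window_s), []).append(v)
    return windows
```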
We also had to design for incomplete or missing modalities, since real-world patient records are rarely complete. Many patients have partial notes, missing images, or sparse vitals. To address this, we built the agent and reasoning stack to degrade gracefully—using whatever evidence is available, surfacing uncertainty explicitly, and still producing clinically useful guidance rather than failing when data is sparse. This became a core design principle for making Salus practical outside idealized datasets.
Accomplishments that we're proud of
Looking back at Salus, we are proud most of all of building something with real clinical and social impact. The idea that someone can receive continuous monitoring, anomaly detection, medical reasoning support, and proactive outreach regardless of socioeconomic background is exactly why we built this. Creating a system that can help patients better understand their own health in moments that matter gives our work deep purpose.
We are also proud of the technical scope we were able to deliver in a short timeframe. Integrating multimodal clinical ingestion, embedding/retrieval pipelines, agentic tool orchestration, real-time vitals simulation and anomaly escalation, and automated Twilio voice follow-up into one coherent product was a major systems challenge. Moving from an ambitious concept to a functioning end-to-end prototype that demonstrates practical, low-cost, AI-assisted care is a reflection of our team’s execution, resilience, and belief in the mission.
What we learned
Building Salus was a crash course in interdisciplinary execution. We learned that meaningful healthcare AI requires much more than model performance in isolation: it demands tight coordination across multimodal ML, backend reliability, product UX, and clinical framing. Bringing imaging, notes, and biosignals into one coherent reasoning loop showed us how critical systems integration is when decisions are time-sensitive.
We also learned the value of rapid iteration under real constraints. Many of our biggest improvements came from debugging live integration points—model serving, retrieval quality, agent tool behavior, webhook reliability, and escalation timing—then quickly closing the loop. That process reinforced how essential pragmatic engineering decisions are for turning an ambitious concept into a usable end-to-end prototype.
Most importantly, we learned that patient-centered design must drive every technical choice. Features like explainable responses, graceful handling of missing records, consent-aware text escalation, and proactive voice outreach only mattered because they were built around real user needs. Keeping accessibility and equity at the core changed how we thought about success: not just building advanced AI, but making advanced care guidance available to anyone who needs it.
What's next for Salus
Looking ahead, our focus is on evolving Salus from a functional prototype into a clinically deployable health intelligence platform. We want to build toward closed-loop care workflows, in which Salus can not only identify risk, but also recommend and take action on next steps (like follow-up imaging, lab tests, or case escalation). Specifically, the most immediate improvements we can imagine are:
- Integrate FHIR data natively into an EMR like Epic, Cerner, etc.
- Reduce the size of our trained models via distillation and ONNX export
- Voice cloning and latency reduction in the phone call
- Live benchmarking on patients and a larger subset of test-time data
In the long term, we envision seeking FDA approval to enable distribution to consumer devices worldwide and launching clinical trials to rigorously validate the efficacy of our method.