Inspiration
Our inspiration came from a frustrating but common story: patients who leave the hospital too early only to return a few days later, sicker, more discouraged, and at greater financial risk.
One of our teammates experienced this firsthand. They were discharged after an illness with the assumption that they could recover at home. Instead, they had to be readmitted the very next day. This made us realize that discharge decisions are often based on limited information, and patients rarely get the structured follow-up they truly need.
We dug into the problem and discovered:
• 1 in 5 patients is readmitted within 30 days of discharge.
• Preventable readmissions cost the system over $25 billion annually.
• Current predictive tools are often black boxes that clinicians don’t trust, and post-discharge monitoring is usually fragmented or nonexistent.
We wanted to build something different: a platform that provides transparent, actionable insights at discharge and seamless support afterward. That vision became JeeVan.
What We Learned
• Healthcare problems are multi-layered: predicting readmissions is not enough — clinicians need trust in the model, and patients need support after discharge.
• Explainability matters as much as accuracy: using SHAP + OpenAI taught us how to transform cryptic ML drivers into human-friendly explanations.
• Nudges work: we learned about behavioral science and how small, actionable reminders can significantly change patient adherence.
• Full-stack integration is hard but rewarding: connecting a FastAPI ML backend with a React frontend through ngrok, with all the CORS battles, showed us how important infrastructure is to healthcare tech.
• Hackathon teamwork: balancing data science, backend, and frontend contributions in a short timeline taught us how to scope features while still delivering a polished product.
How We Built It
1. Dataset and Model Training
• We trained our in-house ML model on the NAH dataset of more than 30,000 real-world readmission cases and vitals.
• We combined CatBoost for high predictive power with Logistic Regression for interpretability.
• We evaluated performance with ROC-AUC, optimizing for both accuracy and reliability.
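The training step above can be sketched as follows. This is a minimal illustration, not our production pipeline: scikit-learn's `GradientBoostingClassifier` stands in for CatBoost, and a synthetic dataset stands in for the NAH readmission data.

```python
# Sketch of the hybrid training setup: one high-capacity boosted model
# plus one interpretable linear model, both scored with ROC-AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the cleaned NAH feature matrix.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Boosted trees for predictive power (CatBoost in the real system).
booster = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
# Logistic regression kept alongside for interpretable coefficients.
linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for name, model in [("boosted", booster), ("logistic", linear)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name} ROC-AUC: {auc:.3f}")
```

Keeping both models lets us report the boosted model's risk score while sanity-checking it against coefficients a clinician can read directly.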
2. Explainability Pipeline
• We used SHAP (SHapley Additive exPlanations) to break down risk scores into their key drivers.
• We then used OpenAI to translate those SHAP outputs into clinically meaningful explanations, bridging the gap between raw numbers and medical intuition.
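The SHAP-to-language step works roughly like this sketch. The feature names, phrasings, and the `summarize_drivers` helper are illustrative placeholders, and the actual OpenAI API call is omitted; the point is how per-feature contributions become a prompt a language model can expand for clinicians.

```python
# Turn (feature, SHAP value) pairs into a plain-language prompt,
# ranked by magnitude so the strongest drivers come first.
def summarize_drivers(shap_values: dict[str, float], top_k: int = 3) -> str:
    """Build a prompt describing the largest-magnitude risk drivers."""
    ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = []
    for feature, value in ranked[:top_k]:
        direction = "raises" if value > 0 else "lowers"
        lines.append(f"- {feature} {direction} readmission risk ({value:+.2f})")
    return "Explain these risk drivers to a clinician:\n" + "\n".join(lines)

# Hypothetical SHAP output for one patient.
prompt = summarize_drivers(
    {"hba1c": 0.31, "num_prior_admissions": 0.22, "age": -0.05}
)
print(prompt)
```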
3. Risk Dashboard
• Providers see live patient risk scores, visual explanations of the drivers, and OpenAI-generated natural language summaries.
• This helps doctors understand why a patient might be at risk, not just that they are.
4. Nudges Integration
• For each patient, we surface nudges — simple, evidence-based interventions that can reduce readmission risk (for example, reminders to schedule an endocrinology follow-up for a diabetes patient).
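A minimal sketch of the nudge lookup, assuming a simple rule table keyed on condition names. The condition keys, nudge texts, and the `nudges_for` helper are illustrative, not our real clinical content.

```python
# Rule table mapping conditions to evidence-based post-discharge nudges.
NUDGE_RULES = {
    "diabetes": [
        "Schedule an endocrinology follow-up within 7 days",
        "Log fasting glucose each morning",
    ],
    "heart_failure": [
        "Weigh yourself daily and report gains over 2 kg",
        "Confirm your diuretic refill before discharge",
    ],
}

def nudges_for(conditions: list[str]) -> list[str]:
    """Collect the reminders that apply to a patient's recorded conditions."""
    return [n for c in conditions for n in NUDGE_RULES.get(c, [])]

print(nudges_for(["diabetes"]))
```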
5. Discharge Flow
• When a provider clicks “Discharge,” the system transitions from prediction mode to monitoring mode.
6. Post-Discharge Monitoring Dashboard
• Patients and families see a personalized dashboard that includes:
  - Medication reminders
  - Appointment scheduling
  - Lifestyle recommendations
  - Clear explanations of their risk and what they can do about it
Challenges We Faced
• Data preprocessing: Cleaning, normalizing, and encoding categorical medical variables from the NAH dataset took significant effort.
• Model tuning: Balancing the tradeoff between interpretability and accuracy when combining CatBoost and Logistic Regression.
• Integration issues: ngrok tunnels + frontend and backend on different laptops led to recurring CORS problems. We had to configure FastAPI middleware carefully to avoid failures.
• Time pressure: Designing not just a predictive model but also an explainability pipeline, nudges system, and monitoring dashboard — all in under 48 hours — pushed us to the edge.
• Trust problem: Making sure the app didn’t feel like a “black box.” This required thoughtful integration of SHAP visuals and OpenAI explanations.
• UI/UX alignment: Ensuring providers saw actionable insights without overwhelming them, while patients saw supportive nudges without medical jargon.
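The CORS fix mentioned above came down to configuring FastAPI's built-in middleware. This is one way to do it; the origin URLs here are placeholders for our demo's ngrok and localhost addresses.

```python
# Allow the React frontend (served from a different origin) to call
# the FastAPI backend by whitelisting its origins explicitly.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=[
        "https://example.ngrok-free.app",  # placeholder ngrok tunnel URL
        "http://localhost:3000",           # local React dev server
    ],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```

Listing origins explicitly (rather than `"*"`) matters once `allow_credentials=True` is set, since browsers reject wildcard origins for credentialed requests.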
Accomplishments We’re Proud Of
• Built a hybrid ML model that achieves strong ROC-AUC on 30K+ NAH readmission cases.
• Developed a clear explainability pipeline (SHAP + OpenAI) that turns opaque ML outputs into intuitive explanations.
• Implemented nudges that bring behavioral science into the clinical workflow.
• Designed a dual-mode platform: predictive risk scoring for providers, and monitoring + reminders for patients.
• Created a demo-ready, full-stack application integrating data science, backend, and frontend in under 48 hours.
What’s Next
• Scale the backend: Migrate from SQLite to PostgreSQL, and deploy via AWS/GCP.
• Integrate with EHR systems: Sync directly with hospital workflows for seamless adoption.
• Expand monitoring: Add SMS/WhatsApp reminders and wearable integration for patient engagement.
• Model improvements: Incorporate real-time vitals and lab results into the prediction model.
• Clinical validation: Partner with hospitals to run pilot studies, ensuring JeeVan works in real-world settings.
Closing
Our motto is simple: “The right information, to the right provider, at the right time.”
By combining risk prediction, explainability, and continuous monitoring, JeeVan reduces preventable readmissions in a way that benefits everyone:
• Hospitals save millions
• Insurers save billions
• Patients save their health, time, and peace of mind
That’s why we’re submitting this under Curator’s Cause, Emerging, and the Impiricus challenge.
Built With
- Languages: Python, JavaScript
- Backend: FastAPI
- Frontend: React.js
- ML frameworks: CatBoost, scikit-learn
- Explainability: SHAP, OpenAI API
- Database: SQLite (demo) / PostgreSQL (scalable)
- Tools: ngrok, GitHub
- Hosting: Render/Railway