## Inspiration

Medical error has been estimated to be the third leading cause of death in the U.S., a problem that often stems from fragmented patient data: clinicians struggle to see the full picture across disparate notes, labs, and imaging. We wanted to build a tool that doesn't just aggregate medical records but reasons about them, surfacing the hidden causal chains between conditions that drive patient outcomes.

## What it does

MedGraph ingests any patient medical document — clinical notes, lab reports, imaging results, vitals — and uses AI to extract structured medical findings, then automatically constructs a causal reasoning graph that maps how conditions relate to and influence each other. The system renders an interactive human body model where affected organs glow by severity, alongside a dynamic causal graph showing pathways like diabetes → nephropathy → anemia, giving clinicians an instant, intuitive understanding of a patient's full clinical picture.
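The extracted findings and causal relationships can be pictured as a small graph structure. A minimal sketch follows (the class and field names here are hypothetical, for illustration only; the actual MedGraph schema may differ):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A structured medical finding extracted from a document."""
    condition: str    # canonical condition name, e.g. "diabetes"
    body_system: str  # e.g. "endocrine", "renal"
    severity: int     # 1 (mild) .. 5 (critical)

@dataclass
class CausalEdge:
    """A directed causal link between two conditions."""
    cause: str
    effect: str
    confidence: float  # model-reported confidence in [0, 1]
    mechanism: str     # short explanation of the pathway

def build_adjacency(edges):
    """Group edges by cause so pathways like
    diabetes -> nephropathy -> anemia can be walked."""
    graph = {}
    for e in edges:
        graph.setdefault(e.cause, []).append(e)
    return graph

edges = [
    CausalEdge("diabetes", "nephropathy", 0.92,
               "chronic hyperglycemia damages glomeruli"),
    CausalEdge("nephropathy", "anemia", 0.81,
               "reduced renal erythropoietin production"),
]
graph = build_adjacency(edges)
```

Walking `graph` from any condition yields the downstream pathway rendered in the causal view.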

## How we built it

We built a FastAPI backend with async PostgreSQL for structured storage of patients, documents, findings, and causal relationships. Google Gemini 2.0 Flash powers a two-stage AI pipeline: first extracting structured medical findings from raw documents using constrained JSON output, then performing causal reasoning across all findings to identify relationships with confidence scores and medical mechanisms. The frontend is a Next.js app with a custom SVG anatomical body model with severity-based glow effects and a React Flow causal graph with dagre auto-layout, all wired together with cross-component selection syncing.
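The extraction stage's constrained JSON output might be set up roughly like this (the schema, prompt, and validation helper below are illustrative assumptions, not the project's actual code; Gemini's `response_mime_type="application/json"` setting is what constrains the model to parseable JSON):

```python
import json

# Hypothetical JSON schema for extracted findings; the real
# MedGraph schema may differ.
FINDING_SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "condition": {"type": "string"},
            "body_system": {"type": "string"},
            "severity": {"type": "integer"},
        },
        "required": ["condition", "body_system", "severity"],
    },
}

def parse_findings(raw: str) -> list[dict]:
    """Parse and minimally validate the model's JSON output."""
    findings = json.loads(raw)
    for f in findings:
        missing = {"condition", "body_system", "severity"} - f.keys()
        if missing:
            raise ValueError(f"finding missing fields: {missing}")
    return findings

# The call itself would look roughly like (requires the
# google-generativeai package and an API key):
# model = genai.GenerativeModel("gemini-2.0-flash")
# resp = model.generate_content(
#     prompt,
#     generation_config=genai.GenerationConfig(
#         response_mime_type="application/json",
#         response_schema=FINDING_SCHEMA,
#     ),
# )
# findings = parse_findings(resp.text)
```

Validating immediately after parsing keeps downstream graph construction from ever seeing a malformed finding.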

## Challenges we ran into

Getting Gemini to reliably output structured medical data with consistent terminology across different document types required significant prompt engineering — especially ensuring causal relationship identification used exact condition name matching. Building the interactive body visualization that maps cleanly between medical body systems and anatomical regions while maintaining a polished dark-theme aesthetic was another major challenge, as was syncing selection state across three independent visualization components in real time.
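The exact-name-matching problem mentioned above usually comes down to canonicalization. A minimal sketch of the idea (the synonym table and helper are hypothetical; a production system would lean on a medical ontology such as SNOMED CT rather than a hand-rolled map):

```python
import re

# Hypothetical synonym table, for illustration only.
SYNONYMS = {
    "diabetes mellitus": "diabetes",
    "type 2 diabetes": "diabetes",
    "diabetic nephropathy": "nephropathy",
}

def canonical(name: str) -> str:
    """Normalize a condition name so causal edges from the
    reasoning stage match findings from the extraction stage."""
    key = re.sub(r"\s+", " ", name.strip().lower())
    return SYNONYMS.get(key, key)
```

Running both pipeline stages' outputs through the same normalizer means an edge pointing at "Type 2 Diabetes" still lands on the "diabetes" finding node.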

## Accomplishments that we're proud of

The causal graph generation genuinely surfaces medically accurate relationships with explanations of the underlying mechanisms: it doesn't just link conditions, it explains why diabetes causes nephropathy at the molecular level. The full pipeline from raw document upload to an interactive, explorable causal model runs in seconds, and the cross-linked body model and graph create an experience where clicking a glowing kidney instantly highlights every condition in the causal chain affecting renal function.
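Under the hood, the kidney-click interaction reduces to an upstream traversal of the causal graph. A minimal sketch (the helper below is a hypothetical illustration, not MedGraph's actual code):

```python
from collections import deque

def causal_chain(edges, target):
    """Collect every upstream cause of `target` by BFS over
    reversed (cause -> effect) links."""
    parents = {}
    for cause, effect in edges:
        parents.setdefault(effect, set()).add(cause)
    seen, queue = set(), deque([target])
    while queue:
        node = queue.popleft()
        for cause in parents.get(node, ()):
            if cause not in seen:
                seen.add(cause)
                queue.append(cause)
    return seen

edges = [("diabetes", "nephropathy"), ("nephropathy", "anemia"),
         ("hypertension", "nephropathy")]
```

Selecting a renal finding would call something like `causal_chain(edges, "nephropathy")` and highlight every returned node in both the body model and the graph.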

## What we learned

We gained deep insight into medical ontology design — how to structure body systems, severity scales, and causal relationship types in a way that's both computationally useful and clinically meaningful. We also learned how powerful constrained JSON output from LLMs can be for building reliable data extraction pipelines, and how much impact good visualization design has on making complex medical data immediately comprehensible.
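The ontology pieces described above can be sketched as small enumerations (these vocabularies are illustrative slices invented for this example; the actual MedGraph ontology may differ):

```python
from enum import Enum, IntEnum

class BodySystem(Enum):
    CARDIOVASCULAR = "cardiovascular"
    RENAL = "renal"
    ENDOCRINE = "endocrine"
    HEMATOLOGIC = "hematologic"

class Severity(IntEnum):
    MILD = 1
    MODERATE = 2
    SEVERE = 3
    CRITICAL = 4

class RelationType(Enum):
    CAUSES = "causes"
    WORSENS = "worsens"
    INCREASES_RISK_OF = "increases_risk_of"

def glow_opacity(sev: Severity) -> float:
    """Map a severity grade to an organ glow intensity in [0, 1],
    so the visualization layer never hard-codes magic numbers."""
    return sev / max(Severity)
```

Keeping severity numeric (an `IntEnum`) is what makes it both computationally useful (sortable, mappable to opacity) and clinically meaningful (an ordered grading scale).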

## What's next for MedGraph

We're building a self-correcting reasoning engine — when the model predicts a clinical trajectory (e.g., worsening kidney function) and follow-up labs contradict it, MedGraph will update its internal causal weights, learning patient-specific pathophysiology over time. We also plan to add multi-modal document ingestion for medical imaging, temporal trend analysis across visits, and a clinical decision support layer that proactively flags high-risk causal pathways before they manifest.
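One simple form the planned weight update could take is an exponential-moving-average nudge toward agreement with follow-up labs (an illustrative rule only; the actual engine's update may differ):

```python
def update_weight(weight: float, predicted_worse: bool,
                  observed_worse: bool, lr: float = 0.2) -> float:
    """Nudge a causal edge weight toward agreement with follow-up
    labs: reinforce confirmed predictions, decay contradicted ones."""
    target = 1.0 if predicted_worse == observed_worse else 0.0
    return weight + lr * (target - weight)
```

Applied per patient, repeated contradictions steadily decay an edge's weight, which is one way a model could learn patient-specific pathophysiology over time.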
