Inspiration

During their senior year of high school, one of our team members underwent cancer surgery and was cleared for discharge soon after. Although they were clinically stable, they asked to stay one additional night.

That night, a serious complication developed, which required immediate intervention and extended inpatient care. In that moment, it became clear how much can hinge on decisions made with limited foresight.

The experience highlighted a broader issue: hospital decisions about discharge, bed allocation, and staffing are often made in the moment, without strong tools to anticipate short-term risk or coordinate resources dynamically.

Across the U.S., inefficiencies in hospital flow are estimated to cost over $250 billion annually, while nurses and care teams spend significant time navigating fragmented systems rather than focusing on patients.

We built Atria to explore how hospitals can shift from reactive operations to proactive coordination that better supports both patients and care teams.


What it does

AI Command Center for Operators

  • Atria features a high-speed multi-agent system orchestrated by GPT-5-mini, which coordinates specialized Risk, Resource, and Nurse agents to automate complex allocations.
  • To ensure human-in-the-loop oversight, operators can use a GPT-5-powered reasoning chatbot to query the logic behind specific staffing decisions, all while monitoring a real-time observability stream of live task execution.
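The coordination pattern above can be sketched as a pipeline that chains the three specialized agents. This is a minimal illustration, not Atria's actual code: the GPT-5-mini calls inside each agent are stubbed out with simple rules, and all field names (`acuity`, bed and nurse IDs) are hypothetical.

```python
def risk_agent(patient: dict) -> dict:
    # In the real system this step calls GPT-5-mini with risk-prediction tools.
    risk = "high" if patient.get("acuity", 0) >= 3 else "low"
    return {"admission_risk": risk}

def resource_agent(patient: dict, risk: dict) -> dict:
    # Would call GPT-5-mini with bed-allocation tools.
    bed = "ICU-1" if risk["admission_risk"] == "high" else "WARD-1"
    return {"bed": bed}

def nurse_agent(patient: dict, allocation: dict) -> dict:
    # Would call GPT-5-mini with scheduling tools.
    return {"assigned_nurse": "RN-7", "bed": allocation["bed"]}

def orchestrate(patient: dict) -> dict:
    """Chain the specialized agents: risk -> resource -> nurse."""
    risk = risk_agent(patient)
    allocation = resource_agent(patient, risk)
    schedule = nurse_agent(patient, allocation)
    return {**risk, **allocation, **schedule}
```

The chained structure is what lets the operator chatbot later explain a staffing decision: each agent's output is an explicit, inspectable intermediate result.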

Smart Handoffs & Context for Nurses

  • We bridge the communication gap between shifts using OpenAI Whisper to transcribe patient-nurse conversations into actionable clinical summaries.
  • These insights, along with automated schedules and to-do lists, are delivered directly to frontline staff via automated email briefings, ensuring every nurse has complete clinical context and visit reminders before entering a room.
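A minimal sketch of this pipeline: audio goes to OpenAI's Whisper transcription endpoint, and the transcript is then reduced to an actionable briefing. The summarization step here is a naive keyword filter standing in for the GPT-based step, and the keyword list is purely illustrative.

```python
def transcribe(audio_path: str) -> str:
    """Send recorded audio to OpenAI's Whisper transcription endpoint."""
    from openai import OpenAI  # imported here so the helper below has no SDK dependency
    client = OpenAI()
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

# Illustrative keywords; the real system uses an LLM to extract clinical facts.
CLINICAL_FLAGS = ("pain", "medication", "allergy", "fall", "dizzy")

def summarize(transcript: str) -> list[str]:
    """Keep only sentences containing clinically relevant keywords."""
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    return [s for s in sentences if any(k in s.lower() for k in CLINICAL_FLAGS)]
```

The resulting bullet list is what gets folded into the next nurse's email briefing.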

Interactive 3D Spatial Visualization

  • The platform replaces static spreadsheets with a high-fidelity Digital Twin built in Three.js.
  • Users can dive into a 4-floor, 60-room interactive hospital model or navigate a 3D global network to visualize federated learning across sites. This spatial interface allows toggling between Hospital, Patient, and Provider views to see real-time occupancy and assigned schedules.

How we built it

High-quality clinical data construction

  • Filtered over 3 million synthetic patients, using complete EHR data generated with Synthea.
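The filtering step might look like the sketch below: from Synthea's generated records, keep only patients whose emergency-department encounters carry enough observations to serve as features. Column names (`Id`, `PATIENT`, `ENCOUNTER`, `ENCOUNTERCLASS`) follow Synthea's CSV export schema; the `min_obs` threshold is illustrative.

```python
from collections import defaultdict

def ed_patients_with_observations(encounters, observations, min_obs=3):
    """Return patient IDs with at least min_obs observations in ED encounters.

    encounters/observations are row dicts as loaded from Synthea's CSVs.
    """
    ed_encounter_ids = {
        e["Id"] for e in encounters if e["ENCOUNTERCLASS"] == "emergency"
    }
    obs_counts = defaultdict(int)
    for obs in observations:
        if obs["ENCOUNTER"] in ed_encounter_ids:
            obs_counts[obs["PATIENT"]] += 1
    return {p for p, n in obs_counts.items() if n >= min_obs}
```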

Proprietary model training

  • Trained a logistic-regression model to predict whether an ED patient will be admitted as an inpatient (bed need).
  • Trained a gradient-boosted model to predict a patient's length of stay.
  • Both models use medical observations taken during the ED visit as features.
  • We also simulated multiple hospitals, trained a localized model per hospital for each task, and used federated learning to train a global/shared model for both tasks.
  • This allows under-resourced hospitals to benefit from our models without any raw patient data ever leaving their systems.
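The federated setup can be sketched as follows: each simulated hospital fits a local bed-need model on its own data, and only the learned weights are combined into the shared global model, FedAvg-style. This sketch uses a plain NumPy logistic regression for self-containment; the actual per-hospital training and aggregation details are simplified here.

```python
import numpy as np

def train_local(X, y, lr=0.1, epochs=200):
    """Fit logistic regression by gradient descent; returns the weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))          # predicted admission probability
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on log-loss
    return w

def federated_average(local_weights, sizes):
    """Combine per-hospital weights, weighting each by its dataset size (FedAvg)."""
    return np.average(np.stack(local_weights), axis=0, weights=np.asarray(sizes, float))
```

Only `local_weights` and `sizes` cross hospital boundaries; the raw feature matrices `X` never do.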

Multi-agent system via OpenAI

  • Built three agents (risk agent, resource agent, nurse agent).
  • Developed custom tools including prediction of bed need, prediction of length of stay, resource filtering, greedy allocation, batch allocation, and more.
  • Used GPT-5-mini as the backbone via the OpenAI API.
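As an example of one such tool, here is a sketch of greedy allocation together with the JSON schema the OpenAI API would receive so the model can call it via function calling. Field names (`id`, `risk`) and the risk-first ranking rule are illustrative, not the production definitions.

```python
def greedy_allocate(patients, free_beds):
    """Assign free beds to patients in descending risk order; returns {patient_id: bed}."""
    ranked = sorted(patients, key=lambda p: p["risk"], reverse=True)
    assignment, beds = {}, list(free_beds)
    for p in ranked:
        if not beds:
            break
        assignment[p["id"]] = beds.pop(0)
    return assignment

# Tool schema in the OpenAI chat-completions "tools" format.
GREEDY_ALLOCATE_TOOL = {
    "type": "function",
    "function": {
        "name": "greedy_allocate",
        "description": "Assign free beds to patients in descending risk order.",
        "parameters": {
            "type": "object",
            "properties": {
                "patients": {"type": "array", "items": {"type": "object"}},
                "free_beds": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["patients", "free_beds"],
        },
    },
}
```

When the agent returns a `greedy_allocate` tool call, the backend executes the Python function and feeds the result back into the conversation.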

Real-time activity monitor

  • As the system allocates resources, the multi-agent system emits events via WebSocket to enable full observability of the decision-making process.
  • This includes all agent inputs, tool calls, and outputs.
  • We also included a chatbot for administrators to ask questions about logs from the multi-agent system (powered by GPT-5 via the OpenAI API).
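The emission side can be sketched as a small event stream: every agent step is wrapped in a structured, sequence-numbered event and pushed to subscribers (over a WebSocket in production; an in-memory broadcaster here). The event schema is an assumption for illustration.

```python
import itertools
import json
import time

class EventStream:
    def __init__(self):
        self._seq = itertools.count()
        self.subscribers = []  # callables that accept a JSON string

    def emit(self, agent, kind, data):
        event = {
            "seq": next(self._seq),  # monotonically increasing, for ordering
            "ts": time.time(),
            "agent": agent,          # e.g. "risk", "resource", "nurse"
            "kind": kind,            # e.g. "input", "tool_call", "output"
            "data": data,
        }
        message = json.dumps(event)
        for send in self.subscribers:
            send(message)
        return event
```

Because inputs, tool calls, and outputs all flow through one `emit` path, the admin chatbot can later answer questions against a complete, ordered log.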

Conversation analysis via OpenAI

  • Allows nurses to record conversations with patients during rounds and save these notes in the schedules of future nurses visiting the same patient, helping contextualize care.
  • Nurses also receive email notifications prior to scheduled patient visits.

Frontend

  • Next.js, React, Tailwind, Three.js (for 3D hospital model)

Backend

  • Python for model training, data preparation, multi-agent system, and conversation analysis.

Challenges we ran into

Data quality

  • Ensuring balance between positive and negative cases across datasets.
  • We repeatedly expanded the dataset to ensure sufficient volume and diversity for training goals.

Preventing decision conflicts

  • Coordinating the risk, resource, and nurse agents required clear role boundaries and arbitration logic to avoid contradictory actions.
  • We implemented priority rules and a centralized allocation validator to ensure recommendations (e.g., high admission risk) aligned with real-time resource availability before execution.
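A centralized validator of this kind might look like the sketch below: a recommendation is executed only if it is still consistent with live availability. The specific rules and field names are illustrative.

```python
def validate_allocation(recommendation, state):
    """Return (ok, reason) for a proposed bed assignment against live state."""
    bed = recommendation["bed"]
    if bed not in state["free_beds"]:
        return False, f"bed {bed} is no longer free"
    if recommendation.get("risk") == "high" and state["icu_capacity"] <= 0:
        return False, "high-risk patient but no ICU capacity"
    return True, "ok"
```

Running every agent recommendation through one validator is what prevents two agents from committing contradictory actions against the same bed.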

Real-time streaming

  • Building the WebSocket-based activity monitor required designing structured, high-throughput event pipelines capable of streaming every agent input, tool call, and allocation decision without introducing latency.
  • We also ensured ordering guarantees and fault tolerance so administrators could reliably observe and audit system decision-making in real time.
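One way to provide that ordering guarantee is a resequencing buffer on the consumer side: events that arrive out of order are held back and released strictly by sequence number. This is a minimal sketch with an assumed `seq` field, not the production implementation.

```python
class Resequencer:
    def __init__(self):
        self.next_seq = 0
        self.pending = {}  # seq -> event, buffered until its turn

    def push(self, event):
        """Accept an event; return any events now releasable in strict order."""
        self.pending[event["seq"]] = event
        released = []
        while self.next_seq in self.pending:
            released.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return released
```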

Nurse feedback loop

  • The nurse feedback loop closes the circle: nurses submit workload and acuity feedback (e.g., overwhelmed, missed visits), and the next scheduling run adjusts assignments accordingly.
  • This required reliably connecting multiple components across the system to ensure feedback translated into updated scheduling behavior.
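The adjustment step above can be sketched as a simple rule: reports of overload or missed visits lower that nurse's target load on the next scheduling run. The field names and the decrement-by-one rule are illustrative assumptions.

```python
def adjust_targets(base_targets, feedback):
    """Lower next-run visit targets for nurses who reported problems.

    base_targets: {nurse_id: planned visit count}
    feedback: list of {"nurse": nurse_id, "kind": report type} dicts
    """
    targets = dict(base_targets)
    for report in feedback:
        if report["kind"] in ("overwhelmed", "missed_visit"):
            nurse = report["nurse"]
            targets[nurse] = max(1, targets[nurse] - 1)  # never drop below one visit
    return targets
```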

Accomplishments that we're proud of

GPT-5 & Three.js Integration

  • Successfully bridged a complex multi-agent Python backend with a 3D frontend, enabling live UI updates as agents complete tasks.

Ambient Clinical Intelligence

  • Built a seamless pipeline that converts raw audio into structured, nurse-ready briefings, reducing administrative "paperwork" time.

Closed-Loop Feedback

  • Implemented a system where nurse reports of "overload" or "missed visits" are instantly ingested to re-calibrate the next scheduling run.

Spatial Operations

  • Developed an interactive model of a 56-room hospital to visualize resource distribution.

What we learned

Orchestration

  • Using GPT-5-mini for fast task execution and GPT-5 for deeper reasoning showed us that specialized agent roles can outperform a single-model approach.

Trust through Observability

  • In healthcare, showing the "thinking process" of agents through a real-time event stream is essential for user trust and safety.

Spatial State Management

  • Syncing a React UI with a Three.js canvas taught us how to handle complex reflows to keep 3D models and schedule panels aligned.

Privacy via Federation

  • We realized the future of medical AI lies in federated learning, allowing small clinics to benefit from global models without ever moving raw EHR data.

What's next for Atria

New agents

  • Discharge planning: Coordinate patient transitions and anticipate readiness.
  • Transfer coordination: Coordinate inter-unit transfers (e.g. ICU step-down) with placement and discharge.

Technical improvements

  • Reinforcement learning from clinician and operations feedback.
  • ED surge forecasting using historical and external signals.
  • Include existing inpatients in scheduling so full census drives placement and nurse rounds.

Longer-term vision

  • Pilot testing with realistic hospital datasets and workflows (e.g. with local hospital partners).
  • Single operational view across ED, inpatient units, ICU, and discharge so capacity and flow are coordinated.
  • Hospital digital twins so planners can run what-if scenarios for staffing and capacity (e.g. flu surge, new unit) before changing real operations.

Built With

  • gpt-5, gpt-5-mini, next.js, openai, python, react, tailwind, three.js, websockets, whisper