-
-
Top-down view of our 3D hospital, with reception, ICU, general ward, discharge, and staff room.
-
Dr. Patel making decisions during the live simulation.
-
Situation report generated at every tick, showing general stats and a short overview paragraph.
-
Event log showing doctors' choices and patient movement at every tick.
-
Different doctors with different workloads, and therefore different AI decisions.
-
End-of-simulation analysis report.
-
End report generated by the AI at the end of a simulation, detailing considerations and next steps.
Inspiration
Healthcare systems around the world face constant resource pressure - too few doctors, too few beds, too many patients. We wanted to visualise what that actually looks like in real time, and ask: can we use AI agents to model hospital entities and their interactions? C Clinic was born from the question of whether LLM-backed agents could model the complexity and cascade effects of a hospital under stress.
What it does
C Clinic is a real-time hospital simulation where every patient and doctor is a live AI agent. Doctors use LLM decision-making and personas to make triage decisions - choosing who to treat next based on severity, wait time, and bed availability - and can explain their reasoning on demand. Users can watch the simulation run at normal pace, then trigger a Mass Casualty Surge or Staff Shortage with a single click and observe the impact: ICU fills up, queues back up, throughput drops. A live map shows agents moving between the Waiting Room, General Ward, and ICU, while charts track occupancy, queue depth, and throughput in real time. Every parameter - arrival rate, tick speed, doctor count, bed count - can be tuned live from the control panel without restarting. When the simulation is stopped, an LLM-written Session Report analyses the full run - identifying phases, key interventions, mortality rate, and strategic recommendations.
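The triage policy described above (severity, wait time, bed availability) could look something like this rule-based sketch. The `Patient` fields, weights, and function names are illustrative stand-ins, not the actual C Clinic code:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    severity: int   # 1 (minor) .. 5 (critical)
    wait_ticks: int # ticks spent in the waiting room

def triage_score(p: Patient, icu_beds_free: int) -> int:
    """Hypothetical scoring: severity dominates, wait time breaks ties,
    and critical patients are discounted slightly when no ICU bed is
    free (they would only queue for one anyway)."""
    score = p.severity * 10 + p.wait_ticks
    if p.severity >= 4 and icu_beds_free == 0:
        score -= 5
    return score

def next_patient(queue: list[Patient], icu_beds_free: int) -> Patient:
    # Pick the highest-scoring patient in the waiting room.
    return max(queue, key=lambda p: triage_score(p, icu_beds_free))
```

An LLM-backed doctor would replace `triage_score` with a model call, but a deterministic function like this is what a rule-based fallback can use.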
How we built it
We split the project into four parallel workstreams with clean interfaces between them:
- Simulation Engine (Python + asyncio) - stateless tick loop managing patients, doctors, wards, and queues
- LLM Layer (Anthropic SDK) - injected as a callback into the engine; doctors call Claude Haiku for triage decisions and on-demand explanations
- Backend API (FastAPI + WebSockets) - broadcasts SimulationState every tick; exposes REST endpoints for scenarios and config
- Frontend (React 19 + TypeScript + Zustand + Recharts + Tailwind CSS v4) - live hospital map rendered with animated icons, three real-time charts, and an entity detail panel that shows AI summaries

The LLM layer is decoupled via an LLMInterface protocol, so the engine runs and is fully testable with rule-based fallback logic when no API key is present.
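The `LLMInterface` seam might look roughly like this; only the protocol name comes from the writeup, while `decide`, `RuleBasedLLM`, and `Engine` are hypothetical stand-ins for illustration:

```python
from typing import Protocol

class LLMInterface(Protocol):
    """The only surface the engine sees; any decision backend fits here."""
    def decide(self, prompt: str) -> str: ...

class RuleBasedLLM:
    """Fallback used when no API key is present: deterministic, fast, testable."""
    def decide(self, prompt: str) -> str:
        return "treat-most-severe"

class Engine:
    def __init__(self, llm: LLMInterface) -> None:
        self.llm = llm  # injected callback; swap in a real Claude client later

    def tick(self) -> str:
        # The engine never imports an SDK; it just asks its injected backend.
        return self.llm.decide("triage context ...")

engine = Engine(RuleBasedLLM())
```

Because the engine depends only on the protocol, unit tests can run the full tick loop against the rule-based backend with no network access.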
Challenges we ran into
- Keeping the simulation fast with async LLM calls - doctor decisions are non-blocking, but we had to be careful not to let queued LLM requests cause drift between ticks
- WebSocket fan-out at high tick rates - broadcasting full state every second to multiple clients required careful serialisation and connection management
- Coordinating four independent workstreams - agreeing on data contracts upfront (shared SimulationState types) was the only thing that made parallel development possible
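The non-blocking pattern behind the first challenge can be sketched as below, assuming a fixed tick cadence and a stand-in `fake_llm_call`; slow LLM results are simply harvested on a later tick instead of stalling the loop:

```python
import asyncio

async def fake_llm_call(doctor_id: int) -> str:
    """Stands in for a Claude call; may take longer than one tick."""
    await asyncio.sleep(0.05)
    return f"doctor-{doctor_id}: treat next"

async def run(ticks: int, tick_seconds: float = 0.02) -> list[str]:
    decisions: list[str] = []
    pending: set = set()
    for t in range(ticks):
        # Fire the LLM request without awaiting it, so the tick never blocks.
        pending.add(asyncio.create_task(fake_llm_call(t)))
        # Harvest whatever has finished; late results apply on a later tick.
        done = {task for task in pending if task.done()}
        decisions += [task.result() for task in done]
        pending -= done
        await asyncio.sleep(tick_seconds)  # cadence set by the tick, not by LLM latency
    for task in pending:  # drain stragglers at shutdown
        decisions.append(await task)
    return decisions

# decisions = asyncio.run(run(5))
```

A real engine would also cap `pending` so a burst of slow responses cannot pile up unboundedly.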
Accomplishments
Every entity on the map is a genuine AI agent - not a scripted animation but a live decision-making loop. The explainability panel is a highlight: clicking a doctor and reading its LLM-generated reasoning in real time is genuinely useful for analysis. And without an API key, the simulation falls back to rule-based logic and still runs a full demo.
What we learned
LLM agents work best when you give them a tight, structured context window, as verbose prompts with irrelevant state produce worse decisions and higher latency. Decoupling via protocol interfaces (LLMInterface) was worth the upfront overhead; it let us test the engine independently and swap in the real LLM with zero changes to simulation logic.
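The "tight, structured context" lesson might translate into something like this hypothetical prompt builder, which includes only decision-relevant fields and caps the queue rather than dumping the whole simulation state:

```python
import json

def build_triage_context(doctor: dict, queue: list, beds_free: dict) -> str:
    """Illustrative only: terse keys, no irrelevant state, and at most the
    five highest-severity waiting patients, serialised as compact JSON."""
    context = {
        "doctor": doctor["name"],
        "beds_free": beds_free,
        "queue": [
            {"id": p["id"], "sev": p["severity"], "wait": p["wait_ticks"]}
            for p in sorted(queue, key=lambda p: -p["severity"])[:5]
        ],
    }
    # Compact separators shave tokens, which lowers both cost and latency.
    return json.dumps(context, separators=(",", ":"))
```

Keeping the prompt this small is what makes per-tick decisions both cheaper and, in our experience, more reliable.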
What's next for C Clinic AI
Multi-hospital routing: ambulances choosing between hospitals based on real-time capacity. Real-data integration: seeding arrival rates and severity distributions from NHS or public hospital datasets for realistic scenario modelling. Additionally, a configurable 3D hospital model would let hospitals import their own layout and run the simulation accurately against it.
Built With
- node.js
- openrouter
- python
- react
- tailwind
- three.js
- typescript