Inspiration

A medical student spends 4 years in school, then 3-7 years in residency, and for most of that time they're watching, not doing. The first time many residents actually hold a laparoscopic instrument and cut tissue, there's a real patient on the table. That's terrifying. Surgical simulation exists, but it costs $100,000+ per machine. Most medical schools have one or two simulators shared across hundreds of students, and you get maybe 30 minutes a semester on them. The rest of the time you're watching YouTube videos of surgeries and hoping you remember the steps when it matters.

And it's not just about access; it's about feedback. When a student does get time on a simulator, there's usually no one standing next to them explaining what they're doing wrong. Attending surgeons are busy. Residents are overworked. The student makes the same mistakes over and over because nobody told them they were holding the grasper wrong, or that they just clipped the common bile duct instead of the cystic duct.

This is a life-and-death problem. Medical errors are the third leading cause of death in the US, and many of these errors happen because surgeons didn't get enough practice before they had to perform on real patients.

We asked ourselves: what if every med student could practice surgery unlimited times, from their laptop, with an AI coach watching their every move and giving them real-time feedback? No $100K machine. No scheduling conflicts. No waiting for an attending to be free. Just open your browser, pick up your instruments, and start learning. That's why we built LifeSaver.

What it does

You open LifeSaver in your browser. That's it: no downloads, no expensive hardware. Just your laptop and a webcam. You're standing inside a fully built 3D operating room. There's an operating table with a patient, surgical instruments on a Mayo stand, a doctor across from you, and a skeleton in a glass case for reference. You can walk around the room, look around, and interact with everything. There are three ways to use it:

Learning Mode — You click on any organ and it lights up. An AI voice (powered by ElevenLabs) starts explaining what that organ does, where it is, what blood supplies it, and what a surgeon needs to be careful about. You can ask follow-up questions out loud — "What happens if the cystic artery is accidentally cut?" — and the AI answers you in natural speech. It's like having a professor standing right next to you in the anatomy lab, except available 24/7.
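To give a flavor of how a flow like this can be wired up in React Three Fiber, here's a minimal sketch; the /api/explain route, voice ID, key handling, and material colors are illustrative assumptions, not our exact code:

```tsx
// Illustrative sketch of the click-to-explain flow. The /api/explain route and
// voice ID are assumptions; a real app keeps the API key server-side.
import { useState } from "react";
import * as THREE from "three";

const VOICE_ID = "<voice-id>";      // hypothetical ElevenLabs voice
const ELEVENLABS_KEY = "<api-key>"; // in practice, proxy TTS through the backend

function Organ({ name, geometry }: { name: string; geometry: THREE.BufferGeometry }) {
  const [selected, setSelected] = useState(false);

  async function explain() {
    setSelected(true);
    // Ask the LLM backend for a short explanation of the clicked organ
    const { text } = await (await fetch(`/api/explain?organ=${name}`)).json();
    // ElevenLabs text-to-speech: POST /v1/text-to-speech/{voice_id} returns audio
    const audio = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${VOICE_ID}`, {
      method: "POST",
      headers: { "xi-api-key": ELEVENLABS_KEY, "Content-Type": "application/json" },
      body: JSON.stringify({ text, model_id: "eleven_turbo_v2" }),
    });
    new Audio(URL.createObjectURL(await audio.blob())).play();
  }

  return (
    <mesh geometry={geometry} onClick={explain}>
      {/* light the organ up when selected */}
      <meshStandardMaterial color="#b55" emissive={selected ? "#44ff88" : "#000"} />
    </mesh>
  );
}
```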

Practice Mode — You hit "Start Practice Run" and the system walks you through a full laparoscopic cholecystectomy (gallbladder removal) — the most common surgery residents learn first. It tells you which instrument to pick up, shows you exactly where to place it with glowing target zones, and warns you if you're near a danger zone, while an AI coach gives you real-time tips. Your webcam tracks your hand movements using MediaPipe, so you're actually using your hands, not just clicking buttons.
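A minimal sketch of what that tracking loop can look like with the @mediapipe/hands package; the mapping from normalized webcam coordinates to table space is a simplification, not our exact calibration:

```ts
// Illustrative sketch: track the index fingertip with MediaPipe Hands and
// project it onto the operating-table plane. The mapping ranges are assumptions.
import { Hands, Results } from "@mediapipe/hands";
import * as THREE from "three";

const video = document.querySelector<HTMLVideoElement>("#webcam")!;
const instrumentTip = new THREE.Object3D(); // stands in for the held instrument

const hands = new Hands({
  locateFile: (f) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${f}`,
});
hands.setOptions({ maxNumHands: 1, minDetectionConfidence: 0.7 });

hands.onResults((results: Results) => {
  const lm = results.multiHandLandmarks?.[0];
  if (!lm) return;
  const tip = lm[8]; // landmark 8 = index fingertip, x/y normalized to [0, 1]
  // Map webcam space to a ~60 cm x 40 cm working area above the table
  instrumentTip.position.set(
    (tip.x - 0.5) * 0.6, // left-right across the table
    1.0 - tip.z * 0.5,   // rough depth cue -> height above the floor
    (tip.y - 0.5) * 0.4  // up-down in the frame -> toward/away from you
  );
});

// Feed webcam frames to MediaPipe on each animation tick
async function tick() {
  await hands.send({ image: video });
  requestAnimationFrame(tick);
}
tick();
```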

Surgery Mode — Same idea, but harder. Instead of the full abdomen model, you're working on individual organs (heart, kidney, digestive system) placed on the table. The zones are tighter, the steps are different, and you need to be more precise. This is meant to feel closer to an actual procedure.
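Under the hood, target and danger zones reduce to distance checks against the instrument tip. A simplified sketch (zone positions, radii, and labels are made up for illustration):

```ts
// Illustrative sketch: sphere-based target/danger zones checked each frame.
import * as THREE from "three";

interface Zone {
  center: THREE.Vector3;
  radius: number; // meters; Surgery Mode uses tighter radii than Practice Mode
  kind: "target" | "danger";
  label: string;
}

const zones: Zone[] = [
  { center: new THREE.Vector3(0.05, 1.02, 0.1), radius: 0.015, kind: "target", label: "cystic duct" },
  { center: new THREE.Vector3(0.07, 1.03, 0.1), radius: 0.02, kind: "danger", label: "common bile duct" },
];

// Return the first zone the instrument tip is inside, if any
function checkZones(tip: THREE.Vector3): Zone | undefined {
  return zones.find((z) => tip.distanceTo(z.center) < z.radius);
}
```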

After every session, LifeSaver doesn't just give you a letter grade and move on. It takes everything you did (how fast you completed each step, whether you used the right instruments, how many times you entered a danger zone, how consistent your timing was) and encodes it all into a 32-dimensional performance vector. Think of it as your surgical fingerprint for that session. These vectors are stored over time. When you come back a week later and do another session, LifeSaver uses cosine similarity to compare your new performance against all your past sessions. It shows you exactly where you improved, where you're still struggling, and finds sessions with similar patterns so you can spot trends. It's not just practice — it's measurable, trackable skill development.
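The comparison itself is plain vector math. A sketch of the idea; the four features below are stand-ins, not our actual 32-slot layout:

```ts
// Illustrative sketch: encode session metrics into a fixed-length vector and
// compare runs with cosine similarity. The feature layout is a stand-in.
interface SessionMetrics {
  stepTimesSec: number[];        // time spent on each procedure step
  correctInstrumentRate: number; // fraction of steps done with the right tool
  dangerZoneEntries: number;     // how often a danger zone was entered
  timingStdDevSec: number;       // consistency of pacing across steps
}

function encode(m: SessionMetrics): number[] {
  const v = new Array(32).fill(0);
  m.stepTimesSec.slice(0, 29).forEach((t, i) => (v[i] = t / 60)); // minutes
  v[29] = m.correctInstrumentRate;
  v[30] = Math.min(m.dangerZoneEntries / 10, 1); // clamp to [0, 1]
  v[31] = m.timingStdDevSec / 60;
  return v;
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Rank past sessions by similarity to the run that just finished
const pastSessions = [{ id: "week1", vector: new Array(32).fill(0.1) }]; // from MongoDB
const current = encode({
  stepTimesSec: [42, 95, 61, 130],
  correctInstrumentRate: 0.9,
  dangerZoneEntries: 2,
  timingStdDevSec: 18,
});
const trends = pastSessions
  .map((p) => ({ id: p.id, score: cosineSimilarity(current, p.vector) }))
  .sort((a, b) => b.score - a.score);
```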

How we built it

Frontend: React Three Fiber for the 3D operating room with real medical organ models. MediaPipe tracks your hands through the webcam to control instruments.

AI: Google Gemini for coaching, organ explanations, and answering voice questions. ElevenLabs for natural spoken responses.

Backend: Express.js + MongoDB Atlas for session storage, plus a custom 32-dim vector embedding engine that encodes each surgery into a performance fingerprint for similarity search and skill tracking over time.
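As a rough sketch of the backend shape (the route, database, and field names are illustrative assumptions, not our exact schema):

```ts
// Illustrative sketch: Express route that stores a session's 32-dim
// performance vector in MongoDB Atlas. Names and schema are assumptions.
import express from "express";
import { MongoClient } from "mongodb";

const app = express();
app.use(express.json());

const client = new MongoClient(process.env.MONGODB_URI!);
const sessions = client.db("lifesaver").collection("sessions");

app.post("/api/sessions", async (req, res) => {
  const { userId, mode, metrics, vector } = req.body; // vector: number[32]
  if (!Array.isArray(vector) || vector.length !== 32) {
    return res.status(400).json({ error: "expected a 32-dim vector" });
  }
  const { insertedId } = await sessions.insertOne({
    userId, mode, metrics, vector, createdAt: new Date(),
  });
  res.json({ id: insertedId });
});

client.connect().then(() => app.listen(3001));
```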

Challenges we ran into

Translating 2D webcam hand positions into accurate 3D instrument movement on the operating table. Medical 3D models come in wildly different scales; one kidney was the size of the room. Merging code from three people working on different systems at the same time. Making the AI coach helpful without being annoying.
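The scale problem has a mechanical fix: measure each model's bounding box after loading and rescale it to a known extent. A sketch with three.js (the target size is an assumption):

```ts
// Illustrative sketch: normalize a loaded organ model to a target size so a
// kidney can't end up the size of the room. Target extent is an assumption.
import * as THREE from "three";

function normalizeScale(model: THREE.Object3D, targetMeters = 0.15): void {
  const box = new THREE.Box3().setFromObject(model);
  const size = box.getSize(new THREE.Vector3());
  const largest = Math.max(size.x, size.y, size.z);
  if (largest > 0) {
    model.scale.multiplyScalar(targetMeters / largest);
  }
  // Re-center so the model sits at the table origin
  box.setFromObject(model);
  const center = box.getCenter(new THREE.Vector3());
  model.position.sub(center);
}
```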

Accomplishments that we're proud of

A full surgical simulator that runs in a browser tab: no downloads, no VR headset, no expensive equipment. The 32-dim vector embedding system that turns every surgery session into a searchable performance fingerprint. An AI coach that actually knows what step you're on and warns you before you make a mistake.

What we learned

How badly surgical training needs accessible simulation tools. Vector embeddings work for encoding human performance, not just text. Integrating multiple AI services (vision, language, voice) into one real-time app is the real engineering challenge.

What's next for LifeSaver

More procedures beyond cholecystectomy. MongoDB Atlas Vector Search for persistent skill tracking at scale. Instructor mode where a teacher can watch and guide students live. Partnering with medical schools to get this in front of students who need it.
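Moving the similarity search server-side could look roughly like this, assuming a vector index named "perf_vectors" over the sessions collection (index and field names are assumptions):

```ts
// Illustrative sketch: server-side similarity via Atlas Vector Search.
// The index name ("perf_vectors") and field names are assumptions.
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI!);
const sessions = client.db("lifesaver").collection("sessions");

async function findSimilar(currentVector: number[]) {
  return sessions
    .aggregate([
      {
        $vectorSearch: {
          index: "perf_vectors",      // cosine-similarity index on the vector field
          path: "vector",
          queryVector: currentVector, // the 32-dim fingerprint just computed
          numCandidates: 200,
          limit: 5,
        },
      },
      { $project: { userId: 1, createdAt: 1, score: { $meta: "vectorSearchScore" } } },
    ])
    .toArray();
}
```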

Built With

React Three Fiber, MediaPipe, Google Gemini, ElevenLabs, Express.js, MongoDB Atlas
