Surgentic Project Story: Revolutionizing Surgical Safety and Education
The Problem
Medical professionals often work grueling schedules that can lead to fatigue and performance decline, and overworked surgeons may inadvertently skip the critical mental checklists meant to ensure patient safety. Despite the success of initiatives like the WHO Surgical Safety Checklist, which has been shown to reduce post-surgical mortality by 36.6% when properly implemented, traditional paper-based systems see a completion rate of just 37% during surgical sign-outs. However, studies show that digital checklist aids can push compliance close to 100%. This gap underscores an urgent need: leveraging digital technology not only to support surgical safety but also to empower medical residents with real-world practice and continuous feedback.
Our Solution
We built a multimodal surgery data collection and observability platform designed specifically for high-stakes clinical operations. Our platform captures everything that happens in the operating room, integrating vision and audio data, to ensure surgical practices are safe, compliant, and educational. Key features include:
Comprehensive Data Integration:
Our system collects and synthesizes multimodal data from surgeries, acting as an advanced digital assistant. It aids surgeons with real-time compliance, replacing the need for traditional medical scribes and live EHR systems.

Educational Enhancement:
Addressing the shortage of real-world surgical practice for medical residents, our platform acts as an autonomous end-to-end educational assistant. It provides live feedback during surgery and generates comprehensive post-operation reports with both qualitative and quantitative assessments.

Foundation for AI-Driven Training:
The anonymized, aggregated data forms the basis for training models on successful surgeries, creating opportunities for continuous learning and improvement. Because data is aggregated at scale without compromising individual anonymity, we face minimal resistance from stakeholders.
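To make the compliance idea concrete, here is a minimal sketch of how a digital checklist tracker might record confirmations and report a compliance rate. The phase and item names loosely mirror the WHO Surgical Safety Checklist, but the class and field names are hypothetical illustrations, not the platform's actual code.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical phases and items, loosely modeled on the WHO checklist.
PHASES = {
    "sign_in": ["Patient identity confirmed", "Anesthesia safety check complete"],
    "time_out": ["Team introductions done", "Antibiotic prophylaxis given"],
    "sign_out": ["Instrument and sponge counts correct", "Specimen labelling confirmed"],
}

@dataclass
class ChecklistTracker:
    completed: dict = field(default_factory=dict)

    def confirm(self, phase: str, item: str) -> None:
        """Record a checklist item as confirmed, with a timestamp."""
        self.completed.setdefault(phase, {})[item] = datetime.utcnow().isoformat()

    def compliance(self) -> float:
        """Fraction of all checklist items confirmed so far."""
        total = sum(len(items) for items in PHASES.values())
        done = sum(len(items) for items in self.completed.values())
        return done / total

tracker = ChecklistTracker()
tracker.confirm("sign_in", "Patient identity confirmed")
print(f"Compliance: {tracker.compliance():.0%}")  # 1 of 6 items confirmed
```

A real system would tie each confirmation to the voice agent or OR sensors rather than manual calls; the point here is only that digital capture makes the compliance rate directly measurable.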
How We Built It
Our development journey was rooted in the challenge of achieving seamless multimodal data interoperability. We conducted extensive trial and error, fine-tuning models for each type of data through rigorous A/B testing to perfect our project flow. Key technical highlights include:
Data Fusion in the OR:
We set out to capture a diverse range of data from the operating room, including visual, audio, and video streams, so that our system could provide both real-time and post-operation analysis.

Conversational Voice Assistance:
Leveraging Eleven Labs, we built an augmented voice agent that interacts with surgical staff. This agent is supported by advanced search tools like Perplexity Sonar, which provide contextual awareness and sophisticated reasoning.

Real-Time Compliance and Reporting:
Our platform not only monitors compliance before, during, and after surgery but also delivers real-time conversational feedback. Post-surgery, it generates detailed reports that offer:
- An overview of the procedure (both pre-surgery and during surgery)
- Scoring metrics for educational assessment
- Actionable insights for improvement
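The report structure above can be sketched as a simple data model. This is an illustrative schema only; the field names, metrics, and scoring scale are assumptions for the example, not the platform's real report format.

```python
from dataclasses import dataclass, field

@dataclass
class OperationReport:
    """Hypothetical post-operation report: summaries, scores, and insights."""
    procedure: str
    pre_op_summary: str
    intra_op_summary: str
    scores: dict = field(default_factory=dict)    # metric name -> 0-100 score (assumed scale)
    insights: list = field(default_factory=list)  # actionable improvement notes

    def overall_score(self) -> float:
        """Unweighted mean of all scoring metrics."""
        return sum(self.scores.values()) / len(self.scores) if self.scores else 0.0

report = OperationReport(
    procedure="Laparoscopic appendectomy",
    pre_op_summary="Checklist completed; imaging reviewed.",
    intra_op_summary="No complications; estimated blood loss minimal.",
    scores={"checklist_compliance": 95, "communication": 88},
    insights=["Confirm antibiotic timing earlier during sign-in."],
)
print(report.overall_score())  # 91.5
```

In practice the summaries and scores would be produced by the multimodal models, with the structured report serving as the artifact residents review after surgery.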
By integrating various AI tools with a deep understanding of the surgical environment, our platform enhances patient safety, streamlines surgical workflows, and transforms the educational landscape for future medical professionals.
Challenges we ran into
- Integrating many different APIs into one project
- Infrastructure for collecting data for AI agents was limited and hard to find
- Designing a functioning front-end with almost zero front-end experience
- Being forced to pivot and change our idea halfway through the hackathon
Accomplishments that we're proud of
- Worked with multiple modalities (video, audio, vitals, text, images)
- Integrated an end-to-end product covering the whole surgical process, from pre-surgery through the operation to post-surgery
- Successfully built a functional MVP within 36 hours!
What we learned
- How to integrate multiple APIs efficiently
- The importance of deliberate UX design
- Team collaboration and time management in an environment that forced us to iterate, fail, and recover repeatedly in a short period of time
User Interviews
- Conducted user interviews with multiple doctors and learned that extending our medical alert system to monitor patient vitals would be ideal. This is why we started working with the Terra API, incorporating sample data to simulate vitals monitoring and queue alerts based on that data.
- Acquired one LOI from a doctor holding purchasing power in his hospital (note: this hospital is based outside of the US).
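The simulated vitals monitoring mentioned above can be sketched with sample data and a simple alert queue. The thresholds, readings, and function names here are illustrative assumptions; in a deployed system the readings would come from a live vitals feed (e.g. via the Terra API) rather than hard-coded values.

```python
import queue

# Illustrative normal ranges (assumed values, not clinical guidance).
THRESHOLDS = {"heart_rate": (50, 120), "spo2": (92, 100)}

def check_vitals(reading: dict, alerts: queue.Queue) -> None:
    """Queue an alert for any vital that falls outside its normal range."""
    for vital, value in reading.items():
        low, high = THRESHOLDS.get(vital, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alerts.put({"vital": vital, "value": value, "range": (low, high)})

alerts = queue.Queue()
sample_stream = [
    {"heart_rate": 82, "spo2": 98},   # within range: no alerts
    {"heart_rate": 134, "spo2": 89},  # both out of range: two alerts
]
for reading in sample_stream:
    check_vitals(reading, alerts)
print(alerts.qsize())  # 2
```

A queue decouples detection from delivery, so a downstream consumer (e.g. the voice agent) can surface alerts without blocking the monitoring loop.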
What's next for Surgentic
- Incorporate compliance requirements from major hospital chains
- Run a pilot study
- Refine and optimize our AI models to reduce latency
- Incorporate real-life and real-time patient vitals into our solution.
Built With
- css
- elevenlabs
- gemini
- javascript
- neon
- openai
- perplexity
- python
- react
- supabase
- terra-ai
- typescript