Inspiration
Our project began with a shared interest in computer vision, and attending a talk about safety in medicine focused that interest on patient safety solutions. A pivotal moment came when one of our team members consulted a friend specializing in Biomedical Engineering about surgical procedures. This conversation revealed that although cameras are often present in operating rooms, they are underutilized for vision-based applications.
Another team member recalled a subplot from Grey’s Anatomy where a surgical team inadvertently left a towel inside a patient’s body after a procedure. This error, while fictional, should never occur in real-world scenarios. Upon further investigation, we discovered that more than 1,500 cases of retained surgical items occur annually in the United States alone. This insight cemented our idea: to create a system that tracks items entering and exiting a patient's body during surgery to prevent such incidents.
What It Does
SafeOps is an innovative application designed to address the serious issue of Retained Surgical Items (RSIs) in healthcare. By leveraging real-time camera monitoring within the operating room, SafeOps automatically tracks surgical instruments throughout the procedure. This ensures no instrument is accidentally left inside the patient, enhancing patient safety and preventing surgical errors.
How We Built It
SafeOps was developed using a modern tech stack, with a React-based front-end and a Computer Vision-powered backend supported by MongoDB. The front-end utilizes React and Material UI components, emphasizing simplicity and clarity—key elements in medical settings.
For the backend, we trained an in-house YOLOv5 model using a combination of everyday items and medical instruments from the Mayo Clinic instruments dataset. By integrating Ultralytics YOLOv5 and DeepSORT, we enabled the system to capture images from any camera, classify surgical tools, and track them in real time. OpenCV assists with detection and tracking, while Flask APIs facilitate communication between the front-end and back-end. MongoDB manages the tracking data, storing key details such as instrument ID, entry/exit timestamps, and the instrument's status (inside or outside the patient).
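The tracking records described above can be sketched as status updates keyed by instrument ID. This is a minimal illustration rather than our production code: the `record_event` and `retained_items` helpers and the in-memory store are hypothetical stand-ins for the MongoDB collection, which in the real system is written to via a driver such as pymongo.

```python
from datetime import datetime, timezone

# In-memory stand-in for the MongoDB collection; the real system would
# insert/update these documents through a MongoDB driver instead.
tracking_log = {}

def record_event(instrument_id, event):
    """Record an instrument crossing the cavity boundary.

    event is "entry" or "exit"; the document mirrors the schema above:
    instrument ID, entry/exit timestamps, and current status.
    """
    now = datetime.now(timezone.utc).isoformat()
    doc = tracking_log.setdefault(instrument_id, {
        "instrument_id": instrument_id,
        "entry_timestamps": [],
        "exit_timestamps": [],
        "status": "outside",
    })
    if event == "entry":
        doc["entry_timestamps"].append(now)
        doc["status"] = "inside"
    elif event == "exit":
        doc["exit_timestamps"].append(now)
        doc["status"] = "outside"
    return doc

def retained_items():
    """The safety check: instruments still marked inside the patient."""
    return [d["instrument_id"] for d in tracking_log.values()
            if d["status"] == "inside"]
```

At the end of a procedure, any ID returned by `retained_items()` would trigger an alert before closing.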
Challenges We Faced
Developing SafeOps came with its share of challenges. On the front-end, optimizing the video stream to minimize latency required significant trial and error. On the back-end, preprocessing multiple datasets for our YOLO model proved to be complex. We had to standardize the bounding box labeling and classifier values for both everyday and surgical items.
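Standardizing the labels meant converting every dataset's bounding boxes into YOLO's normalized center format. A minimal sketch of that conversion, assuming source boxes arrive as pixel corner coordinates (the function name is ours):

```python
def to_yolo_box(x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert corner-format pixel coordinates to YOLO's normalized
    (x_center, y_center, width, height) format, each in [0, 1]."""
    x_c = (x_min + x_max) / 2.0 / img_w
    y_c = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return x_c, y_c, w, h
```

For example, a box from (100, 200) to (300, 400) in a 640x480 frame becomes (0.3125, 0.625, 0.3125, 0.4167), which is written alongside a remapped class index in the label file.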
Training the model initially posed issues as well, since running on a CPU led to lengthy processing times. To overcome this, we migrated to PACE-ICE and utilized an Nvidia GPU, reducing training time significantly and resulting in a robust YOLO model for object classification.
Another hurdle arose with object tracking: OpenCV initially struggled with accuracy, often double-counting items or slowing down during detection. However, through continuous testing and refinement, SafeOps emerged as a functional proof of concept, demonstrating successful tracking with household items. We are eager to further develop this capability for actual surgical tools in operating room environments.
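One mitigation for the double counting was to count each tracker ID only once across frames. The helper below is an illustrative sketch of that idea, not our exact code, assuming a stream of per-frame DeepSORT track IDs:

```python
def unique_items(track_ids):
    """Collapse a stream of per-frame track IDs into the ordered list of
    distinct items seen, so re-detections of the same physical item
    across frames are not tallied twice."""
    seen = set()
    ordered = []
    for tid in track_ids:
        if tid not in seen:
            seen.add(tid)
            ordered.append(tid)
    return ordered
```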
Accomplishments We're Proud Of
We are immensely proud of creating an MVP for SafeOps. One of our major achievements is the seamless integration between the front-end and back-end, allowing for a live-streamed video feed and an accurate surgical item tracking system.
On the front-end, the clean UI and live video capabilities were crucial milestones. On the back-end, we are particularly proud of our in-house YOLO model, which performs well in classifying both everyday and medical items based on our aggregated dataset. The process of configuring the environment and integrating multiple frameworks and libraries was an invaluable learning experience. Despite encountering numerous bugs and challenges along the way, we found the journey to be as rewarding as the destination.
Lessons Learned
Through SafeOps, our team delved into a wide array of technologies and frameworks, including React, OpenCV, DeepSORT, PACE-ICE, Flask, YOLO, and MongoDB. Each of us took on responsibilities that pushed us out of our comfort zones, mastering these tools to build a fully functional application. The collaborative nature of our team was key to our success, as we supported each other through the process of debugging and development. Planning our milestones and dividing tasks thoughtfully also helped set us up for success, ensuring a smooth workflow throughout the project.
What’s Next for SafeOps
Having successfully validated SafeOps by tracking household items, our next step is to refine the system for surgical environments. We plan to expand its capabilities by enabling it to recognize a broader range of surgical instruments and adapt to challenging conditions, such as low visibility or visual obstructions.
Additionally, SafeOps will be extended to track medication usage during surgeries, which will automate patient billing and streamline hospital inventory management. In the future, SafeOps will not only track items within the body cavity but will monitor instruments throughout the entire surgical process. By collecting this data, hospitals can optimize resource usage, reduce costs, and improve compliance with surgical protocols.
With these enhancements, SafeOps will evolve into a comprehensive solution that elevates both safety and operational efficiency in healthcare settings.