Github link: https://github.com/RohanJoshi28/SmartChart

Inspiration

We have a certified EMT on our team who understands the difficulties EMTs face when recording information. EMTs work in high-pressure situations where they must arrive at the right diagnosis and treatment quickly. We concluded that reducing the time spent on patient documentation and protocols lets EMTs shift more of their time and focus to actual patient care. To this end, we developed SmartChart, a web app and deployable machine designed for more efficient and consistent delivery of EMS care.

What it does

EMTs are required by law to meticulously record their patient interactions, from vital signs to EMS interventions, and many do so with pen and paper. Not only is this inefficient, but it can also lead to inaccuracies and miscommunication. SmartChart offers a fully virtual solution to this problem, allowing EMTs to log all relevant information into an electronic chart. What makes our application unique is the use of modern technology such as artificial intelligence, computer vision, optical character recognition, and wireless communication to not only save the EMT time on call, but to reduce the chance of malpractice and guide the EMT towards the correct treatment decisions.

Core functionality

At the core of SmartChart is the patient charting software, which allows the EMT to digitally record patient demographics, vitals, presentation, and medical history, and to export the data to whoever is receiving the patient. To make taking vitals more efficient, we designed the first machine learning model capable of patient documentation through the EMT's voice alone. It intelligently detects and responds to a variety of speech patterns and orderings, and can process multiple vital inputs at once. Furthermore, if the EMT initially took handwritten notes on the patient and now wants to log them digitally, SmartChart is equipped with optical character recognition that allows for seamless insertion into the chart.
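As a rough illustration of how several vitals can be pulled out of one spoken utterance in any order, here is a minimal regex-based sketch; the field names and phrasings are our assumptions for this example, not SmartChart's actual speech model:

```python
import re

# Illustrative patterns only -- the real system uses a trained model.
VITAL_PATTERNS = {
    "heart_rate": r"(?:heart rate|pulse)\D*(\d{2,3})",
    "respiratory_rate": r"(?:respiratory rate|respirations)\D*(\d{1,2})",
    "blood_pressure": r"(?:blood pressure|BP)\D*(\d{2,3}\s*(?:over|/)\s*\d{2,3})",
    "spo2": r"(?:oxygen saturation|SpO2|sat)\D*(\d{2,3})",
}

def extract_vitals(transcript: str) -> dict:
    """Return every vital mentioned in the transcript, regardless of order."""
    found = {}
    for field, pattern in VITAL_PATTERNS.items():
        match = re.search(pattern, transcript, re.IGNORECASE)
        if match:
            found[field] = match.group(1)
    return found

vitals = extract_vitals("BP 120 over 80, pulse 88, sat 97")
```

A single utterance mentioning blood pressure, pulse, and oxygen saturation fills three chart fields at once.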

Recommended Actions Page

In concert with the patient charting page is the recommended actions page, which updates dynamically with the entered patient information using a novel algorithm tailored for EMS care. With this algorithm, the page presents protocols only for the conditions the patient is most likely presenting with, allowing patient care to start as soon as possible. If any treatment or medication is contraindicated given the entered patient information, the app notifies the EMT within all relevant protocols, reducing the chance of harmful mistakes.
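The contraindication check can be sketched as a lookup against a rule table; the treatments and rules below are illustrative placeholders, not SmartChart's actual protocol data:

```python
# Hypothetical rule table mapping a treatment to findings that rule it out.
CONTRAINDICATIONS = {
    "aspirin": ["aspirin allergy", "active bleeding"],
    "nitroglycerin": ["systolic bp below 100", "recent pde5 inhibitor use"],
    "oral glucose": ["unable to swallow"],
}

def flag_contraindications(treatment, patient_findings):
    """Return the patient findings that contraindicate the given treatment."""
    rules = CONTRAINDICATIONS.get(treatment.lower(), [])
    return [finding for finding in patient_findings if finding.lower() in rules]

warnings = flag_contraindications(
    "Nitroglycerin", ["systolic bp below 100", "chest pain"]
)
```

When the chart already records a disqualifying finding, the warning surfaces inside every protocol that lists that treatment.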

Patient Assessment Page

In addition to providing fast diagnostics based on the vitals logged by the EMT, SmartChart also supports diagnostics powered by computer vision. Photos of the patient's symptoms can be uploaded to the patient assessment tab, and the app will classify the patient's condition using a custom machine learning model trained on common patient presentations, including urticaria, edema, cyanosis, facial drooping, and more. Once the patient presentation is classified, the app puts all of the relevant protocols and information at the EMT's fingertips.

Patient Communication and Transport

To help overcome language barriers, we created the patient communication page, which offers the EMT on-demand audio or text translations in 10 different languages. Lastly, the transportation page alerts the EMT to inclement weather conditions in the area and provides a list of the nearest facilities, including hospitals, burn centers, and trauma centers, among others. The entered patient information is also used to provide a custom facility suggestion tailored to each patient.
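The tailored facility suggestion can be thought of as a mapping from the suspected condition to a preferred destination type; the condition names and mapping below are assumptions for illustration, not the app's actual decision logic:

```python
# Hypothetical mapping from suspected condition to preferred facility type.
FACILITY_PREFERENCE = {
    "burn": "burn center",
    "stroke": "stroke center",
    "major trauma": "trauma center",
}

def suggest_facility_type(suspected_conditions):
    """Pick a specialty center if one matches, otherwise default to a hospital."""
    for condition in suspected_conditions:
        if condition in FACILITY_PREFERENCE:
            return FACILITY_PREFERENCE[condition]
    return "hospital"  # default destination when no specialty center applies

suggestion = suggest_facility_type(["burn", "hypotension"])
```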

How we built it

We built our project using Streamlit, a Python framework that lets us create and host a website using Python widgets.

To convert natural forms of text such as handwriting into the patient's information fields, we used the Google Cloud Vision API and optical character recognition to process the text, then used large language models fine-tuned on our data to separate the text into the necessary fields, such as the patient's name, heart rate, temperature, and blood pressure. We also let EMTs use their phone's or computer's microphone: we transcribed the voice data into text and then used large language models to separate this text into the necessary fields.

For transportation, we used conditionals to process the patient data entered by the EMT and, applying medical knowledge, identify the conditions the patient might be facing. After this, we used Google Maps to find the relevant centers that treat that condition close to the patient's location.
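Once a facility type is chosen, picking the closest candidate reduces to a great-circle distance comparison. Here is a minimal haversine sketch with made-up facility names and coordinates (the real app queries Google Maps):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearest_facility(ambulance, facilities):
    """facilities: list of (name, lat, lon); returns the name of the closest."""
    return min(facilities, key=lambda f: haversine_km(*ambulance, f[1], f[2]))[0]

facilities = [
    ("County Hospital", 40.74, -74.03),         # hypothetical coordinates
    ("Regional Trauma Center", 40.52, -74.45),  # hypothetical coordinates
]
closest = nearest_facility((40.73, -74.00), facilities)
```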

For the patient assessment tab, we trained a PyTorch convolutional neural network to detect common presentations such as edema, facial drooping, and burns.
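A minimal sketch of such a classifier is below; the class list, layer sizes, and input resolution are our assumptions for illustration, not the trained SmartChart model:

```python
import torch
import torch.nn as nn

# Illustrative class list -- the deployed model's labels may differ.
CLASSES = ["urticaria", "edema", "cyanosis", "facial_drooping", "burn"]

class PresentationClassifier(nn.Module):
    """Small CNN that maps a 224x224 RGB photo to presentation logits."""

    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):  # x: (batch, 3, 224, 224)
        x = self.features(x)
        return self.head(x.flatten(1))

model = PresentationClassifier()
logits = model(torch.randn(1, 3, 224, 224))  # one dummy RGB image
```

In the app, the highest-scoring class would drive which protocols are surfaced.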

Challenges we ran into

One challenge we ran into was finding the best optical character recognition engine for our needs. Open-source models like Tesseract and EasyOCR weren't accurate enough, so we decided to use the Cloud Vision API, which was extremely accurate on handwriting.

What's next for SmartChart

We will require users to be certified EMTs to ensure they have the background knowledge and training necessary to correctly carry out suggested actions. Our software will be completely open source and free to use. Our goal is to get the system into as many ambulance corps as we can.
