Inspiration

We are both pre-medical students who volunteer at Ascension Seton Hospital. At the hospital, we noticed that nurses are overworked and too busy to respond to all their patients. We want to fix this by making it easier for nurses to respond to their patients while also giving patients a better hospital experience.

What Happens Today

Today, when patients need help, they use a call light or a phone to notify the nurse of their need. The nurse answers the call and, based on the request, visits the room to help. But because this process is slow and tedious, and nurses receive so many requests at once, they can't answer every call, and some patients stop getting responses.

Our Solution

To fix this, we've built a platform that lets patients voice-record their needs to their nurse through a portal. The platform sends each request to the nurse, who sees a list of all requests prioritized by severity and recency. Centralizing requests this way makes it easier for nurses to respond quickly to every patient.
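The ordering above can be sketched in a few lines. This is a simplified illustration, not our production code: the names `PatientRequest` and `prioritize` are hypothetical, and in the real system the severity score comes from the model rather than being hand-assigned.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PatientRequest:
    room: str
    transcript: str
    severity: int          # 1 (routine) to 5 (urgent); assigned by the model in practice
    created_at: datetime

def prioritize(requests):
    """Order requests by severity (highest first), breaking ties by recency
    (longest-waiting request first, so no one is left behind)."""
    return sorted(requests, key=lambda r: (-r.severity, r.created_at))

now = datetime.now()
queue = prioritize([
    PatientRequest("301", "Can I get some water?", 1, now - timedelta(minutes=2)),
    PatientRequest("305", "I'm having chest pain.", 5, now - timedelta(minutes=1)),
    PatientRequest("302", "My IV alarm is beeping.", 3, now - timedelta(minutes=10)),
])
print([r.room for r in queue])  # → ['305', '302', '301']
```

Sorting on a `(-severity, created_at)` tuple keeps the queue stable and easy to reason about: urgency always wins, and among equally urgent requests the patient who has waited longest is served first.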

Our Portals

We have two portals: a nurse portal and a patient portal. To get started, a nurse logs in on two devices: their own device, where they see requests, and the patient's monitor, which lets the patient send requests. Once the nurse has responded to a request, they hit 'Resolve.'

The Language Barrier

The language barrier is a huge problem in the medical field. To address it, our platform supports four languages: English, Spanish, French, and German, and it translates patient requests into English for nurses.

Other Built-In Features

We have other built-in features as well. For patients, we've made it easy to view their current providers, when their next meal arrives, when they take their next medications, their release date, and their vitals, all to improve transparency. For nurses, we provide access not only to active requests but also to previously resolved requests and total request counts. If a nurse is too busy and needs help, they can click 'Help' next to a request to ask the nurse with the fewest active responsibilities on the floor to respond to the patient.
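The 'Help' handoff reduces to picking the least-loaded nurse on the floor. A minimal sketch, assuming each nurse's active-request count is tracked (the function name and data shape here are illustrative, not our actual schema):

```python
def least_busy_nurse(active_counts):
    """Given {nurse_name: number_of_active_requests}, return the nurse
    with the fewest active requests on the floor."""
    return min(active_counts, key=active_counts.get)

# Example floor state (hypothetical names and counts):
floor = {"Alice": 4, "Bea": 1, "Chris": 3}
print(least_busy_nurse(floor))  # → 'Bea'
```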

How we built it

In our project, we integrated two AI models: OpenAI's Whisper model to transcribe and translate patient requests, and the GPT-3.5-turbo model to prioritize requests by severity and recency. We wrote the backend and the model integrations in Python, and built the rest of the application with React (TypeScript), Tailwind, Vite, and MongoDB.
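The prioritization step boils down to assembling a prompt from the pending requests and parsing the model's scores back out. The sketch below shows one way that could look; the prompt wording, field names, and reply format are illustrative assumptions, not our exact production code, and the actual model call goes through the OpenAI chat completions API.

```python
import json

def build_priority_prompt(requests):
    """Assemble a prompt asking the model to score each request's severity.
    `requests` is a list of {"id": ..., "transcript": ...} dicts
    (an illustrative schema, not our exact one)."""
    lines = "\n".join(f'{r["id"]}: {r["transcript"]}' for r in requests)
    return (
        "You are triaging hospital patient requests. For each request below, "
        "reply with a JSON object mapping request id to a severity from 1 "
        "(routine) to 5 (emergency).\n\n" + lines
    )

def parse_severities(model_reply):
    """Parse the model's JSON reply into {request_id: severity}."""
    return {k: int(v) for k, v in json.loads(model_reply).items()}

prompt = build_priority_prompt([
    {"id": "r1", "transcript": "I need another blanket."},
    {"id": "r2", "transcript": "I can't breathe."},
])
# The prompt is sent to GPT-3.5-turbo; a reply might look like:
print(parse_severities('{"r1": 1, "r2": 5}'))  # → {'r1': 1, 'r2': 5}
```

Keeping the prompt construction and reply parsing as pure functions makes this step easy to unit-test without hitting the API.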

Challenges we ran into

We tried to generate mock data to train our own prioritization model, but there wasn't enough data to make it effective. For now we use the GPT-3.5-turbo model, but in the future, we want to collect a large dataset and train the model ourselves.

Accomplishments that we're proud of

We're proud of the complexity and design of our project. We spent a long time ensuring the UI was readable for patients and nurses, especially since this platform will be used in a busy environment. We are also proud of the many built-in features, such as letting patients access information about their vitals, timelines, and providers, based on our experiences with patients wanting this information in the hospital.

What we learned

This was our first time implementing LLMs into a full-stack project. We learned how versatile they can be and how they can add a lot more complexity and depth to our ideas.

What's next for Call Light

Instead of using OpenAI's GPT-3.5-turbo model, we want to build our own model with data collected from Ascension Seton Hospital. We also want to accommodate many more languages, such as Hindi and Chinese, that are common in patient populations.

After this hackathon, we want to introduce this platform to Ascension Seton Hospital and implement this technology on their patient floors.

Built With

Python, React (TypeScript), Tailwind, Vite, MongoDB, OpenAI Whisper, GPT-3.5-turbo
