Inspiration

Both of us come from medical families and have lived the immigrant experience of being childhood translators in healthcare settings. We've felt the weight of translating critical medical information we barely understood ourselves. DiagnoSure was born from this reality—our vision of healthcare that transcends language barriers while preserving the humanity that makes medicine meaningful. Because in the end, quality healthcare shouldn't depend on your ability to speak English; knowledge and equity are universal rights.

What it does

Our project is split into three phases:

  1. "Pre-screening": we gather preliminary information about the patient's illness and symptoms. The patient talks with the model, which works to narrow down their potential conditions. Once the model has gathered enough information to be confident, its findings are compiled into a report the primary physician can use to accelerate treatment. This phase supports virtually any language, letting patients describe their symptoms in a language familiar to them and boosting overall diagnostic accuracy.
  2. "Validation": we provide doctors with an extra "set of eyes" to verify the diagnoses being made. By taking into account all available data and the information the patient has provided, we cross-check the physician's assessment, potentially identifying subtle yet valuable inconsistencies that may save a life. The breadth and scope of the data we plan to employ make our approach unique here.
  3. "Post-diagnostic explanation": for many people, medical terminology is overly complex and never really explains why they are sick, yet doctors are incredibly busy and cannot serve as educators for every single patient. To bridge this, we trained a model that gives each patient a plain-language breakdown of their medical information, taking their entire history into account and building a cohesive narrative so they can fully understand what is going on with their health.
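The pre-screening phase boils down to a confidence-gated question loop: keep gathering symptoms until the model is confident, then compile a report. A minimal sketch of that control flow, where the `PreScreening` class, the threshold, and the toy confidence update are all hypothetical stand-ins for the real LLM-driven logic:

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff for "enough information"

@dataclass
class PreScreening:
    """Toy model of the pre-screening loop: ask follow-ups until confident."""
    symptoms: list[str] = field(default_factory=list)
    confidence: float = 0.0

    def record_answer(self, symptom: str) -> None:
        # In the real system an LLM would update its differential here;
        # we fake it: each new symptom narrows the possibilities a bit.
        if symptom not in self.symptoms:
            self.symptoms.append(symptom)
            self.confidence = min(1.0, self.confidence + 0.3)

    def needs_more_info(self) -> bool:
        return self.confidence < CONFIDENCE_THRESHOLD

    def compile_report(self) -> dict:
        """Compile the findings into a report for the primary physician."""
        return {
            "symptoms": self.symptoms,
            "confidence": round(self.confidence, 2),
        }

session = PreScreening()
for answer in ["persistent cough", "low-grade fever", "night sweats"]:
    if not session.needs_more_info():
        break
    session.record_answer(answer)

report = session.compile_report()
```

In the real pipeline the confidence signal comes from the model itself, and the report carries much richer structure, but the loop shape is the same.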

How we built it

We built this project with an assortment of generative AI, employing state-of-the-art techniques such as Retrieval-Augmented Generation (RAG) alongside bleeding-edge APIs like OpenAI's Realtime voice/transcription API, which also enables live translation. The flexibility of these models and techniques let us process immense amounts of data, significantly boosting the accuracy of our project, and their multilingual abilities let us increase the equity of healthcare: patients who can speak in the language they are most comfortable with are far more likely to describe their illness accurately.
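At its core, RAG means fetching the most relevant reference text for a query and prepending it to the prompt before the LLM call. A self-contained sketch with a toy bag-of-words retriever (the documents, scoring, and prompt format are illustrative assumptions; a real deployment would use embeddings and a vector database):

```python
import math
from collections import Counter

# Tiny hypothetical knowledge base standing in for indexed clinical references.
DOCUMENTS = [
    "Influenza commonly presents with fever, cough, and body aches.",
    "Seasonal allergies cause sneezing, itchy eyes, and congestion.",
    "Strep throat involves sore throat, fever, and swollen lymph nodes.",
]

def _vector(text: str) -> Counter:
    """Bag-of-words term counts (a crude stand-in for an embedding)."""
    return Counter(text.lower().replace(",", "").replace(".", "").split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = _vector(query)
    return sorted(DOCUMENTS, key=lambda d: _cosine(q, _vector(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Augment the patient's question with retrieved context before the LLM call."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nPatient question: {query}"

prompt = build_prompt("I have a fever and a bad cough")
```

The augmented prompt then goes to the generation model, which grounds its answer in the retrieved context rather than relying on parametric memory alone.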

Challenges we ran into

We ran into many issues with MongoDB and setting up its connection pools. We also hit rate limits very frequently with the Realtime API (its limits are incredibly low), which slowed testing down significantly, but we were still able to bring it all together for this project!
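The standard mitigation for low rate limits is retrying with exponential backoff. A minimal stdlib sketch, where `RateLimitError` is a placeholder for the real client exception (e.g. the one raised by the API library) and the flaky call is simulated:

```python
import time

class RateLimitError(Exception):
    """Placeholder for the API client's rate-limit (HTTP 429) exception."""

def with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call`, sleeping 1s, 2s, 4s, ... between rate-limited attempts."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            sleep(base_delay * (2 ** attempt))

# Simulated endpoint that is rate-limited on its first two calls.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return "ok"

result = with_backoff(flaky, sleep=lambda s: None)  # skip real sleeping in the demo
```

Injecting `sleep` as a parameter keeps the demo (and tests) fast while the production path uses `time.sleep`.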

Accomplishments that we're proud of

Honestly, we're both very proud that we were able to make it this far. This was definitely a very tough hackathon in terms of mental exhaustion, so being able to sit down and still push out a project was something really special to us. We just hope that you all love it as much as we do!

What we learned

With this project, we pushed our knowledge of what AI can be applied to. We had never thought to use it to bridge language barriers between individuals and had assumed we would need more traditional translation tools. To our surprise, however, the Realtime API provided by OpenAI did a phenomenal job detecting languages and responding in them (while still giving us an English transcription!). As a result, we began to understand AI's potential to expand opportunity to everyone, regardless of background. We also learned a ton about augmenting models with techniques like RAG, which seems to be where the future of AI lies.

What's next for #11 DiagnoSure

We envision this product becoming more tightly integrated with hospitals and existing EHR pipelines. Apps already exist that aim to make medical charts accessible to patients, but patients are rarely given any help interpreting those charts. We see our product as a way of bridging this knowledge gap. If we can bridge the language barrier as well, which our product has already demonstrated it can do, then we are one step closer to affordable and equitable healthcare. When doctors accurately understand a patient's symptoms, they have a far better chance of treating that patient accurately too.

Built With
