Inspiration

Last year, our teammate Matthew tore his ACL playing sports. He spent 12 months recovering and going to physical therapy nearly every week. Listening to the stories from his recovery journey, we realized that today's medical system is far from perfect.

While his doctors have been more than amazing, the medical administrative system around them is filled with inefficient paperwork. All patients, regardless of their personal circumstances, receive the same generic questionnaires to report their condition, and they only get to discuss their individual situations during short, infrequent face-to-face appointments.

Patients lack a channel to report the personal information crucial to their care, and the bureaucratic paperwork leaves them feeling alienated and disregarded. Providers lose valuable time to that same paperwork, time that could be spent helping more patients. We aim to solve the problem on both sides.

What it does

MedForm is more than a simple AI form generator. We transform boring medical forms into interactive AI conversations that improve accessibility and uncover patients' true needs. MedForm makes filling out a form feel like having a face-to-face conversation with a healthcare provider.

  1. Generate a form with one click: A healthcare provider starts by describing the form they need, for example, "Generate a weekly recovery assessment survey for my patients with ACL injuries." Within seconds, MedForm produces an AI-generated form with questions that fit that need.
  2. Interactive forms: The provider then shares a MedForm link with their patients. What makes our forms unique is that no two patients see the same questions: the AI tailors each question to the patient's previous responses and asks follow-ups to uncover more insight. It also gives feedback on each answer, acknowledging how the patient feels and offering immediate medical suggestions (a rough sketch of this follow-up logic appears after this list).
  3. Personalized responses, like a conversation: For patients with accessibility needs, the AI can read each question aloud and let the patient answer by speaking, just like talking with their healthcare provider. Studies also suggest that related imagery helps people connect with a question and improves response quality, so every question is paired with a relevant background video.
  4. Query chatbot for accurate qualitative analytics: Once responses are collected, the provider can ask our AI for insights, for example, "How did Matthew's knee pain change over the last few weeks?" This saves the provider from reading through each response one by one. Unlike traditional form analytics, which only handle quantitative answers, the query feature draws insights from qualitative responses, the conversations that reveal patients' deeper needs.
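
To make the follow-up behavior concrete, here is a minimal sketch of how the next question could be generated from the conversation so far; the function, prompt wording, and model name are illustrative rather than our exact implementation.

```typescript
// Sketch: produce feedback plus one follow-up question from the conversation
// so far. The prompt wording and model name here are illustrative.
import OpenAI from "openai";

const openai = new OpenAI();

type Turn = { question: string; answer: string };

async function nextQuestion(formTopic: string, history: Turn[]) {
  const transcript = history
    .map((t) => `Q: ${t.question}\nA: ${t.answer}`)
    .join("\n");

  const completion = await openai.chat.completions.create({
    model: "gpt-4-turbo-preview",
    messages: [
      {
        role: "system",
        content:
          `You are a caring medical intake assistant running a form about: ${formTopic}. ` +
          "Acknowledge the patient's latest answer in one sentence, then ask exactly one follow-up question.",
      },
      { role: "user", content: transcript },
    ],
  });

  return completion.choices[0].message.content;
}
```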

How we built it

First, we feed the provider's instruction, templated into a carefully refined prompt, to GPT-4-Turbo. We applied several prompt engineering techniques, such as one-shot prompting and OpenAI's recommended practices, so that GPT generates the form information we need, including the questions and a video keyword for each one.
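
As an illustration, the templated prompt looks roughly like the sketch below; the one-shot example and field names are simplified and are not our production prompt.

```typescript
// Sketch of the form-generation prompt. The one-shot example shows GPT the
// JSON shape we want back; the fields are simplified for illustration.
import OpenAI from "openai";

const openai = new OpenAI();

const FORM_SYSTEM_PROMPT = `You create medical intake forms.
Respond with JSON only: {"title": string, "questions": [{"text": string, "videoKeyword": string}]}.

Example instruction: "Post-surgery pain check-in"
Example response: {"title": "Post-Surgery Pain Check-In", "questions": [
  {"text": "How would you rate your pain today from 1 to 10?", "videoKeyword": "hospital recovery"}]}`;

async function generateForm(providerInstruction: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4-turbo-preview",
    messages: [
      { role: "system", content: FORM_SYSTEM_PROMPT },
      { role: "user", content: providerInstruction },
    ],
  });
  // The prompt asks for JSON only, so the reply should parse directly.
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```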

We use the video keywords to search Pexels for background videos related to each question. Within the form, we use OpenAI Whisper to read the question aloud and Deepgram to let the patient answer by voice and transcribe the response in real time.
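
A minimal sketch of the video lookup, using Pexels' public video search API (PEXELS_API_KEY stands in for the key we configure in the environment):

```typescript
// Sketch: fetch one background video for a question's keyword using Pexels'
// public video search API. PEXELS_API_KEY is assumed to be set in the env.
async function backgroundVideoUrl(keyword: string): Promise<string | undefined> {
  const res = await fetch(
    `https://api.pexels.com/videos/search?query=${encodeURIComponent(keyword)}&per_page=1`,
    { headers: { Authorization: process.env.PEXELS_API_KEY! } },
  );
  const data = await res.json();
  // Take the first file of the top result, if the search returned anything.
  return data.videos?.[0]?.video_files?.[0]?.link;
}
```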

Lastly, we store the form responses in MongoDB (to be implemented soon; responses currently live in local storage). When the provider queries for insights about a patient, we fetch that patient's past responses and GPT composes an insight message for the provider.
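
Roughly, the query feature works like the sketch below; the types and field names are illustrative, and until the MongoDB layer lands the responses array can come straight from local storage.

```typescript
// Sketch: answer a provider's question from one patient's stored responses.
// Types and field names are illustrative.
import OpenAI from "openai";

const openai = new OpenAI();

type StoredResponse = { date: string; question: string; answer: string };

async function patientInsight(providerQuery: string, responses: StoredResponse[]) {
  const history = responses
    .map((r) => `${r.date} | ${r.question} | ${r.answer}`)
    .join("\n");

  const completion = await openai.chat.completions.create({
    model: "gpt-4-turbo-preview",
    messages: [
      {
        role: "system",
        content:
          "You summarize a patient's past form responses for their healthcare provider. " +
          "Answer the provider's question using only the responses given.",
      },
      { role: "user", content: `Responses:\n${history}\n\nQuestion: ${providerQuery}` },
    ],
  });

  return completion.choices[0].message.content;
}
```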

Challenges we ran into

When implementing the text-to-speech API, we initially stored the audio as an MP3 file. After deploying our Next.js project to Vercel, however, the API stopped working. It took us some time to debug and spot the issue: Vercel does not have a persistent file system. We therefore switched from writing a file to returning a blob of audio data to the frontend.
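
The pattern we moved to looks roughly like this, sketched as a Next.js App Router route; the TTS call shown uses OpenAI's speech endpoint purely as an example.

```typescript
// app/api/tts/route.ts. Sketch of the fix: return the audio bytes in the
// response body instead of writing an MP3 to disk, since Vercel's filesystem
// is not persistent. The TTS call uses OpenAI's speech endpoint as an example.
import OpenAI from "openai";

const openai = new OpenAI();

export async function POST(req: Request) {
  const { text } = await req.json();

  const speech = await openai.audio.speech.create({
    model: "tts-1",
    voice: "alloy",
    input: text,
  });

  // Send the raw audio back; the frontend wraps it in a Blob and plays it.
  const audio = Buffer.from(await speech.arrayBuffer());
  return new Response(audio, {
    headers: { "Content-Type": "audio/mpeg" },
  });
}
```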

During prompt engineering, we found it hard to extract exact values from GPT's output because of its randomness. After some thought, we realized that instructing GPT to output JSON would constrain its output to an easily extractable format. This approach proved helpful across all of our prompts.
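
On top of the prompt instruction, GPT-4-Turbo also supports OpenAI's JSON mode, which enforces syntactically valid JSON at the API level; a minimal sketch:

```typescript
// Sketch: GPT-4-Turbo's JSON mode (response_format) guarantees syntactically
// valid JSON; the prompt still has to spell out which fields we expect.
import OpenAI from "openai";

const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  model: "gpt-4-turbo-preview",
  response_format: { type: "json_object" },
  messages: [
    { role: "system", content: 'Reply with JSON: {"questions": string[]}' },
    { role: "user", content: "Weekly recovery assessment for ACL patients" },
  ],
});

const form = JSON.parse(completion.choices[0].message.content ?? "{}");
```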

What's next

We plan to add the following features:

  • A form editor that lets the provider review AI-suggested questions and edit them before the form goes live
  • An interactive chat that lets the provider work with the AI to improve the generated form
  • A more comprehensive portal where the provider can view responses and AI-summarized analytics
  • Security features to verify providers' identities, and privacy features to store patients' responses in compliance with HIPAA

Built With

  • deepgram
  • gpt
  • nextjs
  • tailwind
  • whisper