Inspiration

HearSay was born out of the need to address a terrifying statistic from the NIH: patients forget 40% to 80% of medical information immediately after leaving the doctor, and of the information they do remember, nearly half is recalled incorrectly. We realized this is not just a memory problem but a fundamental design failure in healthcare. This friction disproportionately affects elderly patients and non-native speakers, leading to medication errors, missed appointments, and worsening conditions. We wanted to build a tool that acts as a second brain in the room, actively advocating for the patient rather than just passively recording the conversation.
What it does

HearSay is an offline-first medical advocacy agent that transforms chaotic doctor-patient conversations into structured, life-saving data. It runs securely in the background during appointments, distinguishing casual small talk from critical clinical advice. Post-visit, the application generates a structured schedule of tasks such as taking medication doses and booking follow-up appointments. Uniquely, the system cross-references the symptoms the patient reported against the doctor's diagnosis to detect gaps in care. If a patient mentions chest pain but the doctor performs no cardiac checks, the app flags the discrepancy so the user can follow up.
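The gap-detection logic can be sketched as a simple cross-reference between reported symptoms and the checks performed during the visit. This is a minimal illustration only: the symptom-to-check map, the `flag_care_gaps` function, and all entries in it are hypothetical, not HearSay's actual clinical knowledge base.

```python
# Illustrative map from a reported symptom to the clinical checks that
# would normally address it (hypothetical entries, not medical guidance).
SYMPTOM_TO_CHECKS = {
    "chest pain": {"ecg", "cardiac exam", "troponin test"},
    "dizziness": {"blood pressure check", "neurological exam"},
}

def flag_care_gaps(reported_symptoms, performed_checks):
    """Return symptoms the visit never addressed with a matching check."""
    performed = {c.lower() for c in performed_checks}
    gaps = []
    for symptom in reported_symptoms:
        expected = SYMPTOM_TO_CHECKS.get(symptom.lower(), set())
        # A gap exists when we know what checks to expect and none occurred.
        if expected and not (expected & performed):
            gaps.append(symptom)
    return gaps

# The chest-pain scenario from above: pain reported, no cardiac check done.
gaps = flag_care_gaps(["chest pain"], ["blood pressure check"])
print(gaps)  # → ['chest pain']
```

A real system would extract both sides of this comparison from the transcript with the local language model rather than from hand-written lists.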
How we built it

Since this is an ideation track, we focused on system architecture and feasibility validation to ensure the concept could be built in a four-week sprint. We approached the system with a zero-trust mindset, prioritizing patient privacy above all else. The frontend is designed in React Native for immediate accessibility across both iOS and Android. To handle sensitive audio without data leaving the device, we selected a quantized version of OpenAI Whisper converted to CoreML for on-device speech processing. The logic layer relies on local Small Language Models to extract action items, ensuring that no patient data ever touches a cloud server.
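The processing flow described above (audio in, structured action items out, everything local) can be sketched with stubs standing in for the CoreML Whisper transcriber and the local SLM. The names `run_visit_pipeline`, `ActionItem`, and both fake components are hypothetical illustrations of the architecture, not working model code.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ActionItem:
    kind: str    # e.g. "medication" or "appointment"
    detail: str

def run_visit_pipeline(
    audio_path: str,
    transcribe: Callable[[str], str],            # stands in for on-device Whisper
    extract: Callable[[str], List[ActionItem]],  # stands in for the local SLM
) -> List[ActionItem]:
    """Audio to action items with no network calls anywhere in the path."""
    transcript = transcribe(audio_path)
    return extract(transcript)

# Stubbed components so the flow can be exercised without any model.
def fake_transcribe(path: str) -> str:
    return "Take lisinopril 10 mg daily. Book a follow-up in two weeks."

def fake_extract(text: str) -> List[ActionItem]:
    items = []
    if "lisinopril" in text:
        items.append(ActionItem("medication", "lisinopril 10 mg daily"))
    if "follow-up" in text:
        items.append(ActionItem("appointment", "follow-up in two weeks"))
    return items

items = run_visit_pipeline("visit.m4a", fake_transcribe, fake_extract)
```

Keeping the transcriber and extractor behind plain function interfaces is what lets the stubs be swapped for real on-device models later without changing the pipeline.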
Challenges we ran into

Our primary obstacle was resolving the tension between powerful AI utility and strict data privacy. Standard approaches rely on cloud APIs, which pose significant compliance risks for a healthcare application. We had to determine how to achieve high-level intelligence on a mid-range smartphone without internet access. We overcame this by researching edge computing and model quantization in depth, confirming that modern smartphones can handle clinical-grade transcription locally and allowing us to bypass external servers entirely.
Accomplishments that we're proud of

We are particularly proud of designing the logic flow for discrepancy detection, which transforms the app from a passive recorder into an active advocate. By mapping out how a specific symptom entity must trigger a corresponding treatment entity, we created a verifiable framework for identifying when a patient's concerns have been overlooked by their provider. This shifts the paradigm of health apps from simple tracking tools to active partners in patient safety.
What we learned

This process taught us that the future of personalized healthcare lies in Small Language Models rather than massive cloud-based systems. We discovered that prioritizing user privacy drives better engineering decisions, and that edge computing is sufficiently advanced to handle complex medical advocacy tasks without compromising data security. The barrier to entry for secure AI in healthcare is lower than expected if one commits to local processing.
What's next for HearSay

Our immediate goal is to move from architectural validation to a functional minimum viable product using the defined technology stack. Following development, we plan a pilot program with cardiology patients to measure adherence rates, then expand the language models to support non-native speakers, who face the highest risks of medical misunderstanding.