Inspiration
Every year, health insurers deny 15-20% of all claims, and increasingly use AI to do it at massive scale. Yet fewer than 1% of patients ever appeal, even though 50-80% of appeals get overturned when they reach independent review. The gap isn't because appeals don't work. It's because 69% of denied consumers don't even know they have the right to appeal, and those who do face incomprehensible jargon, fragmented regulations across four legal frameworks, and strict deadlines they don't know exist. The laws to protect patients already exist. People just don't have the tools to use them. That's why we built InsureBetter.
What it does
InsureBetter helps people fight wrongful health insurance denials. Upload your denial letter and the system extracts the key details, looks up the relevant regulations from a curated legal database, and drafts a properly structured appeal letter with real citations. It supports four legal tracks (ACA Marketplace, Medicare, ERISA employer plans, and Medicaid), selecting the right regulations and appeal process for each. The LLM never does freeform legal reasoning. It extracts facts, a deterministic lookup provides the legal basis, and the LLM composes the letter from verified components. No one should lose an appeal just because they didn't know how to ask.
How it works
InsureBetter runs a three-stage pipeline. First, the user uploads their denial letter (PDF or photo) and the system extracts the raw text using OCR. A language model then pulls out structured fields like the CARC denial code, procedure info, insurer name, and appeal deadline. Second, the extracted denial code hits a deterministic lookup against a curated database of ACA, Medicare, ERISA, and Medicaid regulations. This step is entirely rule-based. The CARC code maps to the relevant legal track, which returns the exact statutes, deadlines, appeal process, required evidence, and escalation paths. No LLM involved, no hallucination risk. Third, the language model composes a draft appeal letter using the extracted facts and the verified legal context from the lookup. Every citation in the letter traces back to the database, not to the model's own reasoning. Users can also enter their denial details manually if they don't have the letter on hand, and a built-in chat lets them ask questions or request targeted edits to the generated letter without starting over.
What we learned
LLMs are most reliable when you constrain their role. Splitting extraction, lookup, and generation into separate stages gave us much better accuracy than letting the model do end-to-end legal reasoning.
The insurance appeals landscape is far more fragmented than we expected. Mapping CARC codes to regulations across ACA, Medicare, ERISA, and Medicaid meant dealing with four completely different sets of deadlines, escalation paths, and procedural requirements.
Knowing when to say "I don't know" matters. In a legal/medical context, we learned it's better to flag low confidence and ask the user than to guess and risk generating a letter with the wrong legal basis.
Building a curated legal database by hand is slow but worth it. The quality of the generated letters is only as good as the lookup data behind them.
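The "flag low confidence and ask the user" lesson above can be sketched as a simple gate on the extraction stage. The field names and threshold below are assumptions for illustration, not the actual pipeline code.

```python
# Illustrative confidence gate: instead of guessing, the pipeline lists the
# extracted fields that the user should confirm before a letter is drafted.
REQUIRED_FIELDS = ("carc_code", "insurer", "appeal_deadline")
CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff, tuned by hand


def needs_user_review(extraction: dict) -> list[str]:
    """Return required fields that are missing or extracted with low confidence.

    `extraction` maps field names to (value, confidence) pairs produced by
    the extraction model.
    """
    flagged = []
    for field in REQUIRED_FIELDS:
        value, confidence = extraction.get(field, (None, 0.0))
        if value is None or confidence < CONFIDENCE_THRESHOLD:
            flagged.append(field)
    return flagged
```

A wrongly extracted CARC code would route the appeal to the wrong legal track entirely, so asking the user one extra question is cheap insurance compared to a letter built on the wrong legal basis.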
What's next for InsureBetter
Looking ahead, we want to expand our legal database to cover state-specific Medicaid regulations beyond the federal floor, since appeal processes vary wildly by state. We'd also like to add support for uploading multiple documents at once (like attaching supporting medical records alongside the denial letter) so the system can pull in stronger clinical evidence automatically. Longer term, we're interested in tracking appeal outcomes to learn which strategies actually work for specific denial types and insurers, and feeding that back into the system to improve the letters it generates over time.