About the project

This project turns any article, website, or text into a transparent, claim-by-claim dashboard with verifiable citations and spoken source readouts for rapid trust assessment. Each claim is extracted, checked against credible references, and returned with citations from the compiled sources, all on the local machine.

Inspiration

The team is motivated by the reality of hedge fund trading and the difficulty of distinguishing alpha from noise when decisions need to be made efficiently and confidently. Hedge funds require privacy, so our focus was making verification fast and rigorous whilst running all the compute locally.

How we built it

Frontend: a React app for the main UI, plus a browser extension that extracts user-highlighted snippets and shows a quick verdict, with source links and quotations, in a sidebar next to the content.

Backend: Python FastAPI service orchestrating local LLMs with tool-calling to retrieve sources, score relevance, and generate concise, citation-first summaries per claim.
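A minimal, framework-agnostic sketch of what the per-request handler behind that service could look like. In the real backend this sits behind a FastAPI route and the retrieval and scoring steps are tool calls made by a local LLM; the names here (Verdict, extract_claims, verify) are ours for illustration only.

```python
# Illustrative sketch of the claim-verification handler. In the real
# service the "unclear" default is replaced by an LLM-composed label
# backed by tool-retrieved evidence.
from dataclasses import dataclass, field

@dataclass
class Verdict:
    claim: str
    label: str                                     # "supported" / "contradicted" / "unclear"
    citations: list = field(default_factory=list)  # source URLs or quotes

def extract_claims(text: str) -> list:
    """Placeholder segmentation: treat each sentence as a candidate claim."""
    return [s.strip() for s in text.split(".") if s.strip()]

def verify(text: str) -> list:
    """Return one verdict per extracted claim."""
    return [Verdict(claim=c, label="unclear") for c in extract_claims(text)]
```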

Pipeline: Text is segmented, candidate claims are detected, retrieval tools gather supporting/contradicting evidence, and the LLM composes verdicts with a lightweight confidence score over ranked evidence.
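To make "a lightweight confidence score over ranked evidence" concrete, here is one way such a score could be computed, assuming each evidence item arrives ranked by relevance with a supports/contradicts stance. The 1/rank weighting is an illustrative choice, not necessarily what we ship.

```python
# Sketch of a rank-weighted confidence score: higher-ranked evidence
# counts more, and the sign of the result gives the verdict direction.

def confidence(stances):
    """stances: list of +1 (supports) / -1 (contradicts), best-ranked first.
    Returns a score in [-1, 1]; 0.0 when there is no evidence at all."""
    if not stances:
        return 0.0
    weights = [1.0 / (i + 1) for i in range(len(stances))]
    total = sum(w * s for w, s in zip(weights, stances))
    return total / sum(weights)
```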

Extension: The web extension communicates with the backend, annotates pages, and provides a side panel for citations and quotes.
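The extension itself is JavaScript, but the extension-to-backend contract can be sketched in Python for brevity. The endpoint name (/verify) and payload shape are assumptions for illustration.

```python
# Sketch of the request the extension sends for a highlighted snippet:
# a JSON POST carrying the text, answered with a verdict list.
import json
import urllib.request

def build_request(snippet, base_url="http://localhost:8000"):
    """Build the POST request for a highlighted snippet (illustrative shape)."""
    body = json.dumps({"text": snippet}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/verify",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```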

What we learned

Running LLMs locally with tool-call support taught us that prompt design for reliable tool use is as important as model choice; structured tool responses and schema validation drastically reduce errors in later steps. On the UX side, users trust short, source-led answers more than long narratives, so we made citations skimmable as the user reads down the UI.
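The schema-validation point can be sketched as a small guard applied to every tool response before it feeds the next step. The field names (source, quote, relevance) are illustrative; the real service may use a validation library, but the idea is the same: reject malformed tool output early.

```python
# Sketch of the schema check run on each tool response. Anything that
# does not match the expected shape is rejected before it can corrupt
# later pipeline steps.

REQUIRED = {"source": str, "quote": str, "relevance": float}

def validate_tool_response(obj):
    """Return obj if it matches the expected shape, else raise ValueError."""
    if not isinstance(obj, dict):
        raise ValueError("tool response must be an object")
    for key, typ in REQUIRED.items():
        if key not in obj:
            raise ValueError(f"missing field: {key}")
        if not isinstance(obj[key], typ):
            raise ValueError(f"bad type for {key}")
    return obj
```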

Challenges

Hallucination control: enforcing tool-only evidence, blocking unsupported "no evidence" or "has not happened yet" generations, and requiring each claim to be backed by a source.
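The "every claim backed by a source" rule can be sketched as a final gate over the model's output: verdicts without at least one tool-retrieved citation never reach the UI as bare assertions. The verdict shape here (claim, citations) is illustrative.

```python
# Sketch of the citation gate: split verdicts into those carrying
# tool-retrieved citations and those the model produced unbacked.

def enforce_citations(verdicts):
    """Return (backed, dropped); only backed verdicts are shown to users."""
    backed, dropped = [], []
    for v in verdicts:
        (backed if v.get("citations") else dropped).append(v)
    return backed, dropped
```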

Latency and locality: running local LLMs with tool calls can take more than a minute of thinking for a long article, so a requirement would be to stream thinking data to the frontend to keep the user informed.
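Streaming that thinking data could look like the generator below, which frames model chunks as server-sent events; in FastAPI such a generator would be wrapped in a StreamingResponse. The event names are our own illustrative choices.

```python
# Sketch of streaming intermediate "thinking" chunks to the frontend as
# text/event-stream messages, so long runs stay visible to the user.

def sse_events(chunks):
    """Yield each model chunk framed as a server-sent event, then a done marker."""
    for chunk in chunks:
        yield f"event: thinking\ndata: {chunk}\n\n"
    yield "event: done\ndata: [DONE]\n\n"
```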

What’s next

Expand connectors to more domains and formats such as video and images, introduce batch verification for newsletters, and continue optimising local inference for all-round efficiency.

How to use it

Open any article with the extension enabled, or paste text into the web app, then review the claim list, skim the verdicts, and audit the citations.
