Inspiration
Debatable started with a group of friends who couldn’t agree on anything - except that our feeds were showing us completely different versions of the world. We noticed how bias slips into almost everything online, from random recipe blogs to serious news.
Then we thought: what if we could see what’s actually debatable on the internet as we browse, and use that to form a more complete opinion? For example, if an article asks, “Will AI take over jobs?”, Debatable would highlight that sentence and encourage counter-searches or Devil’s Advocate perspectives.
We just wanted to make reading online a little less mindless - and maybe spark a few good debates along the way.
What it does
Debatable isn’t just another fact-checking tool. Instead of labeling content as true or false, it highlights sentences that are debatable and gives each one a confidence score. You can then use Gemini-powered tools to dig deeper, question the claim, or see it from another perspective - turning reading into a more thoughtful, interactive experience.
How we built it
Chrome Extension (MV3) Architecture
- Content Script (src/contentScript.js)
- Extracts and segments declarative sentences from the DOM
- Streams classification in batches, progressively highlighting inline text
- Injects a floating legend and an interactive side panel for visibility and navigation
- Background Service Worker (src/background.js)
- Orchestrates classification, caching (24‑hour TTL), and settings
- Normalizes outputs (category, confidence, rationale) and records light telemetry for debug
- Options/Popup/Panel UI (src/ui/*)
- User controls for categories, privacy mode, prompt/debug settings
- Rich panel for inspecting flagged statements and model reasoning
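The content script's sentence extraction can be sketched as a plain function. This is a simplified illustration, not the shipped code: the real content script walks DOM text nodes, but the filtering idea (keep reasonably long declarative sentences, drop questions and fragments) is the same, and the function name is ours.

```javascript
// Illustrative sketch: split visible text into sentences and keep only
// declarative ones. Questions and short fragments (nav labels, menus)
// are skipped so only classifiable claims reach the model.
function segmentDeclarativeSentences(text) {
  // Naive split on sentence-ending punctuation followed by whitespace.
  const parts = text.split(/(?<=[.!?])\s+/);
  return parts
    .map((s) => s.trim())
    .filter((s) =>
      s.length >= 20 &&       // skip fragments and UI labels
      !s.endsWith('?') &&     // keep declaratives only
      /[a-z]/i.test(s)        // must contain letters
    );
}

const sample =
  'Will AI take over jobs? Experts disagree about the long-term impact of automation on employment. Menu.';
console.log(segmentDeclarativeSentences(sample));
// → ['Experts disagree about the long-term impact of automation on employment.']
```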
Chrome Built‑in AI (Gemini Nano via Prompt API)
- Primary path: Uses Chrome’s on‑device Gemini Nano model to classify batches of sentences
- Prompt engineering:
- Compact, structured prompts with a definitions block for each category
- Strict JSON response schema (category, confidence, rationale) and resilient parsing
- PII‑aware sanitization pipeline when “Privacy mode” is enabled
- Robustness and UX:
- Progressive batching keeps pages responsive; users see highlights appear in real time
- Heuristic fallback if the model is unavailable (e.g., unsupported browser) with clear messaging
- Debug mode surfaces prompts, raw responses, and estimated token usage for iteration
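The "compact, structured prompt" idea can be sketched as a small builder. The category names, definitions, and exact wording here are hypothetical stand-ins; only the shape (a definitions block plus a strict JSON response schema of category, confidence, rationale) mirrors what's described above.

```javascript
// Hypothetical category set -- the real extension makes these
// user-configurable (see Design & UX below).
const CATEGORIES = [
  { id: 'debatable', definition: 'A claim reasonable people could dispute.' },
  { id: 'hyperbole', definition: 'Exaggeration not meant literally.' },
  { id: 'neutral',   definition: 'A verifiable fact or non-claim.' },
];

// Build one compact prompt for a batch of sentences, with a definitions
// block and an explicit JSON schema for the response.
function buildPrompt(sentences) {
  const defs = CATEGORIES.map((c) => `- ${c.id}: ${c.definition}`).join('\n');
  const items = sentences.map((s, i) => `${i + 1}. ${s}`).join('\n');
  return [
    'Classify each sentence into exactly one category.',
    'Categories:',
    defs,
    'Respond with only a JSON array of objects:',
    '{"id": <number>, "category": <string>, "confidence": <0..1>, "rationale": <string>}',
    'Sentences:',
    items,
  ].join('\n');
}
```

In the extension this string would be sent to the on-device model through Chrome's Prompt API session (roughly `await session.prompt(buildPrompt(batch))`); we keep the builder pure here so it stands alone.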
Caching & Performance
- In‑memory LRU‑ish cache mirrored to chrome.storage.local with a 24‑hour TTL
- Hash keys scoped by page URL + model version to avoid cross‑site leakage
- Batch processing with granular progress and ETA estimates
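The caching scheme above can be sketched in a few lines. This is a minimal stand-in, assuming a djb2-style string hash (the actual hash function is unspecified); the mirroring to `chrome.storage.local` is omitted so the example is self-contained.

```javascript
// 24-hour TTL, as described above.
const TTL_MS = 24 * 60 * 60 * 1000;

// Keys combine page URL + model version + sentence so cached results
// never leak across sites or across model upgrades.
function cacheKey(url, modelVersion, sentence) {
  // djb2-style hash -- an illustrative stand-in, not the shipped hash.
  let h = 5381;
  for (const ch of `${url}|${modelVersion}|${sentence}`) {
    h = ((h * 33) ^ ch.codePointAt(0)) >>> 0;
  }
  return h.toString(16);
}

class TtlCache {
  constructor() { this.map = new Map(); }
  set(key, value) { this.map.set(key, { value, expires: Date.now() + TTL_MS }); }
  get(key) {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) { this.map.delete(key); return undefined; }
    return entry.value;
  }
}
```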
Design & UX
- Glassmorphism legend, accessible tooltips, and keyboard‑friendly navigation
- Color‑coded categories are user‑configurable (ID, label, definition, colors)
- Side panel consolidates all flagged statements with search‑like scanability
Why Built‑in AI?
- Privacy: No API keys, no network calls, no data leaves the device
- Latency: Instant responses after one‑time model download
- Reliability: Works offline and avoids rate limits or quota exhaustion
Technical Highlights
- MV3 service worker + content script event pipeline
- Structured prompt templates with category definitions and schema validation
- Safe JSON decoding with fallbacks and reasoning truncation
- Lightweight DOM walker for accurate sentence ranges and stable highlighting
- Clear separation of model logic, caching, and UI for maintainability
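The "safe JSON decoding with fallbacks" highlight can be illustrated with a small parser. This is our sketch of the approach, not the shipped implementation: on-device models sometimes wrap JSON in prose or code fences, so we extract the first JSON array and fall back to an empty result instead of throwing.

```javascript
// Resilient parsing of model output: tolerate markdown fences and
// surrounding prose, validate the shape, and never throw.
function safeParseClassifications(raw) {
  const match = raw.match(/\[[\s\S]*\]/); // grab the [...] span, ignoring fences/prose
  if (!match) return [];
  try {
    const parsed = JSON.parse(match[0]);
    // Keep only entries that at least name a category.
    return Array.isArray(parsed) ? parsed.filter((x) => x && x.category) : [];
  } catch {
    return []; // malformed JSON -> empty result, caller shows fallback messaging
  }
}

console.log(safeParseClassifications(
  'Here you go:\n```json\n[{"category":"debatable","confidence":0.8}]\n```'
));
// → [{ category: 'debatable', confidence: 0.8 }]
```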
Challenges we ran into and what we learned
A major challenge was managing the unpredictable latency of the on-device Gemini Nano model. Because its throughput is highly sensitive to how much text a page contains, we saw wide fluctuations in processing speed.
This gave us crucial insight into the limitations of on-device LLMs and the need for performance-aware development. We mitigated the variability and sped up processing through two optimizations:
- Strategic prompt engineering
- Fine-tuning the input batch size
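The batch-size tuning can be sketched as a simple chunking step. The batch size of 8 below is illustrative, not the value we shipped: smaller batches keep per-call latency low and highlights appearing sooner, while larger batches amortize the fixed prompt overhead across more sentences.

```javascript
// Group sentences into fixed-size batches for classification.
// batchSize is the knob we tuned against page responsiveness.
function makeBatches(sentences, batchSize = 8) {
  const batches = [];
  for (let i = 0; i < sentences.length; i += batchSize) {
    batches.push(sentences.slice(i, i + batchSize));
  }
  return batches;
}
```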
Accomplishments that we're proud of
Our biggest win was finding the sweet spot between speed and context - keeping results accurate without slowing things down. Getting there took a lot of trial and error with prompt design: by testing and refining, we improved how reliably the model highlights debatable text while also cutting processing times. Some pages can still take a few minutes, but for most pages we have halved the time compared to where we started. Learning to balance context and efficiency turned out to be one of the most valuable lessons of the project.
What's next for Debatable
Right now, Debatable highlights debatable claims and hyperboles on every webpage you visit, providing a confidence score along with a brief explanation. In the future, we plan to integrate a Devil’s Advocate feature directly into the tool - so it can suggest counterarguments both through reasoning and by referencing relevant content online.
Built With
- chrome
- client-side
- css-animations
- css-grid
- css3
- dom-api
- flexbox
- gemini-nano
- html5
- javascript
- manifest-v3
- on-device-ai
- privacy-first
- prompt-api
- real-time
- responsive-design
- scripting-api
- side-panel-api
- storage-api
- streaming-api
- summarizer-api
- vanilla-js
- web-components