Vigil is a Chrome extension that flags rhetorical patterns likely to trigger cognitive biases while you read web content. It suggests neutral rewrites and alternatives, helping you slow down and process information more rationally.
- Detects cognitive bias triggers in page text (SemEval-style 14-technique taxonomy)
- Highlights flagged spans inline
- Shows findings in a sidepanel with explanations and severity
- Offers actions per item: rewrite, alternatives, or hide
- Supports multiple analysis paths:
  - fast local regex plugin (no LLM needed)
  - optional LLM plugin: in-browser (WebLLM), local API (Ollama), or cloud API
  - optional backend plugin server (FastAPI) for additional analyzers (for example, moralization detection)
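For intuition, the fast local path can be sketched as a few regex rules with span offsets. This is an illustrative sketch only: the patterns, labels, and `detect` function below are assumptions for the example, not Vigil's actual plugin code or taxonomy rules.

```python
import re

# Hypothetical regex detector sketch; patterns and labels are
# illustrative, not Vigil's actual rules.
PATTERNS = {
    "appeal-to-popularity": re.compile(r"\beveryone (knows|agrees)\b", re.I),
    "black-and-white": re.compile(r"\bthe only (option|choice|way)\b", re.I),
}

def detect(text: str) -> list[dict]:
    """Return one finding per match, with span offsets and a label."""
    findings = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            findings.append({
                "label": label,
                "start": m.start(),
                "end": m.end(),
                "span": m.group(0),
            })
    return findings

print(detect("Everyone knows this is the only way forward."))
```

Because it is pure pattern matching, a detector like this runs instantly in the page with no model download, which is why the regex plugin is the default quick-scan path.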
- Readers who want support processing information more rationally
- Researchers evaluating rhetoric and persuasion patterns
- Developers building bias-aware analysis plugins
## Annotated UI guide (A-G)
This example shows Vigil running on a real social post, from detection to mitigation.
Use the labels below to map each part of the UI quickly.
- **A** – highlighted phrase in the page text that was flagged as a cognitive bias trigger
- **B** – inline explanation card with trigger label and rationale
- **C** – sidepanel "Currently Viewing" block synced with the visible content
- **D** – findings summary row with count, label, and severity chip
- **E** – mitigation actions (Rewrite, Alternatives, Hide)
- **F** – reformulated output marker shown after rewrite is applied
- **G** – settings tab entry point for sensitivity, plugin, and model/backend configuration
## Settings panel quick guide
- analysis: set sensitivity (`Low`, `Medium`, `High`) and optional auto-action (`do nothing`, `rewrite`, `show alternatives`, `hide content`)
- plugins: choose which detectors run in the browser (regex, LLM, or both)
- plugin server: connect to `http://localhost:8787` (or your custom URL) to enable server-side analyzers
- llm provider: choose backend (`Local API`, `Cloud API`, `Browser AI`) and set model/API fields
- reset defaults: restore all settings to the default profile
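Putting the settings together, a privacy-conscious local-only profile might look like the sketch below. The key names in this dict are assumptions for illustration, not Vigil's actual settings schema; only the option values (sensitivity levels, auto-actions, providers, default server URL) come from this README.

```python
# Hypothetical local-only settings profile; key names are illustrative,
# not Vigil's actual schema. Values are the options listed above.
settings = {
    "sensitivity": "Medium",
    "auto_action": "do nothing",          # safest auto-action
    "plugins": ["regex"],                  # fast local detector only
    "llm_provider": "Local API",           # e.g. Ollama on your machine
    "plugin_server_url": "http://localhost:8787",
}

# Sanity checks against the documented option sets
assert settings["sensitivity"] in ("Low", "Medium", "High")
assert settings["llm_provider"] in ("Local API", "Cloud API", "Browser AI")
```

A profile like this keeps all analysis on your machine, which matches the local-only guidance in the privacy notes below.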
- Chrome 118 or newer
- Node.js 18+ (for local build)
- Optional:
  - WebGPU-capable browser/GPU for WebLLM mode
  - local Ollama for local API mode
  - cloud API key for cloud mode
```sh
cd frontend
npm install
npm run build
```

Then in Chrome:
- Open `chrome://extensions`
- Enable Developer mode
- Click Load unpacked
- Select `frontend/dist`
- Pin Vigil and open the sidepanel from the extension action
If you want server-side analyzers (for example `moralization-llm`):

```sh
cd backend
uv sync
uv run python app.py
```

Default server URL in the extension is `http://localhost:8787`.
- Treat results as prompts, not verdicts. A highlight means "look closer," not "this is false."
- Start with medium sensitivity. Move to high only when you need maximum recall and can tolerate more false positives.
- Prefer regex mode for quick scanning. Use LLM mode when you need richer explanations and reformulation.
- Verify important claims separately. Vigil checks rhetorical risk, not factual correctness.
- Keep auto-actions conservative. `do nothing` or `show alternatives` is usually safer than auto-hide.
- Export cache before sharing examples. Remove personal or sensitive text from exported JSON.
- Use cloud mode carefully. If you paste private content, assume it may be sent to your provider.
- Extension settings are saved in `chrome.storage.sync`.
- If you use cloud mode, API keys are stored in extension settings and synced by Chrome.
- If you use local API mode (for example Ollama), inference stays on your own machine.
- If you use backend plugins, text is sent to the configured server URL.
If you handle sensitive data, use local-only paths (regex + local API/WebLLM), and avoid cloud providers.
- `frontend/` – Chrome extension (TypeScript + Vite)
- `backend/` – optional plugin server (FastAPI + LiteLLM)
Frontend watch build:

```sh
cd frontend
npm install
npm run dev
```

Backend dev server:

```sh
cd backend
uv sync
uv run python app.py
```

Frontend type checks:

```sh
cd frontend
npm run type-check
```

Backend tests:

```sh
cd backend
uv run pytest
```

Also:
- manually test on at least one long-form article and one X/Twitter thread
- confirm no secrets are committed (`.env`, API keys, credentials)
- Keep plugin boundaries clean. Put frontend analyzers in `frontend/src/plugins` and server analyzers in `backend/plugins`.
- Keep defaults safe. New features should work without requiring cloud credentials.
- Prefer explainable outputs. Include spans, labels, severity, and concise rationale in findings.
- Make tradeoffs explicit in PRs. Note precision/recall impact and latency impact for detector changes.
- Add evaluation evidence for model/prompt updates. Include key metrics in the PR description.
- Keep UX predictable. Avoid auto-destructive actions and keep user control in the sidepanel.
- Document config changes. If you add env vars or settings, update this README and any relevant module README.
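To make the "explainable outputs" guideline concrete, a finding that satisfies it carries all four elements: span, label, severity, and a concise rationale. The field names below are assumptions for illustration, not Vigil's actual findings schema.

```python
# Hypothetical finding shape; field names are illustrative, not Vigil's
# actual schema. It carries the four elements the guideline asks for.
finding = {
    "span": {"start": 12, "end": 26, "text": "everyone knows"},
    "label": "appeal-to-popularity",
    "severity": "medium",
    "rationale": "Claims consensus without evidence, inviting bandwagon bias.",
}

REQUIRED = {"span", "label", "severity", "rationale"}
assert REQUIRED <= finding.keys()
```

A detector that emits all four fields lets the sidepanel render the highlight, the severity chip, and the explanation card without any extra lookups.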
- Implement `BrowserPlugin` in `frontend/src/plugins`
- Register in `frontend/src/plugins/registry.ts`
- Add settings/UX wiring if needed
- Implement `AnalyzerPlugin` in `backend/plugins`
- Register via `@PluginRegistry.register_analyzer`
- Ensure it surfaces through `/plugins` and supports `/analyze` (and `/reformulate` if applicable)
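The server-side steps above can be sketched as follows. The names `AnalyzerPlugin` and `PluginRegistry.register_analyzer` come from this README, but their real definitions live in the backend package; the stand-in classes, the `analyze` method signature, and the toy detection logic here are all assumptions for illustration.

```python
# Stand-ins so the sketch is self-contained; in the real repo these
# would be imported from the backend package (assumed shapes).
class AnalyzerPlugin:
    name: str = "base"

class PluginRegistry:
    analyzers: dict[str, type] = {}

    @classmethod
    def register_analyzer(cls, plugin_cls):
        """Decorator that records the plugin class under its name."""
        cls.analyzers[plugin_cls.name] = plugin_cls
        return plugin_cls

@PluginRegistry.register_analyzer
class MoralizationAnalyzer(AnalyzerPlugin):
    """Toy analyzer; a real one would call an LLM via LiteLLM here."""
    name = "moralization-llm"

    def analyze(self, text: str) -> list[dict]:
        findings = []
        if "moral duty" in text.lower():
            findings.append({
                "label": "moralization",
                "severity": "medium",
                "rationale": "Frames the issue as a moral obligation.",
            })
        return findings

analyzer = PluginRegistry.analyzers["moralization-llm"]()
print(analyzer.analyze("It is our moral duty to act now."))
```

With registration handled by the decorator, the server can enumerate `PluginRegistry.analyzers` to build its plugin listing and route analysis requests to the right class.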
- Sidepanel cannot connect to backend:
  - check the backend is running on `http://localhost:8787`
  - click `Settings -> Plugin Server -> Connect`
- WebLLM model fails to load:
  - verify Chrome/WebGPU support
  - try the smaller `qwen` model first
- No highlights appear:
  - confirm at least one browser plugin is enabled
  - check the sensitivity level
  - inspect `Diagnostics` in the sidepanel
Active research/development project. APIs, plugin contracts, and UX may evolve quickly.
If you are planning production use, pin a release and validate behavior against your own dataset.
Funded by the European Union (ERC, VIGILIA, 101142229) and the Flanders AI Research program (FAIR). Views and opinions expressed are those of the authors only and do not necessarily reflect those of the EU or the ERCEA.
This project is licensed under the MIT License.
If you use Vigil in your research, please cite:
```bibtex
@misc{kang2026vigil,
  title  = {Vigil: An Extensible System for Real-Time Detection and Mitigation of Cognitive Bias Triggers},
  author = {Kang, Bo and Noels, Sander and De Bie, Tijl},
  year   = {2026}
}
```

See CITATION.cff for machine-readable metadata.

