Inspiration

Unity started from a simple accessibility problem: a lot of web content is hard to process if you have dyslexia, color-vision differences, attention fatigue, or motor/speech constraints. Most tools either simplify text without evidence that the result is faithful, or provide “chat answers” that are not actually tied to the page. We wanted an assistant that helps people understand content in a way that is readable, navigable, and trustworthy, with evidence you can jump to directly.

What it does

Unity turns any tab into an accessibility-focused reading workspace.

For dyslexia and reading load:

  • Simplifies selected text into plain language.
  • Summarizes dense passages.
  • Supports forced fonts including OpenDyslexic.
  • Adds Reader Mode to isolate article content and reduce clutter.
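As a rough sketch of how a forced-font setting like this can work, a content script can inject a page-wide stylesheet override. The helper names (`buildFontCss`, `applyForcedFont`) and the second font entry are illustrative assumptions, not Unity's actual code:

```typescript
// Illustrative font stacks; only OpenDyslexic is confirmed by the writeup.
const FONT_STACKS: Record<string, string> = {
  opendyslexic: '"OpenDyslexic", sans-serif',
  system: "system-ui, sans-serif",
};

function buildFontCss(fontKey: string): string {
  const stack = FONT_STACKS[fontKey];
  if (!stack) throw new Error(`unknown font: ${fontKey}`);
  // !important is needed to win over site styles; pre/code are skipped
  // so monospace alignment in code samples is preserved.
  return `body :not(pre):not(code) { font-family: ${stack} !important; }`;
}

// In a content script, the CSS would be injected via a <style> element:
function applyForcedFont(fontKey: string): void {
  let style = document.getElementById("unity-font-override") as HTMLStyleElement | null;
  if (!style) {
    style = document.createElement("style");
    style.id = "unity-font-override";
    document.head.appendChild(style);
  }
  style.textContent = buildFontCss(fontKey);
}
```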

For color-vision accessibility:

  • Provides a Color Blind Filter with Protanopia, Deuteranopia, Tritanopia, and Achromatopsia options, applying SVG color-matrix filters (color-vision simulation) page-wide in real time.
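A minimal sketch of the color-matrix approach: a hidden inline SVG defines an `feColorMatrix` filter that the page root then references via CSS `filter: url(#...)`. The matrix values below are the commonly published color-vision-simulation matrices; Unity's exact values and element ids may differ:

```typescript
// Widely circulated simulation matrices (rows: R', G', B', A' as 5-tuples).
const COLOR_MATRICES: Record<string, string> = {
  protanopia:    "0.567 0.433 0 0 0  0.558 0.442 0 0 0  0 0.242 0.758 0 0  0 0 0 1 0",
  deuteranopia:  "0.625 0.375 0 0 0  0.7 0.3 0 0 0  0 0.3 0.7 0 0  0 0 0 1 0",
  tritanopia:    "0.95 0.05 0 0 0  0 0.433 0.567 0 0  0 0.475 0.525 0 0  0 0 0 1 0",
  achromatopsia: "0.299 0.587 0.114 0 0  0.299 0.587 0.114 0 0  0.299 0.587 0.114 0 0  0 0 0 1 0",
};

function buildFilterSvg(mode: string): string {
  const values = COLOR_MATRICES[mode];
  if (!values) throw new Error(`unknown filter mode: ${mode}`);
  // Injected once into the page; the root element then gets
  // `filter: url(#unity-cvd-filter)` to recolor everything in real time.
  return `<svg xmlns="http://www.w3.org/2000/svg" style="display:none">
  <filter id="unity-cvd-filter">
    <feColorMatrix type="matrix" values="${values}"/>
  </filter>
</svg>`;
}
```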

For motion and sensory comfort:

  • Includes reduced-motion support to limit distracting animation behavior.
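Reduced-motion support of this kind typically comes down to an injected stylesheet that forces animations and transitions to finish almost instantly. This is a sketch under that assumption; the selector and durations are illustrative, not Unity's actual rules:

```typescript
// Near-zero durations (rather than `animation: none`) let UI that waits
// for animationend events still reach its end state.
function buildReducedMotionCss(): string {
  return `*, *::before, *::after {
  animation-duration: 0.001s !important;
  animation-iteration-count: 1 !important;
  transition-duration: 0.001s !important;
  scroll-behavior: auto !important;
}`;
}
```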

For navigation issues:

  • Every answer includes source chips that jump to exact supporting content.
  • On YouTube, source chips seek directly to transcript timestamps.
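The YouTube jump behavior can be sketched as parsing a transcript timestamp to seconds and setting the player's `currentTime`. `parseTimestamp` is a hypothetical helper; the real extension may receive seconds directly from the transcript data:

```typescript
// Accepts "SS", "M:SS", or "H:MM:SS" and returns total seconds.
function parseTimestamp(ts: string): number {
  const parts = ts.split(":").map(Number);
  if (parts.some(Number.isNaN)) throw new Error(`bad timestamp: ${ts}`);
  // Each step shifts the running total up one time unit (x60) and adds the part.
  return parts.reduce((total, part) => total * 60 + part, 0);
}

// In a content script on youtube.com, seeking is just assigning currentTime
// on the page's <video> element.
function seekTo(ts: string): void {
  const video = document.querySelector<HTMLVideoElement>("video");
  if (video) video.currentTime = parseTimestamp(ts);
}
```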

For motor/speech and input barriers:

  • Voice dictation for question input.
  • Audio read-aloud of selected text with follow-along highlighting.
  • Autofill profile + field detection to reduce repetitive form entry.
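Follow-along highlighting during read-aloud can be driven by the Web Speech API's word-boundary events, which report a character index into the spoken text. Mapping that index back to a word is the testable core; `wordIndexAt` and `readAloud` are illustrative names, not Unity's actual API:

```typescript
// Count word starts (non-space preceded by start-of-text or a space)
// at or before charIndex to find which word is being spoken.
function wordIndexAt(text: string, charIndex: number): number {
  let index = -1;
  for (let i = 0; i <= Math.min(charIndex, text.length - 1); i++) {
    const isWordStart = text[i] !== " " && (i === 0 || text[i - 1] === " ");
    if (isWordStart) index++;
  }
  return index;
}

// Browser side: speak the text and call `highlight` as each word begins,
// so the UI can move a highlight span along with the audio.
function readAloud(text: string, highlight: (wordIndex: number) => void): void {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.onboundary = (e) => {
    if (e.name === "word") highlight(wordIndexAt(text, e.charIndex));
  };
  speechSynthesis.speak(utterance);
}
```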

How we built it

Unity is a Chrome extension built with TypeScript, React, and WXT. The background service handles scanning, grounded chat, and per-tab session state. Content scripts handle on-page accessibility behaviors like Reader Mode, selection simplify/summarize actions, audio follow mode, and in-page YouTube controls. The popup provides unified controls for chat, reader, audio, profile, and autofill.
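The popup ↔ background split above implies a small message protocol and per-tab state. As a sketch, keeping the session logic as a pure reducer makes it testable outside the browser; the message names and session fields here are assumptions for illustration, not Unity's real protocol:

```typescript
type UnityMessage =
  | { kind: "simplify"; text: string }
  | { kind: "ask"; question: string }
  | { kind: "setFilter"; mode: "protanopia" | "deuteranopia" | "tritanopia" | "achromatopsia" | "off" };

interface TabSession {
  filterMode: string;
  pendingQuestion: string | null;
}

// The background service worker would call this from a
// chrome.runtime.onMessage listener and keep the result keyed by tab id.
function reduceSession(session: TabSession, msg: UnityMessage): TabSession {
  switch (msg.kind) {
    case "setFilter":
      return { ...session, filterMode: msg.mode };
    case "ask":
      return { ...session, pendingQuestion: msg.question };
    case "simplify":
      return session; // handled by the content script; no session change
  }
}
```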

For trust and accessibility together, Unity extracts context from the current tab (article text or transcript), ranks evidence snippets, and generates answers constrained to that evidence. If evidence is weak, it avoids confident guessing.
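The evidence-ranking and "avoid confident guessing" step can be sketched as scoring snippets by term overlap with the question and refusing to answer below a threshold. The scoring and threshold here are illustrative stand-ins for Unity's actual ranking:

```typescript
// Lowercase, split on non-alphanumerics, drop very short tokens.
function tokenize(s: string): string[] {
  return s.toLowerCase().split(/[^a-z0-9]+/).filter((t) => t.length > 2);
}

// Fraction of distinct question terms that appear in the snippet.
function scoreSnippet(question: string, snippet: string): number {
  const q = new Set(tokenize(question));
  if (q.size === 0) return 0;
  const hits = tokenize(snippet).filter((t) => q.has(t));
  return new Set(hits).size / q.size;
}

// Empty result = weak evidence: the caller should decline to answer
// rather than generate an unsupported response.
function rankEvidence(question: string, snippets: string[], minScore = 0.3) {
  const ranked = snippets
    .map((text, i) => ({ text, i, score: scoreSnippet(question, text) }))
    .sort((a, b) => b.score - a.score);
  return ranked[0] && ranked[0].score >= minScore ? ranked : [];
}
```

The kept index `i` is what lets an answer's source chips point back at the exact snippet on the page.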

Challenges we ran into

Accessibility features fail quickly if they are inconsistent across sites, so reliability was hard. Real pages are noisy and highly variable, which made article extraction and keyboard/mouse focus behavior difficult to stabilize. We also had to design color-blind and low-distraction modes that remain usable without relying on color-only feedback. On YouTube, transcript availability and timing consistency created extra complexity for navigation and source jumps.

Accomplishments that we’re proud of

We built Unity as more than a chatbot: it is an accessibility system centered on comprehension and navigation. We are proud of combining dyslexia-friendly reading controls, color-blind-aware UI cues, motion reduction, source-grounded answers, and jump-to-evidence behavior in one workflow. We are also proud of safety details like guarded autofill behavior and undo support, plus end-to-end tests for core accessibility paths.

What we learned

Accessibility is not one feature; it is a stack. Font/readability support, contrast cues, motion control, navigation affordances, and trustworthy grounding all need to work together. We also learned that “helpful AI” is only useful when users can verify it quickly, especially for people already dealing with cognitive or navigation friction.

What’s next for Unity

  • Improve color-blind presets with user-tunable contrast and pattern cues.
  • Strengthen keyboard-first navigation and screen-reader semantics across popup and in-page UI.

Built With

  • openrouter
  • react
  • shadcn
  • tailwindcss
  • vite
  • wxt