Inspiration

Seeing people build products that support people with dyslexia through specialized fonts, or colour palettes designed for colour blindness, we decided to combine these ideas and make a Chrome extension that works on Canvas LMS.

What it does

It is a Chrome extension that uses the user's API token to access Canvas LMS and find the PDF documents available on the page. By clicking the modes ADHD, Colour Blindness, Dyslexia, and High Contrast, the user can choose options (or even mix and match them) to generate a reformatted document, opened in a new tab, that uses research-proven techniques to help them comprehend the original PDF.
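The PDF discovery step can be sketched roughly as below. This is a minimal illustration, not our exact code: it assumes the standard Canvas REST files endpoint and a user-supplied domain, course ID, and API token.

```typescript
// A Canvas file entry, reduced to the fields this sketch needs.
interface CanvasFile {
  id: number;
  display_name: string;
  url: string;
  "content-type": string;
}

// Pure helper: keep only the PDF documents from a Canvas file listing.
function pdfsOnly(files: CanvasFile[]): CanvasFile[] {
  return files.filter((f) => f["content-type"] === "application/pdf");
}

// Canvas exposes course files at /api/v1/courses/:id/files; authenticate
// with the user's token as a Bearer header.
async function listCoursePdfs(
  domain: string,
  courseId: number,
  token: string,
): Promise<CanvasFile[]> {
  const res = await fetch(
    `https://${domain}/api/v1/courses/${courseId}/files`,
    { headers: { Authorization: `Bearer ${token}` } },
  );
  if (!res.ok) throw new Error(`Canvas API error: ${res.status}`);
  return pdfsOnly((await res.json()) as CanvasFile[]);
}
```

Filtering by `content-type` on the client keeps the extension from trying to reformat non-PDF attachments such as Word documents or images.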

How we built it

We developed a Manifest V3 Chrome extension, using React for the frontend and TypeScript for the backend in a frontend-backend design. The TypeScript backend fetches Canvas PDF documents through the user's Canvas API token and passes them to the Gemini API, which converts each static PDF into more accessible HTML. We wrote our own customized prompt and sent it to Gemini via the GoogleGenerativeAI library to process the original PDF. The processed documents are then displayed in a separate tab, with math formulas rendered by the KaTeX library. Finally, we used Vite to compile the TypeScript source into a Chrome extension.

Challenges we ran into

One of the biggest issues we ran into was the backend. Choosing TypeScript was one of the more challenging decisions; JavaScript or Python would have made the backend work easier. Security was also challenging, as we were juggling external APIs and had to find a way to make the requests work from localhost.
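Part of the fix lay in the extension's permission model: a Manifest V3 extension may make cross-origin requests to hosts it declares up front, sidestepping the CORS errors a normal page would hit. A sketch of the relevant manifest fields (the host patterns are illustrative, assuming a Canvas instance on `instructure.com` and the public Gemini endpoint):

```json
{
  "manifest_version": 3,
  "name": "Canvas Accessibility",
  "version": "1.0",
  "permissions": ["activeTab", "storage"],
  "host_permissions": [
    "https://*.instructure.com/*",
    "https://generativelanguage.googleapis.com/*"
  ]
}
```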

Accomplishments that we're proud of

  • Built a working build that complies with all Chrome extension rules and compiles with no TypeScript errors or warnings.
  • Used prompt engineering to make user-side customization possible, in line with each user's needs.
  • Considered safety in the LLM response: purified and formatted the response into pure HTML, and applied CSS styles in a systematic way.
  • Built an accessible panel in which every button is reachable with the Tab key and labelled with text, so blind users can hear each button's label when they focus it and activate it with a single press.
  • Utilized AI enhancement to provide the most accessible experience possible for users.
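The "purify to pure HTML" step above can be sketched as follows. This is a simplified stand-in for our real pipeline: LLM replies often wrap HTML in a markdown fence and may contain unsafe tags, so we strip both before display.

```typescript
// Remove a leading ```html fence and a trailing ``` fence, if present.
function stripMarkdownFence(reply: string): string {
  return reply
    .replace(/^\s*```(?:html)?\s*\n?/i, "")
    .replace(/\n?```\s*$/, "")
    .trim();
}

// Naive removal of <script> blocks; a production build should rely on a
// real sanitizer such as DOMPurify rather than regexes.
function dropScriptTags(html: string): string {
  return html.replace(/<script[\s\S]*?<\/script>/gi, "");
}

// Full purification pass: unfence, then sanitize.
function toPureHtml(reply: string): string {
  return dropScriptTags(stripMarkdownFence(reply));
}
```

Only after this pass does the HTML reach the new tab, so the displayed document contains markup we control rather than raw model output.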

What we learned

We learned how to use TypeScript more effectively and how to work with the Canvas API. The process was hard as a whole, since this was one of our first few hackathons, and keeping the files organized and properly connected took a lot of time.

What's next for Canvas Accessibility

  • Upload to the Chrome Web Store: so that users can install it directly from the official store with little searching: easy, fast, and safe!
  • More focus points: utilize other libraries such as Electron to gain control of the whole screen and place notes on the HTML page, making it more ADHD-friendly; those notes, or even floating, moving highlights, will stand out and attract attention.
  • Provide a customization window: allow users to write their own prompts, add their own requests, and expose a uniform API, so they can form a community and build genuinely helpful variants based on their daily usage.
  • Faster & safer results: develop an LLM backend server to run a fine-tuned local LLM model on the user's device, or rent a server to accelerate requests privately, so no data is handed to any third party.
  • Higher security: use Canvas' official authentication flow to obtain the user's Canvas API token.
