Inspiration
Browsing the web is something most people take for granted, but for millions of users it presents real barriers. People with dyslexia struggle with dense, poorly formatted text. Those with low vision or age-related sight loss find small fonts, low contrast, and cluttered layouts exhausting to navigate. Elderly users in particular are among the fastest-growing demographics online, yet most websites are not designed with their needs in mind.
Roughly 1 in 5 people has dyslexia, and over 2.2 billion people worldwide live with some form of visual impairment. As populations age, these numbers will only grow — yet the default web experience remains largely unchanged.
Our extension addresses this gap directly. Rather than waiting for websites to become accessible, we bring accessibility to the user — letting them customize font, spacing, contrast, cursor size, and reading difficulty on any page they visit. AI-powered features like image captioning, page summarization, and difficulty adjustment make even complex or image-heavy pages approachable for users who would otherwise be excluded.
Instead of having to suffer through pages of text they struggle to comprehend, users can simply open the extensions toolbar, select Accessify, and customize their web browsing experience in a single click.
What it does
Our project is a Chrome extension that makes the web accessible for everyone — especially elderly users, people with dyslexia, and those with low vision. The user has a variety of controls to choose from: adjustable fonts, spacing, contrast, cursor size, reading difficulty sensing, and more. We also integrated the latest AI technology, all in one place: page summaries with adaptive reading comprehension difficulty, image captions, and text-to-speech.
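To give a sense of how per-page customization like this can work, here is a simplified sketch of a content script that turns user settings into CSS overrides. The function names, setting fields, and default values are illustrative assumptions, not the extension's actual code.

```javascript
// Hypothetical sketch: build a CSS string from the user's accessibility
// settings. Field names (fontScale, letterSpacing, etc.) are assumptions.
function buildAccessibilityCss(settings) {
  const rules = [];
  if (settings.fontScale) {
    // Scale the root font size so relative units grow with it.
    rules.push(`html { font-size: ${settings.fontScale * 100}% !important; }`);
  }
  if (settings.letterSpacing) {
    rules.push(`body * { letter-spacing: ${settings.letterSpacing}em !important; }`);
  }
  if (settings.lineHeight) {
    rules.push(`body * { line-height: ${settings.lineHeight} !important; }`);
  }
  if (settings.highContrast) {
    rules.push(`body { background: #000 !important; color: #fff !important; }`);
  }
  return rules.join("\n");
}

// Inject (or replace) a <style> tag so the overrides apply on any page.
function applySettings(settings) {
  let style = document.getElementById("accessify-overrides");
  if (!style) {
    style = document.createElement("style");
    style.id = "accessify-overrides";
    document.head.appendChild(style);
  }
  style.textContent = buildAccessibilityCss(settings);
}
```

Using `!important` in the injected rules lets the overrides win against most site stylesheets without modifying the page's own CSS files.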
How we built it
We used Opennote for the brainstorming and planning stage of our project: summarizing notes, organizing the steps of our process, and shaping our collaborative work into a polished final document. We then built our frontend in ReactJS, with core logic in JavaScript and CSS for a clean, responsive user interface. For our AI features, we integrated Featherless.ai's Gemma model through Hugging Face to dynamically simplify and generate text at multiple difficulty levels, giving users nuanced control over text complexity. We also integrated the ElevenLabs API for high-quality speech output; it reads only highlighted paragraphs, giving users finer control over what they hear and ensuring audio accessibility.
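A rough sketch of the difficulty-level simplification flow is below. The prompt wording, level names, and model ID are assumptions for illustration — the request shape follows the Hugging Face Inference API's generic text-generation format, not our exact integration.

```javascript
// Hypothetical difficulty presets; the names and wording are illustrative.
const LEVELS = {
  easy: "short sentences and everyday vocabulary",
  medium: "clear sentences with moderate vocabulary",
  advanced: "the original level of detail, lightly clarified",
};

// Build the instruction sent to the language model, falling back to
// "medium" for unrecognized levels.
function buildSimplifyPrompt(text, level) {
  const style = LEVELS[level] || LEVELS.medium;
  return (
    `Rewrite the following passage using ${style}. ` +
    `Keep the meaning unchanged.\n\n${text}`
  );
}

// Assumed model ID and endpoint; the {"inputs": ...} body and
// [{generated_text}] response follow the HF Inference API convention.
async function simplify(text, level, apiKey) {
  const res = await fetch(
    "https://api-inference.huggingface.co/models/google/gemma-2-2b-it",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs: buildSimplifyPrompt(text, level) }),
    }
  );
  const data = await res.json();
  return data[0]?.generated_text;
}
```

Keeping the prompt builder separate from the network call makes the difficulty logic easy to test and tweak without touching the API integration.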
Challenges we ran into
A major challenge we faced was understanding how to design user interfaces for people with disabilities. Our team spent a lot of time gathering the features our extension needed and figuring out how to present them in a user-friendly way. We learned about tools like bionic reading and dyslexia-friendly fonts, as well as the importance of line and letter spacing. It was tough to create features that would benefit multiple demographics, but we emerged as better developers after building the product. We also had to work out many small bugs — debugging took up much of the development process! We faced issues with our API calls and with keeping formatting consistent across many different webpages. We resolved these issues, but had to manage our time efficiently in order to deliver a finished product.
Accomplishments that we're proud of
We are very proud that we were able to successfully integrate multiple AI-powered features into our product. We organized everything on Opennote and created multimodal features: visual and textual aids through Featherless.ai and sound-based aids through ElevenLabs. These features were crucial to the success of our project, and we are very grateful that BISV gave us this opportunity. We set out to address some of the difficulties that almost 25% of the population faces. There is a huge population of people for whom the web browsing experience is not optimized, and we aimed to bridge that gap.
What we learned
We learned about text-to-speech implementation, as well as image captioning. These are cutting-edge, exciting technologies, and we hope to expand upon our work in future projects. We also learned how to collaborate effectively: one of our team members had to work remotely, so we divided up tasks and reviewed each other's code efficiently to keep the development process as streamlined as possible.
What's next for Accessify
We hope to publish Accessify on the Chrome Web Store and release it as an extension anyone can download. We would also like to create a mobile version of Accessify — accessibility should not be limited to the web, and should be available on all devices. Beyond that, we'd like to add more features and reach even more people.
Built With
- ai
- chrome
- css
- elevenlabs
- featherless
- html
- huggingface
- javascript
- react
- texttospeech