Inspiration
https://www.eyelexa.com/ ^This is an older build, but our demo is actually working 😦
Improvements in technology have decreased the need for the visually impaired to learn braille, thanks to the introduction of ebooks alongside text-to-speech (TTS) technology. However, only a handful of books have an ebook counterpart, and only a handful of those have TTS support. A study conducted in the United Kingdom and Ireland found that “only around 10% of current academic titles are available as ebooks”. This is only made worse by the fact that many ebook publishers do not include a TTS function. In fact, some ebook publishers have made it impossible for TTS browser extensions to read the text on the screen, making certain books impossible for the visually impaired to access.
The theme of HackCamp revolves around inclusivity, diversity, and accessibility. We saw that individuals with visual impairments lack accessible ways to attain what we as students consider essential: books. Thus, we created EyeLexa, a combination of the English word “eye” and the Greek root “lex”, meaning “speaking” or “speech”. EyeLexa is a solution we developed to tackle the accessibility issues that many visually impaired people come across.
What it does
EyeLexa was designed to help the visually impaired read print books when their only other option is to find someone to read the book aloud. Suppose a visually impaired reader only has access to a physical copy of a book, possibly because the publisher never released an ebook version, or because their local library carries the physical copy but not the ebook due to expensive subscription prices. With EyeLexa, anyone can read physical books easily, with only a computer and a webcam, or a phone.
After opening our website, all the user has to do is place their reading material in the correct position and use their keyboard. Keeping accessibility in mind, we made every control on the website operable from the keyboard without the need for visual input, while still displaying optional visuals for those who are not visually impaired.
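The keyboard-only controls can be sketched as a small mapping from keys to actions. The specific bindings and action names below are illustrative assumptions, not EyeLexa's actual bindings:

```javascript
// Hypothetical key bindings for a keyboard-only interface like EyeLexa's.
// These key-to-action pairs are assumptions for illustration only.
const KEY_ACTIONS = {
  " ": "capture",   // snapshot the page with the webcam
  "Enter": "read",  // send the snapshot for OCR and read the result aloud
  "Escape": "stop", // stop text-to-speech playback
  "r": "repeat",    // re-read the last recognized text
};

// Resolve a keyboard event's key to an action name, or null if unbound.
function actionForKey(key) {
  return KEY_ACTIONS[key] ?? null;
}

// In the browser this would be wired up roughly as:
// document.addEventListener("keydown", (e) => {
//   const action = actionForKey(e.key);
//   if (action) { e.preventDefault(); dispatch(action); }
// });
```

Keeping the mapping in one table makes it easy to announce the available keys through a screen reader or audio prompt, since the bindings are data rather than scattered event handlers.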
How we built it
When the user takes a picture in the browser, the frontend (built with HTML, CSS, and React.js) sends the image to our backend over an HTTPS request. The backend, built with Express and Node.js, passes the image to the Google Cloud Vision API to extract the text, then returns the recognized text to the frontend, where it is dictated back to the user.
Challenges we ran into
- Connecting the front end and back end
- Figuring out the best way to use the camera screenshot and keyboard together
Accomplishments that we're proud of
- Ease of use for the user
- Speed and performance of the program
- Use of React.js in the front-end design
- Full-stack application!
What we learned
- How to send HTTP requests, change parameters, etc.
- React, Google API
- Troubleshooting and Debugging (a lot....)
What's next for EyeLexa?
- Mobile Interactivity (use your phone, mobile app)
- Database integration (save pages to go back to later)