Snap it, Read it, Live it!
Inspiration
Our team's inspiration for this project comes from our shared passion for accessibility. We believe that technology has the power to level the playing field for individuals with disabilities, and we wanted to use our skills to create something that could make a real difference. We recognized that dyslexic individuals often struggle with reading text from images, which can be a significant barrier to learning and information access. By leveraging the power of Flutter, we aim to design a beautiful and intuitive mobile app that can help dyslexic individuals read text from images more easily. In doing so, we hope to promote greater accessibility and inclusivity for all. This led to the birth of Speechify - an app that can make a real difference in the lives of dyslexic individuals.
What it does
Are you tired of struggling to read text from images? Do you have dyslexia or are visually impaired and find it challenging to access information? If so, we have the perfect solution for you - our revolutionary new app that reads text from images!
Our app is specifically designed to cater to the needs of people with dyslexia and visual impairments. It uses advanced technology to instantly recognize text from images and convert it into an easy-to-read format.
But that's not all! Our app also has a text-to-speech feature, which reads the text aloud for visually impaired individuals. It's incredibly user-friendly and can be used to read all kinds of text, from books to handwritten notes.
So, whether you're dyslexic, visually impaired, or simply want a more accessible way to read text from images, our app has got you covered. Say goodbye to the frustration of struggling to read and hello to a new world of convenience and accessibility with our app!
How we built it
The development process of Speechify involved a variety of technologies and techniques. To create the frontend of the app, we used the Flutter framework, which allowed us to build a visually appealing and responsive UI. We implemented the backend in Python: it processes the image data with OpenCV and PyTesseract and returns the extracted text to the frontend in response to an HTTP request.
We linked the two components in Dart, enabling smooth communication between the frontend and backend. We also incorporated Flutter's text-to-speech capabilities, giving users the option to listen to the extracted text instead of reading it.
Each component of the app was styled with careful attention to both functionality and aesthetics. The user interface was designed to be intuitive and easy to navigate, with a focus on accessibility and inclusivity. Text extraction relies on Tesseract's trained OCR models, accessed through PyTesseract, and we applied pre-processing with OpenCV to make images easier for the OCR engine to read. We managed the codebase on GitHub.
Challenges we ran into
As with any project, our team faced a few challenges while developing our app that reads text from images. However, we were determined to overcome these obstacles and create a product that would truly make a difference in the lives of people with dyslexia and visual impairments.
One of the biggest challenges we faced was getting the text-to-speech feature to work correctly. It was essential that the app read the text accurately and in a way that was easy for visually impaired individuals to understand, but language has many nuances that are difficult for machines to interpret and vocalize.
Accurate recognition was just as important: the app had to recognize and convert text from a wide range of sources, including books, signs, and even handwritten notes. We spent countless hours fine-tuning the pipeline to ensure it could extract text reliably under a variety of lighting and background conditions.
Despite these challenges, our team was able to overcome them through hard work, dedication, and collaboration. We are proud to have developed an app that truly makes a difference in the lives of people with dyslexia and visual impairments, and we look forward to continuing to improve and expand its capabilities in the future.
Accomplishments that we're proud of
One of the accomplishments that we are particularly proud of is how well we worked together as a team during the development process of Speechify. Despite the tight deadline of the hackathon, we were able to collaborate effectively, share ideas, and divide tasks efficiently to maximize our productivity.
Another accomplishment that we are proud of is our ability to learn and implement new technologies quickly. In less than two days, we familiarized ourselves with Flutter, Dart, OpenCV, PyTesseract, and HTTP requests. We integrated these technologies seamlessly into the app and created a fully functional prototype that met our goals and exceeded our expectations.
What we learned
The hackathon gave us the opportunity to learn Flutter for the mobile frontend and Flask for the backend, to work efficiently under pressure, and to troubleshoot technical difficulties with Xcode. We learned the importance of teamwork, persistence, and creativity when facing challenges, and the experience sharpened our technical skills, particularly in OCR and text-to-speech. Overall, it was a valuable learning experience that we can apply to future projects.
What's next for Speechify
The next steps for Speechify are to focus on increasing accessibility for visually impaired and blind users. This will involve adding more auditory cues and voice commands to improve the user experience. Additionally, the app will use more powerful OCR and text-to-speech algorithms to enhance accuracy and speed. These improvements will enable Speechify to better serve its target market and expand its reach to more users.