Inspiration
Like Mrs. Gray said, true accessibility is about giving everyone the same opportunities. Through this project, we hope to enable even a small amount of exploration of our beautiful world for the visually impaired. In many parts of the world, guide dogs are a luxury that only a few people have access to. Although this program cannot provide all of the benefits of a guide dog, we hope it will improve the accessibility of our world.
What it does
Using depth information, the program warns the user about upcoming obstacles such as pillars, walls, and even stairs.
How we built it
Using the MiDaS depth estimation model, we gather real-time depth information from a monocular camera. We then use linear algebra to condense the depth map into a digital summary of the obstacles around the user, and finally use text-to-speech to deliver an audio warning.
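As a rough sketch of the summary step (the write-up doesn't give the exact math, so the threshold and left/center/right layout below are assumptions), the depth map can be split into regions whose mean value flags a nearby obstacle:

```python
import numpy as np

def summarize_obstacles(depth, threshold=0.6):
    """Split a relative depth map into left/center/right thirds and
    flag regions whose mean value suggests a nearby obstacle.
    MiDaS predicts *inverse* relative depth, so larger values mean
    closer surfaces; the threshold here is a hypothetical value."""
    h, w = depth.shape
    warnings = []
    for i, label in enumerate(["left", "center", "right"]):
        region = depth[:, i * w // 3:(i + 1) * w // 3]
        if region.mean() > threshold:
            warnings.append(label)
    return warnings

# The resulting list could then be spoken with a text-to-speech
# library such as pyttsx3, e.g. engine.say("obstacle on the left").
```

In the real pipeline, `depth` would come from running MiDaS on each camera frame rather than a plain array.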
Challenges we ran into
Getting stair detection to work reliably was very difficult, but by following the engineering design process we repeatedly tested and refined our algorithm until it reached sufficient accuracy.
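The submission doesn't describe the stair-detection algorithm itself, but one plausible heuristic of the kind that gets iterated on this way is to look for repeated depth discontinuities along a vertical slice of the depth map, since stairs produce a regular step pattern (the jump size and minimum step count below are made-up tuning parameters):

```python
import numpy as np

def looks_like_stairs(column, jump=0.1, min_steps=3):
    """Hypothetical heuristic: stairs show up as several depth
    discontinuities along a vertical slice of the depth map.
    `column` is a 1-D array of depth values, bottom to top."""
    jumps = np.abs(np.diff(column)) > jump
    return int(jumps.sum()) >= min_steps
```

Parameters like these are exactly what repeated test-and-modify cycles would tune against recorded footage.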
Accomplishments that we're proud of
We are proud of this program's success rate. As the video shows, feedback is delivered in real time, with about 0.2 s of processing time per frame, which is fast enough for a person walking at a slow pace. We are satisfied with this because we have not yet optimized the program.
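A simple harness like the following (a sketch; `process` stands in for the full depth-estimation-plus-warning pipeline, which is not shown here) is enough to verify a per-frame figure like the one quoted above:

```python
import time

def time_per_frame(process, frames):
    """Average wall-clock processing time per frame, the metric
    behind the ~0.2 s figure. `process` is any per-frame callable."""
    start = time.perf_counter()
    for frame in frames:
        process(frame)
    return (time.perf_counter() - start) / len(frames)
```

At roughly 0.2 s per frame the pipeline runs near 5 FPS, which matches the claim that it keeps up with a slow walking pace.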
What we learned
None of us were very familiar with computer vision or machine learning, so we learned a great deal from this project. Our ambition paid off.
What's next for AuraVision
Our ultimate goal is to pair this software with a wearable hardware device, which is why we selected the MiDaS variant built for embedded devices with less computing power. We hope to eventually create a final product that is usable and helpful.