Inspiration💡

Mobility is a major challenge for blind people, who face many difficulties in their daily lives. Most commonly, a cane is used by blind👩‍🦯👨‍🦯 and elderly🧓👴 people to support their bodies while standing and walking🚶. The main disadvantage of a walking stick is that the user must already be close to an obstacle to sense its location by waggling the cane, often too late to avoid bumping into it. This is what inspired us to tackle the problem.

What it does🚩

We made a device that guides visually impaired people. The device uses GPS so that family members can keep track of the user's location, and directions are given to the user through a speaker. This lets visually impaired people go wherever they want without anyone's help.

How we built it🧱⚙️

We used Fritzing to make the circuit diagram and Fusion360 to make the CAD model, both of which are attached in the photos section. In VISION, the ESP32-CAM 📹 first captures images and serves them over its web server. Our laptop connects to this ESP32-CAM server, fetches the image data, and processes it with the Haar cascade algorithm using OpenCV. The laptop then passes the detection output to the ESP32. After receiving the output, the ESP32 starts scanning for obstacles with the HC-SR04 ultrasonic sensor, compares the readings against the detection output, and plays the corresponding audio cue through the speaker. In addition, a GPS module sends the user's location to their family through the Blynk app.
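As a sketch of the obstacle-ranging step above: the HC-SR04 reports an echo pulse whose duration is proportional to distance, since sound travels at roughly 0.0343 cm/µs and the pulse covers the round trip to the obstacle and back. Our firmware runs on the ESP32, but the conversion can be illustrated in a few lines of Python (the 100 cm alert threshold here is illustrative, not taken from our actual code):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # speed of sound in air at room temperature

def echo_to_distance_cm(echo_duration_us: float) -> float:
    """Convert an HC-SR04 echo pulse width (microseconds) to distance in cm.

    The echo pulse covers the round trip (out and back), so halve the result.
    """
    return echo_duration_us * SPEED_OF_SOUND_CM_PER_US / 2

def obstacle_alert(echo_duration_us: float, threshold_cm: float = 100.0) -> bool:
    """Return True when an obstacle is closer than the alert threshold."""
    return echo_to_distance_cm(echo_duration_us) < threshold_cm

# An echo pulse of about 5831 µs corresponds to an obstacle ~100 cm away.
print(round(echo_to_distance_cm(5831), 1))
```

On the device this comparison, combined with the camera's detection result, decides which audio cue to play.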

Challenges we ran into🧗

The first challenge was programming the sensors, which led to many errors. The second was designing the CAD model in Fusion360, since we didn't have much experience with the software or how to use it effectively. The third was making the circuit diagram in Fritzing, since we had a lot of trouble with part packages. The last and hardest challenge was integrating the individual parts in the correct order.

Accomplishments that we are proud of🏆

We learned how to use GitHub Pages to host our site; we made commits and pull requests on GitHub while collaborating with the rest of our team, and we deployed our website with GitHub Pages. We are proud that we all applied for GitHub Global Campus. We learned how to use Co:here NLP technology (generate, embed, classify), and we even generated a story about a blind man using our cane/device with Co:here. We also made a greeting to wish MLH a happy birthday. We used Diamond App and Supernovas (DeSo network), created a few NFTs, and even turned the Co:here-generated text into an NFT, which you can check out in the try-out links. We learned how to use Fusion360 and Fritzing for designing CAD models and circuit diagrams, learned how to use the ESP32 and camera along with sensors, and got to explore OpenCV libraries. Most of all, we are proud to have built a device that helps blind people overcome the difficulties they face daily.

What we learned🏫

We learned to program a series of microcontrollers using the Arduino IDE, build IoT features with Blynk, and design a CAD model in Fusion360. We also learned how to prompt Co:here to give good results and ease our work, and how to make GitHub pull requests.

What is next for VISION-THE GUIDE🔮

In the future, we can use artificial intelligence to help the blind person even further, for example by reading books aloud and assisting with other external activities. We are planning to use Twilio to make communication between the system and the user easier, and we are also thinking of using AI to predict the user's needs and act accordingly, so that our device covers a wider range of the problems visually impaired people face.

Fun activity we did😹

We added ASCII art to our code, check it out 😗 Don't forget to check out our website!
