Inspiration
I can't go hiking with my grandparents anymore. In the last few years, it's become too difficult for them to handle steep stairs or uneven terrain without preparing in advance. And without really knowing the conditions on a trail, we can't go.
It's even worse for people with limited sight or hearing. Limited sight makes it hard to stay on a trail and avoid obstacles without help. And the world has been getting increasingly dangerous for those with limited hearing due to the advent of large, fast-moving, and mostly silent autonomous vehicles.
The Seeing Eye Vest is our suite of low-cost, open-source hardware and software devices made to help people with seeing, hearing, and motion disabilities navigate the outdoors more easily.
How we built it
We first created the vest itself, a subtle undershirt with an integrated grid of forty-two vibration motors. Efficient electronics enable individual control of each motor. The 3D world around the vest is mapped onto the grid — the closer an obstacle gets, the stronger the vest vibrates in a given direction. This gives our user a type of extrasensory haptic "sight" that's independent of sight or hearing.
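The core of the mapping can be sketched as a small function: an obstacle's direction picks a cell in the motor grid, and its distance sets the vibration strength. The 6×7 layout, the angle ranges, and the 5 m cutoff below are illustrative assumptions, not the vest's actual calibration.

```python
GRID_ROWS, GRID_COLS = 6, 7   # hypothetical 6x7 arrangement of the 42 motors
MAX_RANGE_M = 5.0             # assumed cutoff: obstacles beyond this are ignored

def obstacle_to_haptics(azimuth_deg, elevation_deg, distance_m):
    """Map an obstacle's direction and range to a motor cell and intensity.

    azimuth_deg:   -90 (full left) .. +90 (full right)
    elevation_deg: -45 (low) .. +45 (high)
    Returns (row, col, intensity) with intensity in 0.0..1.0,
    growing stronger as the obstacle gets closer.
    """
    col = round((azimuth_deg + 90) / 180 * (GRID_COLS - 1))
    row = round((elevation_deg + 45) / 90 * (GRID_ROWS - 1))
    # Clamp to the grid in case the input is slightly out of range.
    col = min(max(col, 0), GRID_COLS - 1)
    row = min(max(row, 0), GRID_ROWS - 1)
    intensity = max(0.0, 1.0 - distance_m / MAX_RANGE_M)
    return row, col, intensity
```

For example, an obstacle dead ahead at 2.5 m lands in the middle of the grid at half strength, while anything past the cutoff produces no vibration at all.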
We also created Flysight — a drone capable of semi-autonomous flight and mapping of an environment. With binocular cameras onboard, you can send Flysight ahead to check out a trail before you go. And its integrated mapping capabilities mean the Seeing Eye Vest itself can localize to a known environment and work even better. Soon, we might even be able to expand the framework with automatic pathfinding to avoid steep hills or hazardous terrain.
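Binocular depth sensing like Flysight's comes down to one relationship: depth is focal length times baseline divided by disparity, Z = fB/d. The focal length and camera spacing below are placeholder values, not Flysight's real calibration.

```python
def stereo_depth_m(disparity_px, focal_px=800.0, baseline_m=0.0625):
    """Depth from binocular disparity: Z = f * B / d.

    focal_px (pixels) and baseline_m (meters between the two cameras)
    are illustrative defaults, not our drone's actual calibration.
    Returns None for zero disparity (point at infinity / no match).
    """
    if disparity_px <= 0:
        return None
    return focal_px * baseline_m / disparity_px
```

The takeaway is that nearby obstacles produce large disparities between the left and right images, which is why stereo rigs are most accurate at short range.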
Challenges we ran into
On the hardware side, controlling forty-two separate motors was extremely difficult. We designed two custom motor controller boards with over twenty hand-soldered transistor circuits on each. Then we spent multiple hours creating a wiring harness for the vest itself. We also had to develop a novel way of creating pockets with a vinyl heat press to mount each individual vibration motor.
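Each of those transistor circuits is a low-side switch, and sizing its base resistor is a quick calculation: pick enough base current to keep the transistor saturated under the motor's load. The part values below (3.3 V GPIO, ~75 mA coin motor, 10x overdrive) are illustrative, not the exact components on our boards.

```python
def base_resistor_ohms(v_gpio=3.3, v_be=0.7, i_motor_a=0.075,
                       h_fe=100.0, overdrive=10.0):
    """Size the base resistor for one NPN low-side motor driver.

    Assumed example values: a 3.3 V GPIO pin, a ~75 mA coin vibration
    motor, transistor gain h_FE of 100, and a 10x base overdrive to
    hold the transistor hard in saturation.
    """
    i_base = i_motor_a / h_fe * overdrive   # required base current
    return (v_gpio - v_be) / i_base
```

With these numbers the result is roughly 347 Ω, so a standard 330 Ω resistor would be a reasonable pick.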
On the software side, we had a huge amount of trouble integrating cameras, binocular vision, and SLAM (simultaneous localization and mapping). We started out with the ORB-SLAM3 pipeline, which is near state-of-the-art for low-cost embedded systems like ours. However, we ran into compilation issues and computational limits on our Raspberry Pis, so we ended up running ORB-SLAM on our desktop and streaming image data back and forth. The added latency meant we also needed local depth cameras, like smartphone cameras, to keep the vest responsive while the environment was being mapped.
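Streaming camera frames over TCP requires re-framing the byte stream on the receiving end, since TCP delivers bytes, not messages. A minimal sketch of the idea, under an assumed wire format (magic bytes, frame id, payload length, then the JPEG payload), which is not the exact protocol we shipped:

```python
import struct

# Assumed header layout: 4-byte magic, 32-bit frame id, 32-bit payload length.
HEADER = struct.Struct("!4sII")
MAGIC = b"SEV1"

def encode_frame(frame_id, jpeg_bytes):
    """Prefix a JPEG payload with a header so the receiver can
    recover frame boundaries from the TCP byte stream."""
    return HEADER.pack(MAGIC, frame_id, len(jpeg_bytes)) + jpeg_bytes

def decode_frame(buf):
    """Parse one frame from the front of buf.

    Returns (frame_id, payload, remaining_bytes), or None if a
    complete frame has not arrived yet.
    """
    if len(buf) < HEADER.size:
        return None
    magic, frame_id, length = HEADER.unpack_from(buf)
    if magic != MAGIC:
        raise ValueError("bad magic; stream out of sync")
    end = HEADER.size + length
    if len(buf) < end:
        return None
    return frame_id, buf[HEADER.size:end], buf[end:]
```

The decoder tolerates partial reads by returning None until the whole frame is buffered, which is what makes this pattern work over a real socket.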
Accomplishments that we're proud of
We're extremely happy with our clean electronics box and networked software and hardware. To get enough output pins to control all our vibration motors, we had to network five Pi Pico microcontrollers over a serial connection. And since SLAM and depth mapping are such intensive operations, we had to integrate multiple Raspberry Pis as well, on both the drone and the vest.
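Fanning 42 motor levels out to five networked Picos means each Pico owns a slice of the channels and only acts on frames addressed to it. A sketch of that idea, with a hypothetical frame layout (address, count, levels, additive checksum) that is not the exact protocol we used:

```python
MOTORS = 42
PICOS = 5

def split_channels(levels):
    """Split 42 motor levels (0-255 each) into per-Pico serial frames
    of the form [address, count, levels..., checksum]."""
    assert len(levels) == MOTORS
    chunk = -(-MOTORS // PICOS)           # ceiling division: 9 channels per Pico
    frames = []
    for addr in range(PICOS):
        part = levels[addr * chunk:(addr + 1) * chunk]
        body = [addr, len(part), *part]
        checksum = sum(body) % 256        # simple additive checksum
        frames.append(bytes(body + [checksum]))
    return frames
```

With ceiling division, the first four Picos each drive nine motors and the last drives the remaining six; the checksum lets each Pico drop corrupted frames instead of buzzing the wrong motor.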
Finally, we're just happy we made such a useful system.
What we learned
We learned a lot about localization and mapping and about transistor-level logic. Both should be useful — Julie and Ava are building a robot with a similar SLAM stack for their capstone this quarter, and we're all likely to work in the mechatronics field in the future :)
What's next for the Seeing Eye Vest
We're going to continue working to make the system more robust. It'll also be easy to miniaturize the electronics, which are currently the size of a small backpack. If this were a commercial product instead of a hackathon project, we could fit it all on a PCB the size of a credit card.
Built With
- google-maps
- javascript
- mavsdk
- opencv
- orb-slam
- pixhawk
- python
- ros
- slam
- sveltekit