Inspiration

Our inspiration came from a tiny spark of an elaborate idea. We were clueless about what to make until we picked up materials from MLH: a Leap Motion sensor and an Echo Show 5. The Echo Show 5 had a camera and was a voice assistant... A VOICE ASSISTANT!! That was it: how can a person who is mute or deaf communicate with these devices and make their lives as easy as ours are today? Our idea was born from that question. And it isn't just useful for those who can't hear or talk; it also helps the everyday person who simply wants to quickly change something like the temperature or the lights.

What it does

This product uses a webcam and hand-tracking technology to identify gestures, some of which control volume, lights, temperature, and other home appliances. You can also talk to the interface in ASL, or American Sign Language. This helps increase inclusion for deaf and mute people, as almost 30% of deaf people have experienced depression.
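To give a feel for the hand-tracking side, here is a minimal sketch of a webcam gesture loop. It assumes a MediaPipe-style hand tracker, and the pinch-to-volume mapping and its scaling constants are illustrative placeholders rather than the exact gestures we shipped:

```python
# Illustrative webcam gesture loop (not our exact pipeline).
# Requires: pip install opencv-python mediapipe
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        # Example gesture: thumb-to-index pinch distance mapped to volume.
        thumb, index = lm[4], lm[8]  # fingertip landmark indices
        pinch = ((thumb.x - index.x) ** 2 + (thumb.y - index.y) ** 2) ** 0.5
        volume = min(100, int(pinch * 400))  # arbitrary demo scaling
        print(f"volume -> {volume}")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
```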

How we built it

We divided the roles to make sure everyone played a part while doing what they were best at. The four groups were AR, Gestures, Modeling/Hardware, and Arduino. This increased collaboration and encouraged more ideas to be shared while keeping us on task with the assigned work. AR was the real-world touch aspect of the project. Gestures were how users communicate and the system responds. Modeling/Hardware gave the project its physical feel, producing a designated product and example. Arduino tied everything together, connecting the hardware to the software.
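As a hedged sketch of that software-to-Arduino handoff: the host program can push commands over USB serial, which the Arduino sketch then parses and acts on. The port name, baud rate, and command strings below are placeholders, not our exact protocol:

```python
# Illustrative host-side bridge: sends gesture-derived commands to the
# Arduino over USB serial. Requires: pip install pyserial
import serial

# "/dev/ttyACM0" and 9600 baud are placeholders; match your board's setup.
board = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def send_command(cmd: str) -> None:
    """Send a newline-terminated command for the Arduino sketch to parse."""
    board.write((cmd + "\n").encode("ascii"))

# e.g. after the gesture pipeline recognizes a "lights on" gesture:
send_command("LIGHTS_ON")
send_command("SET_TEMP 72")
```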

Challenges we ran into

We ran into many challenges over the past two days. At the start of the hackathon, finding an idea that met most of the requirements was very hard. Then picking the criteria for our product became a challenge, and implementing them was harder still, since the criteria tended to be vague. Once we got past those obstacles, the Raspberry Pi turned out to be inefficient and demanded too much work, and moving away from it under our time constraint was harder yet. Five hours before the deadline, we were still connecting all of our parts and finishing each one up. Time was ticking a little too fast.

Accomplishments that we're proud of

We each learned something new from the hackathon. From experimenting with Bluetooth to working with AR, each of us had a different struggle and a different accomplishment. From Bluetooth devices that refused to pair to pairing them so often we could do it from memory. From small gestures riddled with inaccuracies to a program that uses machine learning to interpret American Sign Language. From never having attended a hackathon to having first-hand knowledge of everything one entails. From having no AR experience to writing an entire interface built on it. Each of us has our own accomplishments from the competition, but we all share one: making this app after many troubles and hardships.

What we learned

We learned quite a bit from this hackathon. We learned how to work with Bluetooth devices and interconnect them. We learned how to code gestures and an entire language unknown to most but necessary to some. We learned how to get through a hackathon together without disaster. We learned how to connect the real world to augmented reality. And we learned skills as a team that none of us will forget.

What's next for GestAR

We will bring in a better interface by manufacturing a product designed specifically for the app. We will add the ability to fully customize the product's gestures, so consumers can use the gestures they are most comfortable with. We will also add a separate infrared camera to improve tracking accuracy. GestAR will be the starting point of a great invention that could help thousands of deaf and mute people across the world.
