Submission to the Inequality Track

Inspiration

The initial inspiration for ASL to Action came from watching a conductor at a symphony I recently attended to support a friend who was performing on cello. I admired how the conductor could drive the action of an entire orchestra with nothing more than a hand, a baton, and a few passionate gestures. That night I went to bed after setting my Google Home alarm, only for it to go off in the middle of the night when my voice was too tired and scratchy to tell it to stop. I thought back to the symphony and lay there wishing I could have signaled a "rest" or "off" motion with my hands instead of using them to take a sip of water just so my assistant could understand me. There is a physical touch shortcut, but the next day I wondered what it would be like to use an assistant, especially on my phone, if I weren't someone who spoke verbally. Typed commands are an option, but why should people who speak verbally have more ways to communicate than those who do not? I brushed the idea off until the night before HackDuke, when I went to see "Dune" in theaters. In the film (semi-spoiler alert, but not really), one of the characters uses a single hand on multiple occasions to communicate discrete, non-verbal messages of danger and to alert guards. As I was building our team, the pieces connected: single hand gestures can drive powerful outcomes and even save a life, two things technology is very good at, and they let you use technology privately and faster than typing speed.

What it does

ASL to Action is an application that uses sign language to simplify the fundamental daily act of communicating with our mobile devices. By bringing a language that speaks through silence into an online space, we bridge the gap between virtual thought and human movement. There is now more to how we can control our devices to search and text our thoughts: we let our hands speak for us.

How we built it

We built this using Google's Teachable Machine and Wix. Sign detection and recognition were handled by a model created with Google's Teachable Machine tool: we trained it on a dataset our team generated, exported it for TensorFlow.js, and uploaded it to a demo site hosted by Google. For the website we used Wix, a tool for building and publishing a site without buying web hosting or installing software; we started from a template designed for tech websites and added or removed the necessary sections.
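As a rough illustration of how an exported Teachable Machine image model can run in the browser, here is a minimal sketch using the @teachablemachine/image library on top of TensorFlow.js. The model URL, confidence threshold, and the sign names and actions in `handleSign` are placeholders for illustration, not our exact published model or mappings.

```javascript
// Minimal sketch: load a Teachable Machine image model in the browser and
// react to recognized signs. Assumes the page already includes the
// TensorFlow.js and @teachablemachine/image scripts, e.g.:
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs/dist/tf.min.js"></script>
//   <script src="https://cdn.jsdelivr.net/npm/@teachablemachine/image/dist/teachablemachine-image.min.js"></script>
// The model URL and class names below are placeholders, not our published model.

const MODEL_BASE = "https://teachablemachine.withgoogle.com/models/PLACEHOLDER/";

let model, webcam;

async function init() {
  // Load the exported model and its metadata (which contains the class labels).
  model = await tmImage.load(MODEL_BASE + "model.json", MODEL_BASE + "metadata.json");

  // Set up a 200x200 webcam feed, flipped horizontally for a mirror view.
  webcam = new tmImage.Webcam(200, 200, true);
  await webcam.setup();
  await webcam.play();
  window.requestAnimationFrame(loop);
}

async function loop() {
  webcam.update();             // grab the latest frame
  await predict();             // classify it
  window.requestAnimationFrame(loop);
}

async function predict() {
  // predict() returns one { className, probability } entry per trained class.
  const predictions = await model.predict(webcam.canvas);
  const best = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));
  if (best.probability > 0.9) {
    handleSign(best.className);
  }
}

// Hypothetical mapping from a recognized sign to a device action.
function handleSign(sign) {
  if (sign === "stop") {
    console.log("Stop alarm");   // e.g. silence an alarm
  } else if (sign === "search") {
    console.log("Open search");  // e.g. start a search query
  }
}

init();
```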

Challenges we ran into

We originally planned to tackle the machine learning portion of ASL to Action using OpenCV and TensorFlow, but ran into several hardware-related challenges along the way. We were unable to train the model on the machines the team had available and spent several hours searching for an alternative approach.

Accomplishments that we're proud of

  • Finalized an AI model, website, and application that aligned with our ideas and effectively conveyed our intentions.
  • Published a complete, functioning project under a highly limited time constraint.
  • Built a collaborative, organized working environment that let us split the workload evenly and finish on time.

What we learned

Overall, we now have a much better understanding of machine learning and how it can be implemented. Although we didn't use our original model and dataset, we learned how ML works behind the scenes and how to develop social solutions with it. We also learned how to bridge the functionality of human movement with that of the virtual world to program a working outcome.

What's next for ASL to Action

We plan to turn this into a fully functioning web app that could benefit thousands of people, bridging the gap between verbal and non-verbal individuals.
