Description
We wanted to make a simple, clean, and helpful project. Even in today's technology-driven world, there is still a significant communication barrier when interacting with an individual who utilizes American Sign Language. This is our take on a potential solution to that communication barrier.
ASL Vision utilizes computer vision to interpret American Sign Language.
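The write-up doesn't detail the model itself, but the core idea of mapping extracted image features to a sign can be sketched as a toy nearest-centroid classifier. Everything here is illustrative: the centroid values and letters are invented, and a real pipeline would first extract hand landmarks or other features from the camera frame.

```python
import math

# Hypothetical per-letter "average feature" vectors. In a real system these
# would be learned from training data (one of the challenges noted below);
# two coordinates per point keep the sketch tiny.
CENTROIDS = {
    "A": [0.1, 0.2, 0.1, 0.9],
    "B": [0.8, 0.7, 0.9, 0.1],
}

def classify(features):
    """Return the letter whose centroid is closest (Euclidean distance)."""
    return min(CENTROIDS, key=lambda k: math.dist(CENTROIDS[k], features))

print(classify([0.12, 0.18, 0.15, 0.85]))  # closest to the "A" centroid
```

A trained neural network would replace the hand-picked centroids, but the interface is the same: features in, predicted sign out.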
Challenges we ran into
- Training our CV model
- Incorporating our CV model into our project properly!
- Minor design issues
- Uploading to the Google Cloud database
Accomplishments that we're proud of
Each member of the team got to try something new. We all came in with different backgrounds and all picked up new skills and faced new challenges.
What we learned
- Google Cloud API interface and connections
- Machine Learning
- Front-end development
- Teamwork
- Converting images to Base64
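The last item above, converting images to Base64, is easy to show with Python's standard library. The helper name and file path are illustrative; encoding the image bytes as Base64 text makes them safe to store in a text field or send in a JSON payload (e.g. when uploading to a cloud database).

```python
import base64

def image_to_base64(path: str) -> str:
    """Read an image file and return its bytes as a Base64 text string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Round trip with in-memory bytes, so no real image file is needed:
raw = b"\x89PNG...fake image bytes"
encoded = base64.b64encode(raw).decode("ascii")
assert base64.b64decode(encoded) == raw
```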
What's next for ASL Vision
The future of ASL Vision would likely be an AR glasses app. Being able to interpret ASL without having to pull your phone out and take a picture would be the best-case scenario. In addition, integrating with Google Translate would allow for easier access.