Inspiration
Learning how to communicate with hearing-impaired family members and friends is what motivated us to create SignBuddy.
What it does
At Sign Buddy, we aim to help people learn and improve their ASL through fun, interactive exercises.
We do this with two key features: Buddy Learn 📚 & Buddy Translate 🤖.
Buddy Translate prompts the user for a word and displays a corresponding YouTube video. It works by quickly looking the word up in a stored dataset of processed videos, so there is little to no wait time.
Buddy Learn is a key feature that thoroughly scans what the user is trying to sign and predicts the sign in real time, using a model trained with TensorFlow and Keras, with OpenCV handling the webcam input. The recognized signs are output as words, which the user can string together into sentences.
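The word-building step above can be sketched roughly as follows. This is a minimal illustration, assuming the trained Keras model emits one probability vector over the known signs per webcam frame; the `SIGNS` list and the 0.8 confidence threshold are hypothetical placeholders, not SignBuddy's actual values.

```python
# Illustrative label set; the real model's classes come from its training data.
SIGNS = ["hello", "thanks", "yes", "no"]

def decode_prediction(probs, signs=SIGNS, threshold=0.8):
    """Return the most likely sign if the model is confident, else None."""
    idx = max(range(len(probs)), key=probs.__getitem__)
    return signs[idx] if probs[idx] >= threshold else None

def build_sentence(per_frame_probs, signs=SIGNS, threshold=0.8):
    """Accumulate confident per-frame predictions into words, collapsing
    consecutive repeats so one held sign becomes one word."""
    words = []
    for probs in per_frame_probs:
        word = decode_prediction(probs, signs, threshold)
        if word and (not words or words[-1] != word):
            words.append(word)
    return " ".join(words)
```

Collapsing consecutive repeats matters because a single sign held for a second produces many identical frame-level predictions.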
How we built it
We built the web app with Flask; the machine learning model was built with TensorFlow, Keras, and OpenCV. The trained model scans the user's input and generates words and sentences from it. The Buddy Translate feature was built by parsing a JSON file and using that data to serve YouTube videos matching the user's input.
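The JSON-parsing lookup behind Buddy Translate could look something like the sketch below. The sample data is hypothetical, loosely modeled on WLASL's gloss/instances layout; the real file's field names and structure may differ.

```python
import json

# Hypothetical dataset snippet; field names are assumed, not SignBuddy's actual schema.
SAMPLE_JSON = """[
  {"gloss": "book",  "instances": [{"url": "https://www.youtube.com/watch?v=abc123"}]},
  {"gloss": "drink", "instances": [{"url": "https://www.youtube.com/watch?v=def456"}]}
]"""

def load_index(raw):
    """Parse the JSON once and build a word -> first-video-URL index."""
    return {entry["gloss"].lower(): entry["instances"][0]["url"]
            for entry in json.loads(raw) if entry.get("instances")}

def translate(word, index):
    """Look up the video for a word; None means it isn't in the dataset."""
    return index.get(word.strip().lower())
```

Building the index once at startup is what keeps per-request lookups effectively instant, rather than rescanning the file for every query.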
Challenges we ran into
• Sourcing the dataset to train the AI model
• Parsing the WLASL sign language dataset for Buddy Translate
Accomplishments that we're proud of
• Successfully parsing the data and generating a video from it
• Building a custom dataset
What's next for Sign Buddy
Looking forward, this project has a bright future! First, we ran out of time to implement a practice feature: the user is given a random word from the dataset and prompted to sign it into the webcam. Next, due to scope and time constraints, we trained our AI model on a limited subset of the available words, since more dynamic signs were harder to track and train on. The 12,000 word instances in WLASL would be an ideal dataset for training a model that can handle more dynamic and complex gestures. The idea could also be brought to mobile devices, as it would be a natural fit for language-learning apps such as Duolingo.
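The planned practice feature could be sketched as a simple pick-and-check loop. Everything here is an assumption about a feature that was not built: the function names are hypothetical, and the predicted sign would come from the same model Buddy Learn uses.

```python
import random

def pick_practice_word(words, rng=random):
    """Choose a random target word from the dataset for the user to sign."""
    return rng.choice(words)

def check_attempt(target, predicted_sign):
    """True if the model's reading of the webcam sign matches the target word."""
    return predicted_sign is not None and predicted_sign.lower() == target.lower()
```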