Inspiration

We were inspired by the belief that communication should never limit a person’s ability to interact with the world around them. Sign language is a rich, expressive medium, yet many systems are not designed to understand it. Our idea stemmed from a desire to bridge the gap between silent communication and actionable outcomes: enabling people who use sign language to control their environments naturally. We wanted to explore how technology could see, interpret, and respond to gestures with meaning and respect.

What it does

ASL Assist allows users to control smart home devices using American Sign Language. Through a camera interface, a user can perform a simple ASL gesture, which is recognized and translated into a command such as turning on the lights, locking the door, or playing music. The system interprets these gestures using a trained model and sends corresponding signals to smart devices. This empowers users to interact with their homes in an intuitive and accessible way, turning sign language into a tool for seamless automation.

How we built it

We trained a custom image classification model using Azure Custom Vision. The training dataset, sourced from Kaggle, consisted of labeled ASL gestures. We fed these images into the Azure platform to create a classifier capable of distinguishing between several common signs. Once trained, the model was integrated into a Python-based interface that captures an image from a webcam, runs it through the classifier, and maps the output to a predefined home automation command. These commands were then routed through smart device APIs to control elements like lighting and audio playback.

Challenges we ran into

One major challenge was curating and preparing the dataset. Some signs were visually similar and prone to misclassification, especially under different lighting conditions or angles. Ensuring the model could generalize to new users and varied environments required significant tuning.
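The classify-then-dispatch step described above can be sketched as follows. This is a minimal illustration, not our exact code: the gesture tags, command strings, and confidence threshold are assumptions, and in the real system the predictions come from the Azure Custom Vision classifier rather than being passed in directly.

```python
CONFIDENCE_THRESHOLD = 0.8  # ignore low-confidence classifications

# Illustrative mapping from recognized gesture tags to device commands
# (both sides are hypothetical names, not our production identifiers)
GESTURE_COMMANDS = {
    "lights_on": "smart_lights/on",
    "lights_off": "smart_lights/off",
    "lock_door": "front_door/lock",
    "play_music": "speaker/play",
}

def dispatch(predictions):
    """Pick the top prediction and map it to a device command.

    `predictions` is a list of (tag, probability) pairs, the shape a
    Custom Vision classifier's results can be reduced to. Returns the
    command string, or None if no prediction is confident enough.
    """
    tag, prob = max(predictions, key=lambda p: p[1])
    if prob < CONFIDENCE_THRESHOLD or tag not in GESTURE_COMMANDS:
        return None
    return GESTURE_COMMANDS[tag]

# A confident "lights_on" classification triggers the lights command;
# an ambiguous result is dropped rather than risking a wrong action.
print(dispatch([("lights_on", 0.93), ("play_music", 0.04)]))
print(dispatch([("lock_door", 0.55), ("lights_off", 0.40)]))
```

Gating on a confidence threshold is what keeps visually similar signs (the misclassification problem mentioned under challenges) from firing the wrong device command.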
We also encountered challenges in creating a responsive system that could interpret input quickly and execute commands without delay. Integrating the gesture detection with real-world devices involved navigating compatibility issues and ensuring reliability across different use cases.

Accomplishments that we're proud of

We’re proud of building a fully functioning prototype that not only recognizes ASL gestures but uses them to control actual smart devices. We succeeded in creating a tool that promotes accessibility and demonstrates the power of inclusive design. Our project didn’t just prove that gesture-based control is possible; it showed that it can be smooth, intuitive, and meaningful. It’s especially rewarding to know that this solution could have real impact for people who communicate through sign language.

What we learned

Throughout this project, we gained a deeper understanding of how to train and deploy computer vision models in the Azure ecosystem. We learned how subtle details, like lighting, gesture variation, and camera quality, can significantly affect recognition performance. We also explored how human-centered design can guide technical development, reminding us that good technology is built with empathy. This experience reinforced the importance of accessibility, not just as a feature but as a foundation.

What’s next for ASL Assist

We plan to expand our gesture library to support more commands and eventually full conversational phrases. We want to move beyond static images to continuous video input for more fluid, real-time interaction. A future goal is to implement bidirectional communication, allowing the system to respond to users through synthesized speech or visual sign language output. Additionally, we hope to bring the model on-device for better privacy and offline functionality. Long term, we envision a customizable interface where users can map their own gestures to commands, making the system even more personal and powerful.