Inspiration
We were inspired to create this project by one of our team members, Karthik, whose cousin has a disability that limits his ability to communicate verbally. By building a web application that translates his sign language into readable text, we aim to bridge the communication gap and make it easier for people who don't know sign language, ourselves included, to connect with him. This project is driven by our desire to use technology for meaningful impact, enabling more inclusive interactions with those who rely on non-verbal communication.
What it does
Our project is a web application that uses AI to recognize sign language gestures and convert them into text in real time, allowing individuals who use sign language to communicate seamlessly with those who may not understand it. The application runs a machine learning model that identifies specific hand signs from a webcam feed and instantly translates them into readable text, providing a simple, accessible communication tool for both personal and professional interactions.
How we built it
We built our application with TensorFlow.js, JavaScript, HTML, and CSS. The core functionality relies on TensorFlow.js to run a pre-trained machine learning model directly in the browser, recognizing hand signs in real time from the device's webcam. JavaScript handles the logic for capturing video frames and feeding them into the model, while HTML and CSS provide a user-friendly interface. By leveraging these technologies, we built a lightweight, accessible, fully web-based solution that requires no additional software installation.
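The browser-side recognition loop can be sketched roughly as follows. This is an illustrative outline, not our actual code: the model path, label set, element IDs, and input size are placeholders, and `decodePrediction` is a hypothetical helper.

```javascript
// Sketch of the in-browser recognition loop (illustrative only).
// Assumes TensorFlow.js is loaded globally via a <script> tag, e.g.:
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>

const LABELS = ['A', 'B', 'C']; // example subset of recognized signs

// Pick the most likely label from the model's score vector.
function decodePrediction(scores, labels) {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return labels[best];
}

async function run() {
  // Load a pre-trained model (placeholder path).
  const model = await tf.loadLayersModel('model/model.json');

  // Attach the webcam stream to a <video> element on the page.
  const video = document.getElementById('webcam');
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  // Classify the current frame, render the result, and repeat.
  async function frame() {
    const scores = tf.tidy(() => {
      const input = tf.browser.fromPixels(video)
        .resizeBilinear([224, 224])     // match the model's input size
        .toFloat().div(255).expandDims(0);
      return model.predict(input).dataSync();
    });
    document.getElementById('output').textContent =
      decodePrediction(scores, LABELS);
    requestAnimationFrame(frame);
  }
  frame();
}
```

Running everything client-side this way keeps the app free of any server-side inference, which is what makes the tool work with nothing more than a browser and a webcam.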
Challenges we ran into
One of the biggest challenges was training the model to accurately recognize different hand signs given the variability in lighting conditions, hand sizes, and positioning. Integrating TensorFlow.js for real-time recognition brought its own difficulties: keeping inference fast without sacrificing accuracy required careful fine-tuning. Working with webcam input and ensuring the application ran smoothly across different devices and browsers was also more complex than we anticipated. Debugging and testing the model's responsiveness took longer than expected, but overcoming these hurdles taught us a lot about building AI-powered web applications.
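One practical way to balance responsiveness and accuracy is to throttle inference so the model only runs a few times per second instead of on every rendered frame. The sketch below illustrates that idea with a hypothetical helper; the interval and function names are assumptions, not our exact implementation.

```javascript
// Throttle inference: run the model at most ~10x per second so the
// page stays responsive. The timing decision is pure logic, kept
// separate from the tf.js call itself.

const MIN_INTERVAL_MS = 100; // assumed budget: at most one inference per 100 ms

function shouldProcessFrame(lastRunMs, nowMs, minIntervalMs = MIN_INTERVAL_MS) {
  return nowMs - lastRunMs >= minIntervalMs;
}

// In the browser render loop (illustrative):
// let lastRun = 0;
// function frame(now) {
//   if (shouldProcessFrame(lastRun, now)) {
//     lastRun = now;
//     classifyCurrentFrame(); // hypothetical fn that runs the tf.js model
//   }
//   requestAnimationFrame(frame);
// }
```

Wrapping tensor work in `tf.tidy` (as tf.js recommends) also matters here, since leaked intermediate tensors in a per-frame loop will steadily exhaust GPU memory.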
Accomplishments that we're proud of
We’re incredibly proud of successfully building a real-time sign language recognition tool that can effectively translate hand signs into text. The moment we saw our AI model accurately interpret gestures from the webcam was a huge win, especially given the initial challenges we faced with training and optimizing it. Additionally, creating a fully web-based application using TensorFlow.js allowed us to make the solution accessible to anyone with an internet connection, which aligns with our goal of inclusivity. We’re also proud of how we came together as a team, leveraging each member’s strengths to overcome technical obstacles and bring our vision to life within the tight timeframe of the hackathon.
What we learned
Throughout this project, we learned a great deal about the complexities of implementing machine learning models in a web environment using TensorFlow.js. We gained hands-on experience with optimizing real-time video processing, which deepened our understanding of how to balance accuracy and performance. Additionally, we learned the importance of collaboration, as each of us brought different skills to the table, whether it was frontend development, AI integration, or debugging. The process also taught us how critical user experience is when creating accessible applications, reinforcing our commitment to designing tech solutions that are both functional and inclusive.
What's next for SignLang AI
Moving forward, we plan to enhance SignLang AI by expanding the range of recognized gestures and refining the model to handle more complex sign language phrases and sentences. We also aim to incorporate voice output, enabling the translated text to be read aloud for even smoother communication. Additionally, we’re considering integrating multilingual support to cater to sign languages from different regions.
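The planned voice output could build on the browser's built-in Web Speech API, which keeps the app fully client-side. This sketch is a possible direction, not implemented code; the `SPACE` token and both helper names are hypothetical.

```javascript
// Possible direction for the planned voice output (not implemented yet):
// collect recognized signs into a phrase, then speak it with the
// browser's built-in Web Speech API.

// Join a stream of recognized signs into a phrase; a 'SPACE' token
// (hypothetical label) separates words.
function buildPhrase(tokens) {
  return tokens.map(t => (t === 'SPACE' ? ' ' : t)).join('').trim();
}

// Browser-only: read the phrase aloud.
function speak(text) {
  const utterance = new SpeechSynthesisUtterance(text);
  window.speechSynthesis.speak(utterance);
}

// Usage sketch: speak(buildPhrase(['H', 'I', 'SPACE', 'M', 'O', 'M']));
```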
Built With
- CSS
- HTML
- JavaScript
- TensorFlow.js