Group Name: Codies
Group Number: 08

Inspiration

Creating SignFlow was driven by our desire to meet the needs of a wide audience, blending what we heard from the general public with our own experiences. Everyone wants to be heard in their own way, and we saw a significant gap in how traditional video conferencing platforms serve the Deaf and Hard-of-Hearing (DHH) community. While captions exist for spoken language, sign language users often have to rely on texting or on interpreters, who are not always available. For many in the DHH community, typing out responses during meetings disrupts the flow of conversation and wastes time. They deserve to be heard just like everyone else, without extra steps that interrupt their participation. Communication should be seamless and available to all, and that’s what we aimed to achieve with our mobile app, SignFlow.

With over 72 million people worldwide using sign language as their primary form of communication, our team felt it was necessary to create a platform that supports this language. Real-time ASL gesture translation is our solution to making virtual meetings more inclusive and accessible. SignFlow empowers DHH individuals to communicate directly through their gestures, without interruptions or delays, ensuring that everyone can engage fully, collaborate effectively, and feel truly heard in every conversation.

What it does

By combining the need for accessible communication with innovative technology, we developed SignFlow, a video meeting platform designed to empower the Deaf and Hard-of-Hearing (DHH) community. Recognizing that traditional video platforms often leave sign language users behind, we created SignFlow to translate ASL gestures into real-time text, allowing for seamless communication without interruptions. Users can engage in meetings, collaborate with others, and feel heard—all without relying on extra steps like typing or waiting for interpreters. Our goal is to make virtual meetings more inclusive, ensuring that everyone can participate fully, communicate effectively, and feel truly valued in every conversation.

How we built it

Design:
Figma – Used to design the user interface and create a prototype that ensured a seamless user experience.

Backend:
Python – Handled the backend logic and machine learning model integration
TensorFlow – Used to train and deploy the machine learning models for gesture recognition
MediaPipe – Processed webcam data to detect and extract ASL gestures in real time
Scikit-learn – Used for data preprocessing and model evaluation
OpenCV – Used for processing webcam data, handling image frames, and managing the real-time video feed for gesture recognition
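In this pipeline, MediaPipe detects 21 hand landmarks per frame, which are flattened into a fixed-length feature vector before being passed to the classifier. A minimal sketch of that flattening step (the `normalize_landmarks` helper is an illustrative name, not the project's actual code, and the OpenCV/MediaPipe capture loop is omitted):

```python
def normalize_landmarks(landmarks):
    """Flatten 21 (x, y, z) hand landmarks into a 63-value feature vector,
    translated so the wrist (landmark 0) sits at the origin. This makes the
    features invariant to where the hand appears in the frame."""
    wrist_x, wrist_y, wrist_z = landmarks[0]
    features = []
    for x, y, z in landmarks:
        features.extend([x - wrist_x, y - wrist_y, z - wrist_z])
    return features
```

Anchoring every coordinate to the wrist is a common normalization for gesture classifiers, since the raw MediaPipe coordinates are relative to the image and would otherwise change whenever the hand moves across the frame.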

Challenges we ran into

One of the biggest challenges we faced was integrating the backend code, which was initially written in a Jupyter Notebook, into a web or app environment. TensorFlow, scikit-learn, and OpenCV were used for gesture recognition and machine learning, but everything was set up locally on one machine rather than in the cloud. When it came time to move the backend to the web, deploying the models and ensuring they ran smoothly in the new environment was difficult. While the Jupyter Notebook setup worked well for testing, the code didn’t transfer cleanly to the web. Additionally, OpenCV, which worked well for processing webcam data locally, was difficult to integrate into the frontend for real-time performance.

Accomplishments that we're proud of

We’re proud of the user interface we created in Figma, which focuses on accessibility to ensure the app is user-friendly for everyone, especially the Deaf and Hard-of-Hearing (DHH) community. The design is clean and accessible, making it easy for users to navigate the app and engage with its features. Another accomplishment is our sign language recognition model, which can recognize numbers, showcasing its potential for understanding ASL gestures in real time. Although the model is still developing, it can already detect letters of the ASL alphabet and key phrases like "I am", which provides a strong foundation for recognizing more complex gestures in the future.

What we learned

Throughout this project, we learned how to use OpenCV, MediaPipe, scikit-learn, and Figma. Working with OpenCV helped us understand real-time video processing, allowing us to capture webcam data and feed it into our gesture recognition system. MediaPipe was essential for detecting ASL gestures, and we learned how to extract hand landmarks for real-time recognition. With scikit-learn, we gained experience in data preprocessing and model evaluation, which helped us improve the LSTM model. We also explored Figma in depth to design a user interface focused on accessibility, making sure our app was easy to use for everyone. Learning these tools was a huge part of this project, and it improved our technical skills while keeping the focus on an accessible, clean user interface.
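Because an LSTM consumes sequences rather than single frames, the per-frame landmark vectors have to be grouped into fixed-length windows before training. A minimal sketch of that step (the `make_sequences` helper and the window length of 30 frames are illustrative assumptions, not the project's actual code):

```python
def make_sequences(frames, seq_len=30):
    """Group per-frame landmark feature vectors into fixed-length windows,
    the shape an LSTM expects: (num_windows, seq_len, num_features).
    Windows slide one frame at a time, so each recording yields many samples."""
    return [frames[i:i + seq_len] for i in range(len(frames) - seq_len + 1)]
```

A sliding window like this is a common way to turn a continuous webcam stream into training samples; each window would then be paired with the gesture label for that stretch of video.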

What's next for SignFlow

In the future, we would love to build a responsive frontend using a framework like React or Flutter. These frameworks would let us create interactive UIs that adapt easily across web and mobile platforms, giving users an accessible experience on any device. We also plan to expand the machine learning model to recognize more complex ASL gestures, improving the app’s ability to translate a wider range of sign language. This would make SignFlow even more inclusive, enabling easier communication and a more complete sign language experience.
