👋 Learning ASL through VR
American Sign Language (ASL) is the primary language of many deaf and hard-of-hearing individuals in the United States. We were inspired by Meta's hand tracking and our values around accessibility to build ARSL (Augmented Reality Sign Language).
🫳 What it does
ARSL uses computer vision to detect objects through your Quest headset's passthrough and translates them into letters of the ASL alphabet. Hand tracking, combined with haptic vibration feedback, then guides you through fingerspelling the object's name.
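As a rough illustration of that pipeline (the function and pose names here are ours, not the project's actual code), the core idea is to turn a detected object's label into the ordered sequence of ASL hand poses the user needs to form:

```python
# Minimal sketch of the detection-to-fingerspelling pipeline.
# `SIGN_POSES` and `fingerspelling_plan` are illustrative names,
# not ARSL's actual API.

SIGN_POSES = {letter: f"pose_{letter}" for letter in "abcdefghijklmnopqrstuvwxyz"}

def fingerspelling_plan(label: str) -> list[str]:
    """Turn a detected object label (e.g. 'cup') into the
    ordered ASL hand poses the user must form to spell it."""
    return [SIGN_POSES[ch] for ch in label.lower() if ch in SIGN_POSES]

# A detected "cup" yields the poses for C, U, P in order:
print(fingerspelling_plan("cup"))  # ['pose_c', 'pose_u', 'pose_p']
```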
👊 How We Built It
We developed an AR application that interfaces with a TCP server to classify everyday objects, with the goal of teaching sign language through augmented reality. To enhance the experience, we built a vibrating glove that provides haptic feedback indicating how close the user's fingers are to the corresponding sign.
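One plausible way to express that haptic logic (the joint coordinates, error threshold, and direction of the mapping are our assumptions, not the project's tuned values) is to drive each glove motor by how far the tracked fingertip is from its target position for the current letter:

```python
import math

# Hedged sketch: map fingertip error to a vibration intensity.
# Positions are (x, y, z) tuples from hand tracking; the 5 cm
# threshold is illustrative, not ARSL's actual tuning.

MAX_ERROR_M = 0.05  # beyond 5 cm of error, vibrate at full strength

def vibration_intensity(tracked: tuple[float, float, float],
                        target: tuple[float, float, float]) -> float:
    """Return a motor intensity in [0, 1]: 0 when the fingertip
    matches the target pose, 1 when it is far off."""
    error = math.dist(tracked, target)
    return min(error / MAX_ERROR_M, 1.0)

# One intensity per finger drives the corresponding glove motor,
# so a still glove signals a correctly formed sign.
fingers_tracked = [(0.01, 0.02, 0.00), (0.00, 0.00, 0.00)]
fingers_target  = [(0.01, 0.02, 0.01), (0.04, 0.00, 0.00)]
intensities = [vibration_intensity(t, g)
               for t, g in zip(fingers_tracked, fingers_target)]
```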
💪 Challenges We Ran Into
- FIU Wi-Fi Limitations: Unstable connectivity and campus network security measures disrupted our real-time image classification and client-server communication over TCP (a hedged sketch of that exchange follows this list).
- Project Scope: Balancing an ambitious feature set against what we could realistically build was a constant challenge.
- Image Classification: Achieving accurate and fast classification of images posed significant technical hurdles.
- Hand Pose Recognition: Ensuring minimal latency in recognizing hand poses was critical for an effective user experience.
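The headset-to-server exchange mentioned above might look like the following (the length-prefixed framing, host, and port are assumptions, not the project's actual protocol); a short timeout keeps flaky Wi-Fi from stalling the render loop:

```python
import socket
import struct

# Hypothetical classification server address, for illustration only.
SERVER = ("192.168.0.10", 5005)

def classify_frame(jpeg_bytes: bytes, timeout_s: float = 2.0) -> str:
    """Send one passthrough frame to the server and return the
    predicted object label. The timeout bounds how long an
    unstable connection can block the app."""
    with socket.create_connection(SERVER, timeout=timeout_s) as sock:
        # Length-prefix the payload so the server knows how much to read.
        sock.sendall(struct.pack("!I", len(jpeg_bytes)) + jpeg_bytes)
        label = sock.recv(64).decode("utf-8").strip()
    return label
```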
🙌 Accomplishments We're Proud Of
- Successfully integrated AR technology with real-time image classification.
- Developed a functional vibrating glove that enhances user interaction.
- Created an intuitive user interface that effectively teaches sign language concepts in an Augmented Reality environment.
- Fostered collaboration and teamwork under tight deadlines, leading to a cohesive final product.
- The team's constant determination to persevere in the face of adversity.
🫶 What We Learned
- The importance of a reliable network connection for real-time applications.
- Effective project management techniques to tackle large, complex projects.
- How to optimize machine learning models for faster image and pose recognition.
- The value of user testing and feedback in refining our application for better usability.
🤞 What's next for ARSL - Augmented Reality Sign Language
We have so many ideas for the future of ARSL! Some feasible near-term goals are improving gesture detection accuracy, refining object detection at greater distances, and enhancing the user experience with cleaner visuals. An additional, word-focused mode would also be an excellent feature to bolster users' ASL education (ours certainly improved dramatically over the course of the 36-hour project!).