Our Sign Language Learning App in VR is designed to help users learn sign language effectively and immerse themselves in a virtual environment that facilitates learning and interaction. The app provides interactive lessons, real-time gesture feedback, and live practice sessions.

Inspiration

Everyone is temporarily abled, and someday, we might lose our ability to speak. One of our team members learned Tzuchi Sign Language, which is similar to Taiwanese Sign Language, when she was in high school. She is not deaf; she learned it to communicate with her grandmother who resides in a senior housing facility. We realized that sign language is a universal language that connects people.

According to commonly cited statistics, sign language is the fourth most spoken language in the United States, following English, Spanish, and Chinese. There are roughly 500,000 sign language users, yet only about 230 American Sign Language (ASL) teachers in the US. This significant gap between learners and instructors underscores the importance of creating a solution like our app.

What it does

The app primarily aims to facilitate sign language learning in a virtual reality (VR) environment, offering a range of immersive features and functionalities. Users can access interactive games to learn sign language and engage in gesture recognition exercises. The app is designed to cater to learners of all proficiency levels, from beginners to advanced sign language users, and adapts to individual learning preferences to provide a personalized and engaging experience.

How we built it

Building Gesture VR required a multidisciplinary approach combining several technologies and areas of expertise: AI, machine learning, VR, and API deployment. We leveraged VR platforms such as Oculus Rift, HTC Vive, and Oculus Quest to create an immersive learning environment. Gesture recognition, powered by hand tracking from the Ultraleap Leap Motion Controller (LMC) and machine learning models, was integrated to provide real-time feedback on users' sign language gestures. A cloud-based backend stores user progress, delivers recommendations, and facilitates live practice sessions. In addition, user-friendly interface design was a focal point to ensure accessibility for users of diverse backgrounds and ages.
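The real-time feedback loop above can be sketched as classifying a flattened hand-landmark vector against per-category references. The actual Gesture VR model architecture and landmark layout are not documented here; the nearest-centroid classifier, the 21-joint layout, and the A–Z category names below are illustrative stand-ins for the trained ML model.

```python
import numpy as np

CATEGORIES = [chr(ord("A") + i) for i in range(26)]  # 26 sign categories

def classify_landmarks(landmarks: np.ndarray, centroids: np.ndarray) -> str:
    """Return the category whose reference centroid is closest to the
    incoming landmark vector (one Euclidean distance per category)."""
    distances = np.linalg.norm(centroids - landmarks, axis=1)
    return CATEGORIES[int(np.argmin(distances))]

# Example: 21 hand joints x 3 coordinates = 63 values per tracked frame.
rng = np.random.default_rng(0)
centroids = rng.normal(size=(26, 63))                    # one template per category
frame = centroids[4] + rng.normal(scale=0.01, size=63)   # noisy "E" gesture
print(classify_landmarks(frame, centroids))              # prints "E"
```

In the real app, the centroid lookup would be replaced by the trained model's forward pass, but the per-frame input/output shape is the same idea.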

Challenges we ran into

Throughout the development process, we encountered several substantial challenges:

  • Gesture Recognition Accuracy: Achieving precise and dependable gesture recognition within the VR environment posed a significant technical challenge, demanding extensive testing, refinement, and calibration. After training on a diverse dataset of 54,000 images, covering 26 categories signed by 5 subjects with 500 images per category and subject, we managed to achieve high accuracy.

  • Infrastructure for Training: To train machine learning models on 54,000 images (8.4 GB) in such a short time, we needed a high-spec GPU. Using a Google Vertex AI Notebook and training overnight, we managed to train on the whole dataset and fine-tune the model.

  • Content Creation: Creating high-quality demo videos, assets, animations, and interactive lessons proved to be a time-consuming task that required close collaboration with sign language experts to ensure accuracy and relevance.

  • Accessibility: Ensuring the app's accessibility for users with varying levels of physical abilities and VR experience was a top priority. Addressing comfort issues, mitigating motion sickness concerns, and enhancing overall usability were critical challenges.

  • Server Scalability: As we anticipated a potentially large user base, we needed to design a backend infrastructure capable of scaling to support concurrent live practice sessions and effectively store user progress data. Currently, we have containerized the backend and serve it as a Docker image behind an API, making it easy for developers to deploy it in any cloud environment.
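With a dataset signed by only 5 subjects, one evaluation choice worth sketching is a subject-wise split: holding out one signer entirely checks that the model generalises to hands it has never seen, rather than memorising individual signers. The `(category, subject, index)` sample identifiers below are hypothetical; only the 26 categories, 5 subjects, and 500 images per category and subject come from the writeup.

```python
def subject_split(samples, held_out_subject):
    """Split (category, subject, index) samples by signer: the held-out
    subject's samples become the validation set."""
    train = [s for s in samples if s[1] != held_out_subject]
    val = [s for s in samples if s[1] == held_out_subject]
    return train, val

# Hypothetical sample index for the full dataset.
samples = [(cat, subj, i)
           for cat in range(26)
           for subj in range(5)
           for i in range(500)]

train, val = subject_split(samples, held_out_subject=4)
print(len(train), len(val))  # 52000 13000
```

A plain random split would leak each signer's style into both sets and overstate accuracy on unseen users.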

Accomplishments that we're proud of

Throughout the development journey, Gesture VR achieved several noteworthy milestones:

  • Highly Accurate Gesture Recognition: Successfully implementing a gesture recognition system with 88% F1-Score performance provided users with valuable feedback, enhancing the learning experience.

  • Engaging Content: The app's immersive learning environment, characterized by visually captivating 3D models and scenarios, succeeded in keeping users motivated and committed to their sign language learning journey.

  • Positive User Feedback: Initial user testing and feedback demonstrated the app's effectiveness in teaching sign language and addressing the shortage of sign language instructors.

  • Inclusivity: Gesture VR was designed to cater to a broad and diverse user base, taking into account various input methods, comfort settings, and accessibility features.

What we learned

Through the development of Gesture VR, we gained valuable insights:

  • Universal Appeal of Sign Language: Sign language serves as a universal language that transcends hearing abilities, underscoring its importance as a skill with broad applicability.

  • Importance of Inclusivity: Designing inclusively ensures that technology benefits all users, irrespective of their physical or cognitive abilities.

  • The Power of Gamification: Virtual reality's capacity to immerse users in a new learning environment significantly enhances engagement and effectiveness.

  • Team Collaboration: Building an app like Gesture VR requires effective collaboration between developers, educators, and sign language experts, emphasizing the value of teamwork. There were ups and downs and plenty of pivots, but we managed to do it!

What's next for Gesture VR

In the future, we have ambitious plans to further enhance and expand Gesture VR:

  • Additional Sign Languages: Expanding support for various sign languages from around the world to increase inclusivity and reach a broader audience.

  • Open Source the SDK and ML Model API: Open-sourcing the ML model development, making it accessible to any developer who wants to integrate sign language recognition into their applications.

  • Advanced AI Features: Continuously improving gesture recognition technology and introducing AI-driven features to provide personalized learning experiences.

  • Community Building: Fostering a vibrant learning community within the app, enabling users to connect, practice, and learn from one another.

  • Accessibility Enhancements: Ongoing efforts to improve accessibility features to cater to a broader audience.

  • Integration with Education Institutions: Collaborating with educational institutions to integrate Gesture VR into formal sign language education programs.

  • Platform Expansion: Making Gesture VR available on a wider range of VR platforms and possibly mobile devices to maximize accessibility.

Our ultimate aim remains to make sign language learning accessible, enjoyable, and effective for everyone, thereby contributing to a more inclusive society.

Quantum Realities - Gesture VR SDK

Overview

Quantum Realities' Gesture VR is a multi-platform Sign Language XR SDK designed for developers and enterprises. It utilizes camera, computer vision, and UltraLeap Hand Tracking capabilities to integrate sign language into metaverse worlds such as WebXR, VRChat, and standalone applications. Available as a Python Library, live API endpoints, and Docker container, Quantum Realities' Gesture VR SDK is easy to plug into any code environment and is highly scalable.

Links

Live API

API Documentation with UI to Test

Access our interactive API documentation and test the endpoints directly at: Gesture VR API Documentation

Detailed API Endpoints

Predicts Endpoint

  • URL: https://slvr-4zunylksjq-uc.a.run.app/predicts
  • Description: Receives an array of hand track coordinates and outputs a JSON with the predicted word.
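A minimal client for this endpoint, using only the Python standard library, might look like the sketch below. The exact JSON field names the service expects are not documented in this README; `"coordinates"` in the request and `"word"` in the response are assumptions made to illustrate the round trip.

```python
import json
import urllib.request

PREDICTS_URL = "https://slvr-4zunylksjq-uc.a.run.app/predicts"

def predict_word(coordinates):
    """POST an array of hand-track coordinates and return the predicted word.

    The payload/response field names ("coordinates", "word") are assumed
    for illustration; check the interactive API documentation for the
    actual schema.
    """
    payload = json.dumps({"coordinates": coordinates}).encode("utf-8")
    request = urllib.request.Request(
        PREDICTS_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["word"]

# Usage (requires network access):
# word = predict_word([0.1, 0.2, 0.3])  # flat XYZ array; shape is an assumption
```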

Prep Gesture Endpoint

  • URL: https://slvr-4zunylksjq-uc.a.run.app/prep_gesture
  • Description: Receives UltraLeap Hand Tracking coordinates (XYZ array) and outputs an array of processed and scaled values.

Preprocess Endpoint

  • URL: https://slvr-4zunylksjq-uc.a.run.app/preprocess
  • Description: Preprocesses input values into standardized values for optimized ML computation.
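To make the prep_gesture/preprocess step concrete, here is a local illustration of turning raw XYZ tracking values into standardized values for the model. The service's actual transformation is not documented in this README; min-max scaling to [0, 1] is an assumed example, and the raw readings below are hypothetical LMC values.

```python
def min_max_scale(values):
    """Scale a flat list of coordinates into the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant input: avoid division by zero
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

raw_xyz = [12.0, -3.5, 40.2, 7.7, 0.0, 25.1]  # hypothetical readings (mm)
print(min_max_scale(raw_xyz))
```

Doing this server-side keeps the scaling consistent with whatever normalization the model saw during training, regardless of which client sends the coordinates.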

Usage

To use these endpoints, refer to the provided API documentation. Each endpoint has specific input requirements and response formats. The API documentation includes examples and a UI for testing the endpoints interactively.

Setup

Hardware Required

  • PC/Mobile Phone with Web Browser or VR headset
  • Ultraleap Leap Motion Controller 2

Software Dependencies

Gesture VR API

The API can be accessed from any programming language, including C, Python, and JavaScript, and is easy to integrate with Unity, Unreal Engine, Blender, etc.

Docker Container

The Docker container can be deployed on any cloud service, making it scalable: AWS, Google Cloud, Kubernetes, DigitalOcean, Docker Hub, etc.

Gesture VR SDK

  • Python 3.8.6
  • TensorFlow 2.10.0
  • Docker