SignEcho transforms PICO 4 into a bridge for the deaf
Demo 1/2: SignEcho in action, recognizing the American Sign Language word "eat"
Demo 2/2: SignEcho in action; the sign language words "mom, eat, no, orange" are interpreted together with the conversation context
Inspiration
Communication is a fundamental human right, yet not equally accessible to all. Inspired by the daily communication hurdles faced by the deaf community, I set out to create a tool that facilitates effortless dialogue between deaf and hearing individuals. The mission is to weave technology into a solution that dissolves these barriers, allowing sign language to be heard and spoken words to be seen.
What it does
SignEcho is an app for the PICO 4 VR headset that enables real-time communication between deaf and hearing users. It translates sign language into spoken words and spoken words into text. Deaf users can sign, and SignEcho will voice those signs out loud. Conversely, it will display spoken language as text in the VR space, making the conversation accessible and inclusive.
How I built it
My build process was a tapestry of state-of-the-art APIs and innovative VR technology. I used the hand-tracking capabilities of the PICO Interaction Pack to recognize sign language. Those signs are then converted to spoken language through the linguistic prowess of ChatGPT (GPT-4). To voice these translations, I integrated OpenAI's Audio API, creating a seamless audible stream from signed input.
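The sign-to-speech leg described above boils down to two calls: a GPT-4 chat completion that turns the recognized sign glosses into a fluent sentence, then a text-to-speech request that voices it. A minimal Python sketch, assuming the official `openai` client; the gloss list, prompt wording, and voice choice are my own illustration, not the app's actual code (the real app runs inside the PICO 4 runtime):

```python
from typing import List

# Build the chat messages that ask GPT-4 to turn raw ASL glosses
# (e.g. ["MOM", "EAT", "NO", "ORANGE"]) into one fluent English sentence.
def glosses_to_messages(glosses: List[str], context: str = "") -> list:
    system = (
        "You translate sequences of American Sign Language glosses "
        "into natural spoken English. Reply with one sentence only."
    )
    user = f"Conversation context: {context or 'none'}\nGlosses: {' '.join(glosses)}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

def speak_glosses(glosses: List[str], context: str = "") -> None:
    # Third-party calls kept inside the function; requires OPENAI_API_KEY.
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    chat = client.chat.completions.create(
        model="gpt-4",
        messages=glosses_to_messages(glosses, context),
    )
    sentence = chat.choices[0].message.content
    # Voice the sentence through OpenAI's Audio (text-to-speech) API.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=sentence)
    speech.write_to_file("sign_echo_output.mp3")  # clip played back in the headset
```

Passing the running conversation as context is what lets isolated glosses like "mom, eat, no, orange" come out as a coherent sentence rather than a word list.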
Azure Speech Services was the choice for converting voice to text, ensuring real-time voice recognition. Finally, I harnessed the PICO Sense Pack to display the text using Spatial Anchors in the MR environment, anchoring the transcription to a real-world whiteboard or window.
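On the hearing-to-deaf leg, recognized speech streams in phrase by phrase and is laid out on a text panel pinned to a spatial anchor. A rough Python sketch of that loop, assuming the `azure-cognitiveservices-speech` SDK; the panel-wrapping helper, line widths, and environment variable names are illustrative, and the actual app does this inside Unity rather than Python:

```python
import textwrap

# Wrap a recognized utterance into short lines that fit the text panel
# pinned to a spatial anchor (e.g. on a whiteboard or window), keeping
# only the most recent lines visible.
def wrap_for_panel(utterance: str, chars_per_line: int = 28, max_lines: int = 4) -> list:
    lines = textwrap.wrap(utterance, width=chars_per_line)
    return lines[-max_lines:]

def run_transcription(on_lines) -> None:
    # Requires SPEECH_KEY / SPEECH_REGION; pip install azure-cognitiveservices-speech
    import os
    import azure.cognitiveservices.speech as speechsdk
    config = speechsdk.SpeechConfig(
        subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"]
    )
    recognizer = speechsdk.SpeechRecognizer(speech_config=config)

    def recognized(evt):
        # Fires each time a phrase finalizes; push wrapped lines to the panel.
        on_lines(wrap_for_panel(evt.result.text))

    recognizer.recognized.connect(recognized)
    recognizer.start_continuous_recognition()
```

Continuous recognition (rather than one-shot requests) is what keeps the anchored transcription updating as the hearing person talks.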
Challenges I ran into
One of the main challenges was the complexity of hand pose recognition. The difference between how people express gestures and how the system recognizes them was stark. Given this, I decided to focus on 10 easily recognizable ASL words to ensure system reliability. Additionally, working with spatial APIs was a novel experience for me. Thankfully, the tutorial videos provided by PICO proved to be invaluable, helping me navigate this new terrain.
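The reliability trade-off above maps naturally onto template matching: each supported word is stored as a reference hand pose, and a live pose is accepted only when it is close enough to its nearest template. A toy Python sketch of that idea; the feature vectors, threshold, and word subset are illustrative stand-ins, not the app's actual data:

```python
import math
from typing import Dict, List, Optional

# Each pose is a feature vector, e.g. normalized joint angles from the
# headset's hand-tracking data. Values below are made-up examples.
TEMPLATES: Dict[str, List[float]] = {
    "eat":    [0.9, 0.1, 0.1, 0.8],
    "mom":    [0.2, 0.9, 0.3, 0.1],
    "no":     [0.1, 0.2, 0.9, 0.4],
    "orange": [0.7, 0.7, 0.2, 0.6],
}

def classify_pose(features: List[float], threshold: float = 0.35) -> Optional[str]:
    # Nearest-neighbor match: accept the closest template only if it is
    # within the threshold; otherwise report no sign (None).
    best_word, best_dist = None, float("inf")
    for word, template in TEMPLATES.items():
        dist = math.dist(features, template)
        if dist < best_dist:
            best_word, best_dist = word, dist
    return best_word if best_dist <= threshold else None
```

Restricting the vocabulary to 10 well-separated poses keeps the templates far apart in feature space, which is exactly what makes a simple distance threshold reliable.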
Accomplishments that I am proud of
I'm proud to have turned ideas into action, successfully implementing all the necessary features for SignEcho. This wasn't just a conceptual exercise; it was about creating something tangible and functional. Moreover, with support for more words in the future, the app has the potential to evolve into a truly useful tool for the deaf community.
What I learned
The project deepened my understanding of the importance of inclusive communication and gave me greater empathy for the challenges faced by the deaf community. On the technical front, I discovered a newfound fascination with Mixed Reality and its possibilities, which was an unexpected and enlightening aspect of this journey.
What's next for SignEcho
The potential for expansion of SignEcho beyond the hackathon is vast:
- Increasing the number of ASL words supported by the app will make it more versatile.
- Considering that many in the deaf community may not be fluent in written language, integrating an avatar that interprets transcription into sign language could greatly enhance user experience.
- Facial expressions are integral to sign language. Future integration of face tracking with more advanced headsets could address this aspect.
- Striving for real-time interpretation to facilitate smoother communication.
I am also exploring funding options for this currently bootstrapped personal project.
Acknowledgements
A special note of gratitude goes to my best friend, Nao, a founder of Mindland, whose initial concept of using myopotential sensors to interpret sign language sparked the journey that led to SignEcho.

