Inspiration

Deaf and hard-of-hearing (HOH) people are more likely to die in fires because they may not hear a smoke alarm going off. People with disabilities experience the world in a completely different way from able-bodied members of society. The potential to improve digital accessibility with new and evolving technology is massive, and with the accessibility gap widening, we need to use it for good.

What it does

Echo is an Augmented Reality software application designed to visually inform hard-of-hearing people of sounds in their day-to-day lives. It enhances everyday experiences through live captioning and real-time alerts for important sounds like alarms, sirens, and voices in distress. Echo uses AI to identify the type of sound - an emergency alert, direct conversation, a distress sound like a baby crying, nearby conversation, etc. With our tutorial, users learn what each symbol means and how to navigate the interface seamlessly. To make conversations even more accessible, Echo automatically recognizes the voice of a speaker who has interacted with the user multiple times, saving that person’s photo to use in place of the more generic “person speaking” icon. Lastly, users can adjust the appearance of text captions as well as the types of sounds they are alerted to, and can save these settings as presets for situations that call for a specific “hearing” profile.
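As a rough sketch of how the sound-identification step could work, the snippet below classifies a short audio buffer with a pretrained audio tagger and maps its label onto Echo's alert categories. We assume Google's open-source YAMNet model here purely for illustration; the alert table and the threshold-free argmax are simplifications, not Echo's actual implementation (which exists today as a Figma prototype).

```python
import csv
import numpy as np
import tensorflow_hub as hub

# Load the open-source YAMNet audio event classifier (an assumption for
# this sketch; a production Echo could use any sound classifier).
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

# Map a handful of YAMNet's 521 AudioSet class names to Echo alert types.
# Illustrative, not exhaustive.
ALERTS = {
    "Siren": "emergency",
    "Smoke detector, smoke alarm": "emergency",
    "Fire alarm": "emergency",
    "Vehicle horn, car horn, honking": "emergency",
    "Baby cry, infant cry": "distress",
    "Speech": "conversation",
}

# YAMNet ships its class map as a CSV with a display_name column.
with open(yamnet.class_map_path().numpy().decode("utf-8")) as f:
    CLASS_NAMES = [row["display_name"] for row in csv.DictReader(f)]

def classify(waveform: np.ndarray) -> str | None:
    """waveform: mono float32 samples at 16 kHz, roughly in [-1, 1]."""
    scores, _, _ = yamnet(waveform)  # scores: [frames, 521]
    top = CLASS_NAMES[int(scores.numpy().mean(axis=0).argmax())]
    return ALERTS.get(top)           # None -> nothing to display
```

In a real pipeline, scores would also be smoothed over time and gated by a confidence threshold before an icon appears, so brief background noises don't trigger alerts.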

How we built it

UX Research Process

Echo is software that lets hard-of-hearing users move through life with the same audio cues as a hearing person.

User research surveys: To assess which sounds are relevant enough to focus on, we surveyed people without hearing impairments and asked them to think back on common situations, like being at a party, on a busy street, and in a lecture. By having them rank the types of sounds they focused on, we could determine which sounds were important enough to surface for an HOH person. We also played them two clips, one recorded in a serene park and the other in Times Square, and asked them to list the sounds they focused on.

Affinity Mapping: We took all the survey responses and consolidated them into an affinity map to categorize the types of sounds. This is how we figured out which sounds people pay the most attention to: emergency sounds like sirens and alarms, and conversation.

User Interview: We interviewed a hard-of-hearing person about their experiences and how they would benefit from such software. They gave us insights we had overlooked, such as the need to indicate when a person behind them is talking, and the value of real-time captioning in situations where even their hearing aids fail them.

Competitive Analysis: We looked at similar products already on the market and assessed the features Echo offers that they don’t. We found that no competitor came close to the breadth of features and user-friendliness we envisioned for Echo.

Sketching and Wireframing

After analyzing our survey results and interview answers, we brainstormed screens for what our software could look like. This process took quite some time, since imagining an AR design was more complex than anticipated. Once our low-fi sketches had solidified what we wanted the product to focus on, we created mid-fi screens in Figma to form a clearer vision. Throughout, we iterated on the same screens to ensure each one was intuitive and accessible, in line with our mission. We played with different colors and fonts to figure out what worked best.

Building UI elements

We knew from our user interview that the line between informative and distracting is very fine, and we wanted our UI elements to call attention to important sounds without pulling the user out of their everyday life. We settled on universal, recognizable icons in a semitransparent finish, easily understood at a glance out of the corner of one’s eye. To add dynamic movement to the UI (letting the user know that the sound didn’t just happen once, but is ongoing!), we made the icons pulse in place. The framing of icons was also important to us because users need to know where sounds are coming from, especially when the source is something they cannot see, like a person standing behind them or an approaching car. For these sounds, we created a rectangular frame that works like a metaphorical map: sounds coming from the user’s left-hand side appear on the left edge of the frame, and so on. For sounds from visible sources, like a baby the user is looking directly at, a circular “forcefield” appears around the sound source to add extra context.
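To make the “metaphorical map” concrete, here is a minimal sketch of how a sound’s direction of arrival could be projected onto the edge of the rectangular frame. The frame size, coordinate conventions, and the edge_position helper are illustrative assumptions, not our actual implementation (which is a Figma prototype).

```python
import math

FRAME_W, FRAME_H = 1920, 1080  # assumed display resolution

def edge_position(azimuth_deg: float) -> tuple[float, float]:
    """Return (x, y) on the frame border for an off-screen sound source.

    Azimuth is measured clockwise from straight ahead: 0 = ahead
    (top-center of the frame), 90 = right, 180 = behind (bottom-center),
    270 = left.
    """
    theta = math.radians(azimuth_deg)
    # Unit direction in screen coordinates (x grows right, y grows down).
    dx, dy = math.sin(theta), -math.cos(theta)

    cx, cy = FRAME_W / 2, FRAME_H / 2
    # Stretch the ray from the center until it hits the nearest edge.
    scale = min(
        cx / abs(dx) if dx else math.inf,
        cy / abs(dy) if dy else math.inf,
    )
    return cx + dx * scale, cy + dy * scale

# A siren on the user's left (270 degrees) pins its icon to the left edge:
print(edge_position(270))  # approximately (0.0, 540.0)
```

The same geometry works for any display resolution; the returned edge point is where the pulsing icon would be anchored.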

Challenges we ran into

Prototyping was challenging due to our unfamiliarity with using videos in Figma. Although it took a significant amount of time to figure out, we chose this approach to fully showcase our project despite the steep learning curve. Another challenge was time management: rather than starting our UX research on Friday night, we didn’t begin until Saturday. Before we could start wireframing, we had to wait for survey responses to come in and finish our interview with an HOH individual, to ensure that our product’s solution was grounded in real user needs and experiences. The resulting delays put pressure on the later stages of our development.

Accomplishments that we're proud of

We’re really proud of our idea and of the UX research process we used to flesh it out and make sure it would be useful to people. We are also proud of the animations and dynamic movements in our final app; they are the result of hours of troubleshooting and problem-solving, using obscure workarounds to overcome the limits of Figma. We believe Echo could transform the way HOH people interact with the world, and would be a great tool to keep them safe and informed, especially in emergencies.

What we learned

Through our research, we learned much more about the problems faced by the HOH community. Our interview was extremely insightful: we realized the challenges extend beyond missing spoken words to safety and quality of life. We also picked up a ton of technical skills, like working with Figma and prototyping, and made difficult elements like videos and overlays work in the end. Ctrl+Shift+R is a godsend!

What's next for Echo

Most existing solutions focus only on direct communication, overlooking the environmental awareness that provides context and safety day to day. Echo handles this problem right now, but we would like to add integrations for many more types of sounds, along with different indicators for varying levels of danger.

We plan to integrate a more sophisticated AI that learns and adapts to each user's specific environment and needs over time, becoming more accurate at distinguishing important sounds from background noise. We also want to explore partnerships with emergency service providers to enable direct alerts during critical situations, and to expand compatibility across AR glasses manufacturers to make Echo accessible to as many people as possible.
