Inspiration

We were inspired by how powerful communication is. In a world where everything is fighting for attention, being able to express yourself clearly is not just a convenience; it's access, confidence, and independence.

We kept thinking about hearing-impaired people who use sign language in everyday situations: a café, a campus office, a bus stop, or a concert. Too often, the world expects them to adjust instead of meeting them halfway. That gap pushed us to build SignMeFy as a bridge between sign language and spoken communication, without always needing a person to translate.

What it does

SignMeFy is a wearable and Android app that bridges communication for hearing-impaired people who use sign language daily. It helps its users communicate with non-signing communities in both everyday conversations and loud public spaces where accessibility is usually missing.

How we built it

The wearable concept is our “front door” for communication: the signer signs to the camera, the system interprets it, and speech is produced so the other person understands instantly.
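
As a rough sketch of what that pipeline could look like on Android: the SignRecognizer interface below is our hypothetical stand-in for an on-device sign model (in a real build it would wrap something like a hand-landmark classifier), while TextToSpeech is the real Android API.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.speech.tts.TextToSpeech
import java.util.Locale

// Hypothetical recognizer: maps a camera frame to a recognized phrase.
// A real build would back this with an on-device sign-recognition model.
interface SignRecognizer {
    fun recognize(frame: Bitmap): String? // null when there is no confident match
}

// Sign -> Text -> Speech: each camera frame is interpreted, and a confident
// match is spoken aloud through Android's TextToSpeech engine.
class SignToSpeechPipeline(
    context: Context,
    private val recognizer: SignRecognizer,
) : TextToSpeech.OnInitListener {

    private val tts = TextToSpeech(context, this)
    private var ready = false

    override fun onInit(status: Int) {
        ready = status == TextToSpeech.SUCCESS
        if (ready) tts.setLanguage(Locale.US)
    }

    // Called for each camera frame; speaks the recognized phrase aloud.
    fun onFrame(frame: Bitmap) {
        val phrase = recognizer.recognize(frame) ?: return
        if (ready) tts.speak(phrase, TextToSpeech.QUEUE_FLUSH, null, "sign-utterance")
    }

    fun shutdown() = tts.shutdown()
}
```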

The app supports the second major use case: in loud or chaotic environments like concerts or events, the app captures surrounding audio and converts it into live text, and can also display a visual sign output so the user stays included.
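
A minimal sketch of that live-caption loop, assuming Android's built-in SpeechRecognizer (a real API; the continuous-restart pattern and the caption callback shape are our choices, not a fixed design):

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Streams partial speech-recognition results to a caption callback.
// Assumes the RECORD_AUDIO permission has already been granted.
class LiveCaptioner(context: Context, private val onCaption: (String) -> Unit) {

    private val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                 RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true)
    }

    private val recognizer = SpeechRecognizer.createSpeechRecognizer(context).apply {
        setRecognitionListener(object : RecognitionListener {
            override fun onPartialResults(partialResults: Bundle?) = emit(partialResults)
            override fun onResults(results: Bundle?) {
                emit(results)
                startListening(intent) // restart so captions stay continuous
            }
            // Naive: a real app would back off instead of retrying every error.
            override fun onError(error: Int) = startListening(intent)
            // Remaining callbacks required by the interface, unused here:
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })
    }

    fun start() = recognizer.startListening(intent)
    fun stop() = recognizer.destroy()

    // Pull the top hypothesis out of a results bundle and hand it to the UI.
    private fun emit(bundle: Bundle?) {
        bundle?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
            ?.firstOrNull()
            ?.let(onCaption)
    }
}
```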

We built our prototype by mapping the key user journeys, creating a clean onboarding flow, and designing two simple modes: Sign → Text → Speech and Live Captions. We focused on usability: big buttons, minimal steps, and fast switching between modes.
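
To keep that switching to a single step, the two modes can be modeled as a small sealed type with a one-tap toggle; this is our illustrative modeling, not a fixed design:

```kotlin
// The two core modes as a sealed type, so UI state and mode switching
// stay exhaustively checked at compile time.
sealed class Mode {
    object SignToSpeech : Mode()   // camera in, spoken audio out
    object LiveCaptions : Mode()   // microphone in, text (and sign visuals) out
}

class ModeController(private val onModeChanged: (Mode) -> Unit) {
    var current: Mode = Mode.SignToSpeech
        private set

    // One big button toggles between the two modes.
    fun toggle() {
        current = when (current) {
            Mode.SignToSpeech -> Mode.LiveCaptions
            Mode.LiveCaptions -> Mode.SignToSpeech
        }
        onModeChanged(current)
    }
}
```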

Challenges we ran into

Actually building the code was hard because the features don't connect easily. Making sign → text → speech and sound → text → sign work smoothly is complicated, and some of the tools cost money. We also struggled with wiring the UI screens to the real functionality so everything works together as one app.

Accomplishments that we're proud of

We're proud that we worked together as a team and, in some form, brought SignMeFy to life as an experience. We finished the full UI flow from onboarding to the core modes, and we designed it with real accessibility in mind: big, clear buttons, simple navigation, quick phrases, and privacy-first controls. We're also proud of how we thought through real-life situations like noisy events and low-visibility environments, and built a fallback approach so the experience doesn't break when conditions aren't perfect.

What we learned

We learned that building accessibility tech isn't as easy as we thought: it requires much more thought, and validating whether each idea actually works for its intended user. We also learned what it takes to make the different parts work together smoothly, like making sign, speech, and text translate one-to-one.

Finally, we learned that good UI is only half the work; connecting the UI to real functionality is where most of the complexity lives.

What's next for SignMeFy

Next, we want to move from a digital prototype to a more functional build by testing with hearing-impaired users and sign language communities to validate our flows, language, and comfort.

We also want to improve translation accuracy by expanding sign support, improving low-light performance, and adding stronger "hands in frame" guidance. For the live caption mode, we plan to add smarter keyword alerts (like emergency or important announcements), offline options, and better personalization (preferred phrases, voice tone, and language settings).
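
A sketch of how a keyword alert could sit on top of the caption stream (the keyword list, callback shape, and class name are illustrative, not implemented yet):

```kotlin
// Illustrative keyword-alert filter for live captions: scans each caption for
// alert words and fires a callback so the app can vibrate or flash the screen.
class KeywordAlerter(
    keywords: List<String> = listOf("emergency", "evacuate", "fire", "last call"),
    private val onAlert: (matched: String, caption: String) -> Unit,
) {
    // Whole-word, case-insensitive patterns, built once up front.
    private val patterns = keywords.map {
        Regex("\\b${Regex.escape(it)}\\b", RegexOption.IGNORE_CASE)
    }

    fun onCaption(caption: String) {
        for (pattern in patterns) {
            pattern.find(caption)?.let {
                onAlert(it.value, caption)
                return
            }
        }
    }
}
```

This could plug into the live-caption sketch above by passing `alerter::onCaption` as the caption callback.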

Our long-term goal is to make SignMeFy reliable enough to support everyday moments at school, at work, in services, and at public events, so communication feels natural and accessible anywhere.
