Inspiration

Deaf and hard-of-hearing users often rely on captions, which can be slow to read, text-heavy, and mentally demanding in live settings. We wanted to explore whether speech could instead be turned into fast, intuitive visual signs, something closer to how people naturally process information.

What it does

Speech-to-Sign Aid (STS) converts live speech audio or auto-generated captions into real-time visual sign-based representations. It detects keywords and phrases, maps them to a limited sign and symbol vocabulary, and displays synchronized visual output for use in classrooms, meetings, and presentations.

How we built it

The system processes incoming speech through an automatic speech recognition (ASR) or caption-ingestion layer. The text is normalized and passed into a lightweight rule-based mapping system that links recognized words and phrases to predefined animated signs or visual symbols. The front end renders these visuals in real time with a focus on clarity, synchronization, and accessibility.
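The normalize-then-map step can be sketched as follows. This is a minimal illustration, not the project's actual code: the vocabulary entries, asset filenames, and the three-token phrase limit are all assumptions made for the example.

```python
import re

# Hypothetical sign vocabulary: normalized word/phrase -> animated-sign asset.
# The entries and filenames here are illustrative assumptions.
SIGN_VOCAB = {
    "hello": "sign_hello.webm",
    "thank you": "sign_thank_you.webm",
    "question": "sign_question.webm",
}

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so caption text matches vocabulary keys."""
    return re.sub(r"[^a-z\s]", "", text.lower()).strip()

def map_to_signs(caption: str) -> list:
    """Scan the caption left to right, greedily matching the longest
    known phrase (up to an assumed limit of 3 tokens) at each position."""
    tokens = normalize(caption).split()
    signs = []
    i = 0
    while i < len(tokens):
        matched = False
        for n in range(min(3, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n])
            if phrase in SIGN_VOCAB:
                signs.append(SIGN_VOCAB[phrase])
                i += n
                matched = True
                break
        if not matched:
            i += 1  # unsupported word: skip here; fallback handling is separate
    return signs

# Example: map_to_signs("Hello, thank you!") yields the hello and thank-you assets.
```

Matching the longest phrase first matters: without it, "thank you" would be split into two unmapped single words instead of one sign.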

Challenges we ran into

Full sign-language translation is extremely complex and slow, so we had to constrain the linguistic scope while still preserving meaning. We also had to balance low latency with reliability and design fallback behaviors when unsupported words appeared.
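One simple fallback policy of the kind described above is to degrade to on-screen text whenever a word has no sign. A minimal sketch, assuming a token-level vocabulary (the `VOCAB` contents and asset names are hypothetical):

```python
# Hypothetical token -> sign-asset mapping for the fallback example.
VOCAB = {"hello": "sign_hello.webm", "meeting": "sign_meeting.webm"}

def render_plan(tokens: list) -> list:
    """Build a display plan: ('sign', asset) for mapped tokens,
    ('text', word) as a graceful fallback for unsupported ones."""
    plan = []
    for tok in tokens:
        if tok in VOCAB:
            plan.append(("sign", VOCAB[tok]))
        else:
            plan.append(("text", tok))  # fall back to plain caption text
    return plan
```

Showing the raw word keeps the output stream unbroken and low-latency, at the cost of mixing modalities on screen.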

Accomplishments that we're proud of

We built a working real-time prototype that produces meaningful visual output from live speech with low latency. Even with a limited vocabulary, the system demonstrates that speech-to-visual signing is technically feasible and usable in real environments.

What we learned

We learned that accessibility tools don’t need full linguistic coverage to be useful. Speed, clarity, and consistency matter more for real-time communication. We also learned how to design systems that degrade gracefully when perfect translation isn’t possible.

What's next for STS

We plan to expand the vocabulary, improve contextual understanding, and move toward more advanced sign-language synthesis. This MVP serves as a scalable foundation for richer models, broader language coverage, and deeper user testing.
