Inspiration

Phonetically was born from a simple realization: for many deaf and hard-of-hearing people, the challenge of speech isn't the ability to produce sound, but the lack of an auditory monitor to check it against. We wanted to create a tool that replaces the ear with the eye, turning invisible sound waves into high-speed visual data that builds muscle memory and vocal confidence.

What it does

Phonetically is an AI-powered speech coach that provides a visual feedback loop for those who cannot hear their own pitch or volume.

AI Reference Voices: Uses ElevenLabs to generate crystal-clear, "perfect" pronunciations for users to study.

Real-time Visualization: Translates vocal input into dynamic waveforms and frequency bars, allowing users to "see" their voice in real-time.

Clarity Scoring: Uses the Web Speech API to provide an instant clarity percentage, giving users a tangible metric for improvement.
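The writeup doesn't specify how the clarity percentage is derived from the Web Speech API transcript, but one plausible approach is a normalized edit-distance comparison between the target phrase and what the recognizer heard. A minimal sketch, assuming a Levenshtein-based metric (the function names here are illustrative, not from the project):

```typescript
// Hypothetical clarity metric: normalized Levenshtein similarity between
// the target phrase and the transcript returned by speech recognition.
function levenshtein(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Returns a 0-100 "clarity" percentage for a practice attempt.
export function clarityScore(target: string, transcript: string): number {
  const t = target.trim().toLowerCase();
  const s = transcript.trim().toLowerCase();
  if (t.length === 0) return 0;
  const dist = levenshtein(t, s);
  return Math.max(0, Math.round((1 - dist / Math.max(t.length, s.length)) * 100));
}
```

In the browser, the transcript itself would come from a `SpeechRecognition` result event, with the score fed straight into the UI after each attempt.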

How we built it

We prioritized a "Neo-Vibe" aesthetic to make speech therapy feel like a modern digital interaction rather than a clinical chore.

Framework: Built with Next.js and TypeScript for a fast, reliable structure.

Styling: Used Tailwind CSS for a "glassmorphic" UI—translucent cards over deep cosmic gradients.

Voice Engine: Integrated the ElevenLabs API to provide high-fidelity audio references.

Deployment: Hosted on Vercel with a continuous integration pipeline from GitHub.
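For the voice engine, the ElevenLabs text-to-speech endpoint takes a voice ID in the path and the API key in an `xi-api-key` header. A hedged sketch of how the request might be assembled server-side (the voice ID and model name are placeholders, not the project's actual configuration):

```typescript
// Builds an ElevenLabs text-to-speech request. The model_id shown is one of
// the publicly documented models; the project's real choice isn't specified.
interface TtsRequest {
  url: string;
  options: { method: string; headers: Record<string, string>; body: string };
}

export function buildTtsRequest(voiceId: string, text: string, apiKey: string): TtsRequest {
  return {
    url: `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`,
    options: {
      method: "POST",
      headers: {
        "xi-api-key": apiKey, // must stay server-side, never in the client bundle
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ text, model_id: "eleven_multilingual_v2" }),
    },
  };
}

// Usage (server-side): const { url, options } = buildTtsRequest(voiceId, phrase, key);
// const res = await fetch(url, options); // response body is audio for playback
```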

Challenges we ran into

Building this was a masterclass in "hidden rules." We struggled with the Next.js "use client" directive, as we had to separate our interactive React hooks (like useState) from the server-side architecture. We also faced the classic "404" hurdle, learning that Vercel is a strict "chef" that requires a perfect package.json "recipe" and a specific app/ directory structure to serve the site.
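The client/server split described above boils down to one rule in the Next.js App Router: any file that calls React hooks must opt into the client runtime with the `"use client"` directive as its first statement. A minimal fragment illustrating the pattern (file and component names here are illustrative, not from the project):

```tsx
// app/practice/recorder.tsx — an illustrative client component.
// The page itself can stay a Server Component; only this interactive
// piece needs "use client", because useState runs in the browser.
"use client";

import { useState } from "react";

export default function Recorder() {
  const [recording, setRecording] = useState(false);

  // Browser-only APIs (microphone, Web Speech) are also safe to call
  // from effects and handlers inside a client component like this one.
  return (
    <button onClick={() => setRecording((r) => !r)}>
      {recording ? "Stop" : "Record"}
    </button>
  );
}
```

Keeping the directive at the leaf components, rather than at the page level, preserves server rendering for everything that doesn't need interactivity.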

Accomplishments that we're proud of

We are incredibly proud of the integration between ElevenLabs and the Web Speech API. Creating a seamless flow where a user can listen to an AI-generated voice and immediately see their own attempt visualized on the same screen felt like a breakthrough in accessible technology.

What we learned

We learned that accessibility is more than just a feature; it’s an architecture. We discovered how crucial it is to handle API keys securely using Environment Variables on Vercel. Most importantly, we learned that even a small amount of visual feedback can drastically change how a user perceives their own progress.
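Reading the key from the server environment keeps it out of the client bundle; on Vercel the value lives in the project's Environment Variables panel. A small sketch, assuming the variable is named `ELEVENLABS_API_KEY` (the actual name isn't stated in the writeup):

```typescript
// Fails fast when the deployment is missing its key, instead of sending
// unauthenticated requests at runtime. The env parameter is injectable
// so the lookup logic can be exercised without touching real secrets.
export function getElevenLabsKey(
  env: Record<string, string | undefined> = process.env
): string {
  const key = env.ELEVENLABS_API_KEY;
  if (!key) {
    throw new Error("ELEVENLABS_API_KEY is not set");
  }
  return key;
}
```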

What's next for Phonetically

The journey doesn't stop at visualization. Our roadmap includes:

Haptic Feedback: Integrating phone vibrations that pulse in sync with the user's volume.

Progress Analytics: A dashboard to track "Clarity Gains" over weeks and months.

Expanded Voice Library: Giving users more ElevenLabs voice options to practice different accents and tones.
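The planned haptic feedback could be as simple as mapping a normalized microphone level to a vibration pulse for the browser's Vibration API. A hedged sketch of that mapping (the function and the 200 ms ceiling are assumptions, not settled design):

```typescript
// Hypothetical mapping from a 0-1 microphone volume level to a vibration
// pulse length in milliseconds, clamped so out-of-range input is safe.
export function volumeToPulseMs(volume: number, maxMs = 200): number {
  const clamped = Math.min(1, Math.max(0, volume));
  return Math.round(clamped * maxMs);
}

// In the browser this could drive the Vibration API:
//   if ("vibrate" in navigator) navigator.vibrate(volumeToPulseMs(level));
// (navigator.vibrate is unsupported on iOS Safari, so feature-check first.)
```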

Hardware Track

Inspiration:

Inspired by the idea of autonomous robots that make smart decisions to keep processes optimized.

What it does:

Autonomously follows a track, picks up and transports items, and makes efficient routing decisions to avoid obstacles.

How we built it:

Used an Arduino kit with color, IR, and ultrasonic sensors.

Challenges we ran into:

Difficulty calibrating the IR sensors and tuning the decision logic to choose the best pathway.

Accomplishments we're proud of:

Getting the robot to move quickly and autonomously while tracking colors.

What we learned:

Sensor calibration, time management, and project management.

What's next:

Optimizing for speed and efficiency, plus more testing to ensure consistent runs.
