Inspiration
When we came to HarvardHacks, we didn’t just want to build something impressive; we wanted to build something that mattered. During the welcome presentation, we heard about Project Prakash, an initiative that helps restore sight to children born with treatable blindness and studies how their brains learn to see for the first time. That story hit us hard. It reminded us that technology can do more than automate: it can give people back parts of their world.
That’s when we decided to focus on accessibility, to create something that would make digital experiences more inclusive for people who are blind, neurodivergent, or missing limbs.
What it does
SONORA is a voice-first e-commerce platform that lets anyone shop online just by speaking naturally. It’s built for people with visual, motor, or cognitive disabilities, but it feels magical for everyone.
Users can simply say things like “Find me sneakers under $80” or “Add the red jacket to my cart,” and SONORA, powered by Google Gemini and ElevenLabs, understands, responds, and completes the purchase entirely hands-free. With Stripe for payments and real product data, SONORA turns online shopping into a natural, conversational experience.
How we built it
We built SONORA with Next.js and TypeScript for a clean, scalable base:

- Google Gemini handles conversation and intent understanding.
- ElevenLabs brings natural text-to-speech responses.
- The Web Speech API powers real-time speech recognition.
- Stripe manages secure, accessible checkout.
We followed WCAG 2.1 accessibility guidelines and used React Context for global cart management to make the entire experience fast, stable, and inclusive.
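As a rough illustration of the global cart management mentioned above, here is a minimal sketch of the state logic that a React Context could wrap (via `useReducer`). The item shape and action names are hypothetical, not SONORA's actual types:

```typescript
// Hypothetical cart item shape; SONORA's real fields may differ.
interface CartItem {
  id: string;
  name: string;
  price: number;
  quantity: number;
}

type CartAction =
  | { type: "add"; item: Omit<CartItem, "quantity"> }
  | { type: "remove"; id: string }
  | { type: "clear" };

// Pure reducer: easy to test, and pluggable into a React Context provider
// so any component (or the voice layer) can dispatch cart updates.
function cartReducer(state: CartItem[], action: CartAction): CartItem[] {
  switch (action.type) {
    case "add": {
      const existing = state.find((i) => i.id === action.item.id);
      if (existing) {
        // Saying "add the red jacket" twice bumps quantity instead of duplicating.
        return state.map((i) =>
          i.id === action.item.id ? { ...i, quantity: i.quantity + 1 } : i
        );
      }
      return [...state, { ...action.item, quantity: 1 }];
    }
    case "remove":
      return state.filter((i) => i.id !== action.id);
    case "clear":
      return [];
  }
}
```

Keeping the reducer pure means the same logic serves both click-based and voice-based interactions, which helps keep the two input modes in sync.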
Challenges we ran into
Making voice interaction feel human was our hardest challenge. We had to sync speech recognition, AI processing, and text-to-speech without awkward pauses or misfires. Managing conversation context, like remembering what’s in the cart, was tricky too, especially in a voice-only interface.
We built a callback system to make the dialogue flow smoothly and created an action marker system that extracts shopping intents from natural speech without breaking conversation flow.
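One plausible way to implement such an action marker system (the exact format SONORA uses isn't shown here, so this marker syntax is an assumption) is to prompt the LLM to embed inline markers in its reply, then strip them out before text-to-speech while dispatching the extracted actions:

```typescript
// Assumed marker format the model is prompted to emit: [ACTION:name:{"json":"payload"}]
// e.g. "Done! [ACTION:add_to_cart:{"id":"42"}] Anything else?"
interface ShoppingAction {
  name: string;
  payload: Record<string, unknown>;
}

const MARKER = /\[ACTION:(\w+):(\{.*?\})\]/g;

// Split an LLM reply into clean speech text (sent to TTS) and
// structured actions (dispatched to the cart/checkout layer).
function extractActions(reply: string): { speech: string; actions: ShoppingAction[] } {
  const actions: ShoppingAction[] = [];
  const speech = reply
    .replace(MARKER, (_, name: string, json: string) => {
      try {
        actions.push({ name, payload: JSON.parse(json) });
      } catch {
        // Malformed payload: drop the marker but keep the conversation flowing.
      }
      return "";
    })
    .replace(/\s{2,}/g, " ")
    .trim();
  return { speech, actions };
}
```

Because the markers are removed before synthesis, the user only ever hears natural speech, while the app still receives machine-readable intents from the same reply.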
Accomplishments that we're proud of
Our proudest moment was seeing SONORA work, hearing “add that to my cart” and watching it happen, no clicks needed. We realized this could genuinely change lives for millions who find online shopping inaccessible.
What we learned
We learned how powerful AI can be when systems truly work together. Gemini understands intent, ElevenLabs speaks like a human, and Web Speech listens clearly, even through accents.
Most importantly, we learned that building for accessibility first makes the experience better for everyone.
What's next for Sonora
We want to grow SONORA into a global voice-commerce platform that connects users to any store or service (Amazon, local shops, restaurants) through natural conversation.
Next, we’ll:

- Add multi-language support
- Partner with accessibility organizations for real-world testing
- Expand into multimodal AI, including voice, gestures, and vision
Our goal is simple: make SONORA the accessibility layer for the digital economy, so everyone, regardless of ability, can shop, browse, and live online without barriers.
Built With
- elevenlabs
- gemini
- next.js
- openai
- react
- stripe
- tailwind
- typescript