Inspiration
Language doesn’t vanish. It sneaks away.
First it’s the kid answering in English.
Then it’s the parent simplifying sentences.
Then it’s the grandparent nodding on FaceTime, smiling, not understanding.
We noticed something unfair: kids are expected to learn new languages fast, while somehow holding onto old ones with no help. And the tools we give them are… flashcards. Apps. Tiny cartoon owls yelling about streaks.
None of that teaches you how to talk.
So we asked a questionable but important question: What if the thing helping your kid learn a language was fuzzy and lived on their bed?
What it does
LingoBunns is a stuffed animal that speaks back. Politely. Patiently. In multiple languages.
Kids talk to it. That’s it. No buttons. No scores. No “incorrect answer” noises. The bunny listens, replies, tells stories, and slowly starts using more complex language as the kid keeps up.
Parents do the boring but important stuff on a website: pick the language, set safety rules, control volume so the bunny doesn’t start philosophizing at 2 a.m.
It works for learning a new language, keeping a family language alive, or both. It’s Duolingo, but it doesn’t guilt-trip you.
How we built it
LingoBunns is built as a voice-first system with a clear separation between the child-facing hardware layer and the parent-facing software layer.
The physical device is powered by an Arduino, which drives the servomotor that moves BunBun’s head while it speaks. Audio captured by the device is sent to the backend for processing.
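For the curious, here’s a rough sketch of what the head-movement side looks like with pyFirmata. The serial port and pin number are placeholders, not our exact wiring:

```python
import time

from pyfirmata import Arduino

board = Arduino("/dev/ttyACM0")   # serial port is a placeholder
servo = board.get_pin("d:9:s")    # digital pin 9 in servo mode (assumed pin)

def nod_while_speaking(duration_s: float) -> None:
    """Sweep the head servo back and forth for the length of an utterance."""
    end = time.time() + duration_s
    while time.time() < end:
        servo.write(70)           # tilt head one way
        time.sleep(0.4)
        servo.write(110)          # tilt back the other way
        time.sleep(0.4)
    servo.write(90)               # rest at neutral
```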
On the backend, Python manages the audio pipeline and request flow. Incoming speech is transcribed and passed to Gemini, which generates context-aware responses from the selected language, difficulty level, and prior interaction history. The system dynamically adjusts vocabulary, sentence structure, and response length based on observed speaking patterns rather than fixed levels.
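In code, that step looks roughly like the sketch below. It’s simplified and written against the google-generativeai SDK; the model name, `difficulty_hint`, and `history` are stand-ins for our actual session state:

```python
import google.generativeai as genai

genai.configure(api_key="GEMINI_API_KEY")          # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption

def bunny_reply(transcript: str, language: str,
                difficulty_hint: str, history: list[str]) -> str:
    """Build a child-safe prompt from session state and ask Gemini for BunBun's reply."""
    prompt = (
        f"You are BunBun, a gentle stuffed bunny chatting with a child.\n"
        f"Reply only in {language}. Keep it {difficulty_hint}: short sentences, "
        "no 'wrong answer' corrections, just model the right phrasing back.\n"
        "Recent conversation:\n" + "\n".join(history[-6:]) + "\n"
        f"Child said: {transcript}\nBunBun:"
    )
    return model.generate_content(prompt).text
```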
Generated responses are converted to speech using ElevenLabs, allowing us to deliver natural, expressive voice output that sounds consistent and engaging for children.
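A minimal version of the speech step, sketched against ElevenLabs’ public text-to-speech REST endpoint (the voice ID and API key are placeholders):

```python
import requests

def speak(text: str, voice_id: str = "BUNBUN_VOICE_ID") -> bytes:
    """Turn a generated reply into audio with one consistent bunny voice."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": "ELEVENLABS_API_KEY"},        # placeholder key
        json={"text": text, "model_id": "eleven_multilingual_v2"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.content   # mp3 bytes, played back through the bunny's speaker
```

The multilingual model is what lets the bunny keep one recognizable voice even as the parent switches languages.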
The parent-facing dashboard is built with React and JavaScript, providing an interface for configuring language settings, content restrictions, schedules, and volume limits. Parents never interact with the device directly; all controls are handled through the web interface.
Firebase is used for authentication, data storage, and syncing parent settings with the backend. It stores configuration data, interaction metadata, and progress indicators while maintaining secure access control.
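Roughly how the backend picks up what the dashboard wrote, assuming the firebase-admin SDK and a Firestore collection we’re calling "settings" here for illustration:

```python
import firebase_admin
from firebase_admin import credentials, firestore

cred = credentials.Certificate("serviceAccount.json")  # placeholder credentials file
firebase_admin.initialize_app(cred)
db = firestore.client()

def load_parent_settings(family_id: str) -> dict:
    """Fetch what the dashboard wrote: language, schedule, volume cap."""
    doc = db.collection("settings").document(family_id).get()
    # Defaults are illustrative, not our real schema.
    return doc.to_dict() or {
        "language": "en",
        "max_volume": 0.6,
        "quiet_hours": ["21:00", "07:00"],
    }
```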
Figma was used throughout development to prototype and iterate on the parent dashboard and overall system flow before implementation.
Challenges we ran into
Everything. Genuinely everything.
We were learning the entire stack at the same time, from APIs and databases to hardware and voice systems, which meant most problems came with a side quest of “figure out what this even is first.”
The physical build was its own adventure. Creating the hardware meant cutting open a very cute stuffed animal, fitting electronics inside it, and hoping we could put it back together without permanently traumatizing the bunny. Balancing hardware constraints with software expectations was harder than expected, especially when something that worked in code didn’t behave the same way in the real world.
In short, every part of LingoBunns was a challenge, but learning how to push through that chaos was also what made the project possible.
Accomplishments that we're proud of
The biggest accomplishment is that it actually works. LingoBunns successfully handles voice input, generates responses, and speaks back, all while staying screen-free for the child and configurable for parents. We also managed to integrate hardware, AI services, and a web dashboard into one system. And yes, it is genuinely very cute, which helped us realize how important design and emotional appeal are in educational tools.
What we learned
We learned pretty much everything along the way. This project forced us to work with APIs, GitHub collaboration, ElevenLabs, MongoDB, pyFirmata, and Copilot in a real, not-theory way. We learned how to connect frontend, backend, and hardware into one pipeline, how to debug when nothing works, and even how to physically control components like a servomotor. More than anything, we learned that building something end-to-end teaches you faster than any individual tutorial ever could.
What's next for LingoBunns
Next, we want to make LingoBunns better at listening and smarter about responding. That means improving speech recognition for younger and multilingual voices, expanding language support, and adding more culturally specific stories and conversations. We also want to test LingoBunns with real families to see how it fits into daily routines, not just demos.
On the technical side, we plan to refine the hardware for better audio quality, more reliable interactions, and a sturdier physical build. We’ll also continue improving the parent dashboard so progress insights feel informative rather than like homework.
Beyond the product itself, we’re exploring an aesthetic-oriented marketing approach. Because LingoBunns is something children bond with physically, design matters as much as functionality. We’re interested in potential partnerships with brands known for emotional design and collectability, such as Jellycat, to position LingoBunns as both a learning tool and a cherished companion.