Here's the link to our video demo: https://drive.google.com/drive/folders/15g27bYYofGZkDCVtmbDxK6YGNslBqah3?usp=sharing
Inspiration
We surveyed students here at KCL before writing a single line of code. The results were stark: 80% study alone most or all of the time, 70% have wished someone was there, and average satisfaction with current study tools sits at 3.6 out of 5. The majority already use AI in their workflow. Something is still missing.
The missing thing isn't smarter answers. It's presence.
We also knew from our own experience (one of us runs a YouTube channel teaching design and engineering to nearly 3,000 subscribers and tutors maths to A-level students) that the single biggest factor in whether someone understands a concept is whether they've had to explain it out loud.
Not read it. Not watch it. Teach it. That's the Feynman Technique, and 100% of our surveyed students already do it instinctively without realising it.
Nobody had built a tool around that insight. So we built My Stardust.
What it does
My Stardust is a personalised AI study companion cloned from your voice and your face. It starts as a blank slate: it knows nothing, and learns everything you teach it. Then it tests you on what you taught it.
The core loop is three stages:
You Teach. The student explains their lecture notes out loud to their Stardust. The ElevenLabs conversational AI agent, running with the student's own cloned voice, listens and calls add_knowledge(), storing everything the student teaches it. Explaining forces active recall and synthesis, which is how real understanding is built.
Stardust Tests You. When the student says "quiz me on programming," the agent calls question_user_recall(topic="programming"). The app retrieves everything stored about that topic, sends it to Gemini with a prompt to generate a challenging question, and the agent speaks it back in the student's own voice. When the student answers, check_user_recall_answer() fires — Gemini compares the answer against the knowledge base and returns a verdict. Correct or incorrect, the 3D Stardust character reacts visually.
Understanding Becomes Visible. When the student needs to see a concept, they can ask for a whiteboard explanation. The AI generates a live visual diagram on an interactive, draggable, resizable canvas overlay. When they want to go deeper, they can ask for it to become a game — the AI generates an interactive mini-experience directly in the canvas. "Could you explain the software development lifecycle on a whiteboard? Actually, could you make it interactive like a game?" — that's a real prompt we demoed.
How we built it
ElevenLabs — Voice Cloning & Conversational Agent (MLH ElevenLabs Track)
ElevenLabs is the heart of the experience. During onboarding, the user records a voice sample. We use the ElevenLabs voice cloning API to generate a custom voice model tied to the user's profile. We then use the Agent Duplicate API to create a personal conversational AI agent that speaks in that cloned voice throughout all interactions.
The agent is configured with four client-side tools: add_knowledge, retrieve_knowledge, question_user_recall, and check_user_recall_answer. These are called by the ElevenLabs agent at runtime and handled by our Electron app, which orchestrates the full pipeline between ElevenLabs and Gemini.
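The Electron side of this pipeline can be sketched as a dispatcher that receives a tool call from the agent and routes it to a handler. This is a minimal, illustrative version: the tool names match the four above, but the in-memory store, handler signatures, and naive answer check stand in for the real persistence layer and the Gemini round-trip.

```typescript
// Hypothetical in-memory knowledge store standing in for the app's real storage.
const knowledgeBase: Record<string, string[]> = {};

// Each handler mirrors one of the four client-side tools the agent can call.
const toolHandlers: Record<string, (params: any) => string> = {
  add_knowledge: ({ topic, fact }) => {
    (knowledgeBase[topic] ??= []).push(fact);
    return `Stored under "${topic}".`;
  },
  retrieve_knowledge: ({ topic }) =>
    (knowledgeBase[topic] ?? []).join("\n") || "Nothing stored yet.",
  question_user_recall: ({ topic }) =>
    // In the real app this payload is sent to Gemini to generate a question.
    `QUIZ: ${(knowledgeBase[topic] ?? []).join(" | ")}`,
  check_user_recall_answer: ({ topic, answer }) =>
    // The real comparison happens in Gemini; here, a naive substring check.
    (knowledgeBase[topic] ?? []).some((fact) => fact.includes(answer))
      ? "Correct answer"
      : "Incorrect answer",
};

// The Electron app catches a tool call from the ElevenLabs agent and dispatches it.
function handleToolCall(name: string, params: unknown): string {
  const handler = toolHandlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(params);
}
```

The key design point is that the agent only ever sees tool names and string results; everything else (storage, Gemini calls) is the app's concern.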
The system prompt instructs the agent to behave as a warm, familiar companion: never fabricating facts, always calling retrieve_knowledge before answering personal questions, and celebrating correct answers while gently correcting wrong ones.
The 3D Stardust character's particle vibration and glow intensity are mapped directly to the ElevenLabs audio amplitude in real time, so the hologram pulses when Stardust speaks.
Google Gemini — Cloud Reasoning Layer (MLH Google AI Studio Track)
Gemini powers all the reasoning that requires more than retrieval. Specifically:
Quiz generation: When question_user_recall fires, retrieved knowledge is sent to Gemini with the prompt: "Based on these notes, generate one challenging short-answer question." Gemini returns the question, which the ElevenLabs agent speaks.
Answer evaluation: When check_user_recall_answer fires, both the student's answer and the correct retrieved knowledge are sent to Gemini for comparison.
It returns either "Correct answer: [explanation]" or "Incorrect answer: [explanation]" — which the agent reads back in the student's own voice.
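The evaluation step comes down to building a comparison prompt and parsing Gemini's verdict string so the UI can react. A simplified sketch; the prompt wording and function names are illustrative, but the two verdict shapes match the format above:

```typescript
// Build the comparison prompt sent to Gemini (wording is illustrative).
function buildEvaluationPrompt(knowledge: string, answer: string): string {
  return [
    "You are grading a short-answer quiz response.",
    `Reference notes: ${knowledge}`,
    `Student answer: ${answer}`,
    'Reply with exactly "Correct answer: <explanation>" or "Incorrect answer: <explanation>".',
  ].join("\n");
}

// Parse the verdict out of Gemini's reply so the character can react visually.
function parseVerdict(reply: string): { correct: boolean; explanation: string } {
  const match = reply.match(/^(Correct|Incorrect) answer:\s*(.*)$/s);
  if (!match) throw new Error(`Unexpected verdict format: ${reply}`);
  return { correct: match[1] === "Correct", explanation: match[2] };
}
```

Constraining the model to a fixed verdict prefix keeps the parsing trivial and makes the correct/incorrect branch deterministic on the app side.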
Visual and interactive generation: The generate_visual and generate_code_exercise tools use Gemini to generate whiteboard diagrams and interactive learning experiences on demand. The AI is aware that generative experiences take a moment to build — it says "once the experience is ready" rather than exposing any technical process to the user.
Gemini's long context window holds the full session state, meaning nothing resets mid-conversation.
Three.js — The 3D Stardust Character
The Stardust character is a 3D point cloud generated from the user's headshot during onboarding. We use morph target animations to transition between states — idle, thinking, speaking, excited (correct answer), confused (wrong answer). The gear morph works by extracting vertex positions from a base model to create a new gearPositions morph target array in useMemo, which allows smooth transitions between character states. The character reacts to every moment of the interaction. When the student gets a quiz question right, particles burst outward; on a wrong answer, the form collapses inward. The visual state system makes the companion feel alive.
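At its core, a morph transition is a per-vertex linear blend between two position arrays, driven by an influence value each frame. A minimal sketch of that blend, written independently of Three.js (in the real app Three.js performs the equivalent blend when you animate mesh.morphTargetInfluences):

```typescript
// Linearly blend base vertex positions toward a morph target (e.g. gearPositions).
// t = 0 gives the base shape, t = 1 the fully morphed shape.
function blendPositions(
  base: Float32Array,
  target: Float32Array,
  t: number,
): Float32Array {
  const out = new Float32Array(base.length);
  for (let i = 0; i < base.length; i++) {
    out[i] = base[i] + (target[i] - base[i]) * t;
  }
  return out;
}
```

Easing t over time (rather than stepping it) is what makes the state changes read as a living transformation instead of a snap.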
Electron — Desktop App Shell
My Stardust is a transparent desktop app — glass morphism panels, frosted overlays, deep navy/black backgrounds. The whiteboard canvas is a draggable, resizable overlay built as a React component that floats above the Stardust character. Every UI element is designed to feel like it exists in deep space.
Challenges we ran into
Getting the four-tool pipeline to feel seamless was the hardest engineering problem. ElevenLabs calls a tool, the Electron app catches it, sends the payload to Gemini, gets a response, and returns it to ElevenLabs — all while the agent is speaking in real time. Latency at any step breaks the feeling of a live conversation. We had to be surgical about what goes to the cloud and what stays local.
The 3D character morph system was also non-trivial. Mapping audio amplitude to particle behaviour in real time while also running morph transitions required careful animation loop management to avoid jank.
Onboarding — making voice capture and face capture feel ceremonial rather than clinical — took more design iteration than we expected. The photo dissolving into particles to form the Stardust character on first run is the moment we're most proud of.
What we learned
The Feynman Technique is the right insight to build on. Every student we spoke to already does this: talking to themselves, explaining concepts to walls, recording voice notes. We just gave them someone to explain it to. The difference between talking to yourself and talking to a version of yourself that listens, remembers, and talks back is enormous.
We also learned that the voice clone is not a gimmick. Hearing a version of your own voice explain something back to you creates a psychological familiarity that makes the feedback land differently. It's hard to explain until you experience it.
What's next for My Stardust
My Stardust is just getting started. The vision is a companion that grows with you across your entire degree: remembering everything you've ever taught it, knowing which concepts you struggled with in first year, and showing up differently every session because it genuinely knows you.
On the feature side: multiplayer study sessions where two Stardusts can interact, letting friends quiz each other through their own cloned companions. Deeper gamification: streaks, mastery levels, and a visible knowledge graph that shows how your understanding has grown over time. More generative experience types beyond whiteboards and mini-games. And a mobile version, so your Stardust is with you everywhere, not just at your desk.
The emotional north star stays the same: no student should have to face the night before an exam alone.