Inspiration

We were inspired by the self-tutoring approach modeled after the Feynman technique, which is well researched (e.g., "Eliciting self-explanations improves understanding," Chi et al., 1994) and has been shown to enhance students' knowledge and problem-solving abilities across subjects. We added AI to the loop because a model can nudge the learner with sensible follow-up questions, making the Feynman process faster. We also thought this would be an excellent application for the mixed reality modality: the context is already provided in real life through textbooks, and near-immediate think-aloud, rather than typing, is a better test of whether a student has truly mastered a concept. At the same time, a seamless, isolated MR interface dedicated to learning reduces distractions.

What it does

You explain a topic to the rabbit, who doesn't know it well. Whenever you use jargon, complex words, or leaps in logic, the rabbit nudges you with deeper questions, as if trying to genuinely understand.

Example:

User: "A molecule is atoms bonded together."
Rabbit: "Oh dear, 'bonded'? Like with glue? What does 'bonded' mean? I don't understand!"

This questioning gives the user self-directed feedback: it sends them back to the surrounding material until they understand the concept and can explain it in simple terms. The feedback loop continues until the student has mastered the topic and is ready to move on to the next one. A sketch of the kind of persona prompt that could drive this behavior is shown below.
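For illustration only, the prompt could look something like this (a sketch, not the exact prompt in our build):

```
You are a curious rabbit who knows very little about the topic.
The student will explain a concept to you. Whenever they use
jargon, an undefined term, or a leap in logic, interrupt with a
short, innocent question about that exact word or step
("'Bonded'? Like with glue? What does 'bonded' mean?").
Never explain the concept yourself. When the explanation uses
only simple words and has no gaps, congratulate the student and
invite them to move on to the next topic.
```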

How we built it

We built the experience in Unity with the Meta XR SDK, using a modified OpenAI-Unity wrapper pointed at an OpenRouter endpoint so that any model can be swapped in (we chose Gemini 2.5 Flash for its speed and reliability). A sleek, subtle UI sits on top of the rabbit and lets the user engage with it through hand gestures: a wave makes the rabbit listen, a thumbs-up sends the message, and a thumbs-down discards it. The UI moves with the rabbit, which is grabbable and can be placed on a table (if spatial setup has already been done in the room). Finally, we used Meshy to generate a rabbit inspired by Stanford XR's mascot. A sketch of the LLM round trip is shown below.
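This is a minimal sketch assuming a plain UnityWebRequest rather than the modified OpenAI-Unity wrapper we actually used; the class, field, and persona names are illustrative. OpenRouter exposes the OpenAI chat-completions schema, so the model id is just a string:

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

public class RabbitBrain : MonoBehaviour
{
    const string Endpoint = "https://openrouter.ai/api/v1/chat/completions";
    const string Persona = "You are a curious rabbit who knows very little..."; // see prompt sketch above

    [SerializeField] string apiKey; // OpenRouter API key, set in the Inspector

    [System.Serializable] class Message { public string role; public string content; }
    [System.Serializable] class Request { public string model; public Message[] messages; }

    // Called after a thumbs-up confirms the transcribed explanation.
    public IEnumerator Ask(string studentExplanation, System.Action<string> onReply)
    {
        var body = JsonUtility.ToJson(new Request
        {
            model = "google/gemini-2.5-flash", // any OpenRouter model id works here
            messages = new[]
            {
                new Message { role = "system", content = Persona },
                new Message { role = "user",   content = studentExplanation }
            }
        });

        using (var req = new UnityWebRequest(Endpoint, "POST"))
        {
            req.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(body));
            req.downloadHandler = new DownloadHandlerBuffer();
            req.SetRequestHeader("Content-Type", "application/json");
            req.SetRequestHeader("Authorization", "Bearer " + apiKey);

            yield return req.SendWebRequest();

            if (req.result == UnityWebRequest.Result.Success)
                onReply(req.downloadHandler.text); // parse choices[0].message.content downstream
            else
                Debug.LogWarning("OpenRouter request failed: " + req.error);
        }
    }
}
```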

Challenges we ran into

We attempted to implement a screenshot feature that sends the captured image to an LLM. We got the screenshot capture working, but we couldn't complete the LLM hand-off in time, so the feature isn't usable in the final build.

Accomplishments that we're proud of

We somehow got an MVP working within 24 hours!

Built With

Unity, Meta XR SDK, OpenRouter (Gemini 2.5 Flash), Meshy
