Inspiration
Many students study by repeatedly rereading their notes, but this is often a passive and inefficient way to learn. Research in cognitive science shows that people remember information better when it is associated with physical spaces and visual experiences rather than just text.
With The Mind Museum, we wanted to reimagine learning as something spatial, interactive, and memorable. We were particularly inspired by the method of loci (memory palace technique), a mnemonic strategy dating back thousands of years that works by placing pieces of information along a mental path or within a familiar location so that recalling the space helps you recall the information. Ancient Greek and Roman scholars used this technique to memorize large amounts of knowledge.
However, building and navigating a memory palace usually requires significant mental effort and imagination, which makes it difficult for most learners to use consistently. We wondered: what if technology could build the memory palace for you?
What it does
Instead of asking students to visualize their own memory palace, we automatically transform their notes into a virtual museum filled with interactive AI agents and 3D artifacts. By turning information into a space you can walk through, we aim to make learning more immersive, intuitive, and memorable.
In a world where AI can generate text instantly, we wanted to explore how AI could instead reshape how we experience knowledge itself—not just as words on a page, but as environments we can explore.
Users can enter a topic or upload notes, and the system extracts the key ideas and concepts. These ideas are then converted into 3D models and placed throughout a 3D environment. Users can explore the museum, click on artifacts to learn about concepts, ask the AI receptionist in-context questions, and test their knowledge through quizzes from fellow museum visitors. Instead of rereading static notes, learners explore their knowledge as a physical space, making studying more engaging and easier to remember.
How we built it
Text processing and knowledge extraction
Users enter a topic and upload their notes as a PDF. On the backend, we extract the document text using pdfplumber and PyPDF2. The content is then embedded using Sentence Transformers (all-MiniLM-L6-v2) and stored in ChromaDB, which allows us to perform semantic search across the user’s notes.
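Before embedding, the extracted text has to be split into chunks. The sketch below shows one common way to do this with overlapping character windows; the `chunk_text` helper, chunk size, and overlap are illustrative assumptions, not the exact parameters of our pipeline:

```python
# Illustrative sketch of splitting extracted PDF text into overlapping
# chunks before embedding. Chunk size and overlap are example values,
# not the parameters used in the actual pipeline.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk.strip():
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

notes = "Gradient descent iteratively updates parameters. " * 40
chunks = chunk_text(notes)
print(len(chunks), len(chunks[0]))  # prints: 5 500
```

Each chunk is then embedded and written to the vector store, so a later query can retrieve the most semantically similar chunks rather than scanning the whole document.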
We use large language models to structure the information. Google Gemini (gemini-2.5-flash) generates the overall museum layout and designs the artifacts that represent key concepts. For question answering and quizzes, we use an OpenAI-compatible client connected to a HuggingFace endpoint running gpt-oss-120b, which performs retrieval-augmented generation (RAG) over the stored embeddings.
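The RAG step boils down to stitching retrieved note chunks into the prompt sent to the chat model. The prompt wording and `build_rag_prompt` helper below are illustrative assumptions, not the exact prompts we use:

```python
# Illustrative sketch of the retrieval-augmented generation step:
# chunks retrieved from the vector store are stitched into the prompt
# sent to the chat model. The prompt wording and function name are
# assumptions, not the project's actual prompts.

def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Combine the user's question with semantically retrieved context."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(retrieved_chunks))
    return (
        "Answer the visitor's question using only the notes below.\n\n"
        f"Notes:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What is gradient descent?",
    ["Gradient descent minimizes a loss function iteratively.",
     "The learning rate controls the step size."],
)
print(prompt)
```

In the real pipeline, a string like this becomes the user message in a chat-completions request to the HuggingFace endpoint via the OpenAI-compatible client.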
Knowledge structuring
Once the important ideas are extracted, the system organizes them into museum exhibits. Each exhibit represents a concept, topic, or relationship from the notes. These concepts are linked together semantically so that related ideas appear in nearby sections of the museum.
This step transforms raw text into a structured knowledge graph that can be mapped to spatial locations.
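The semantic-linking idea can be sketched as follows: concepts whose embedding vectors are similar get an edge in the graph, and linked concepts are later placed in adjacent rooms. The 3-dimensional vectors and threshold below are toy stand-ins; real embeddings come from the sentence-transformer model:

```python
import math

# Toy sketch of the semantic-linking step. Concepts whose embedding
# vectors clear a similarity threshold are connected in the knowledge
# graph; linked concepts end up in nearby museum rooms. These 3-d
# vectors and the threshold are dummies, not real model embeddings.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

concepts = {
    "gradient descent": [0.9, 0.1, 0.0],
    "learning rate":    [0.8, 0.2, 0.1],
    "cell membrane":    [0.0, 0.1, 0.9],
}

THRESHOLD = 0.8
edges = [
    (a, b)
    for i, (a, va) in enumerate(concepts.items())
    for b, vb in list(concepts.items())[i + 1:]
    if cosine(va, vb) > THRESHOLD
]
print(edges)  # only the two related ML concepts get linked
```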
3D environment
The interactive museum is rendered in the browser using Three.js with React Three Fiber and Drei. This allows us to create a navigable 3D environment where exhibits appear as objects in the world.
Users can walk through the museum using pointer lock controls, interact with artifacts, and explore different rooms of knowledge. The environment uses GLTF/GLB models for the museum itself and FBX character models with Mixamo animations for NPC visitors.
Interactive learning experience
The frontend is built with Next.js (React) using file-based routing and client components. Tailwind CSS provides styling, while Framer Motion powers smooth UI animations and transitions.
As the museum is generated, the backend streams artifact creation progress to the frontend using Server-Sent Events (SSE) so users can watch the world populate in real time.
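The streaming side can be sketched as a generator that yields SSE frames, one per generated artifact. The event names and payload fields below are illustrative assumptions; the wire format itself (`data:` lines terminated by a blank line) is what the SSE spec requires:

```python
import json
from typing import Optional

# Sketch of the Server-Sent Events framing used to stream artifact
# generation progress. Event names and payload fields are illustrative;
# the "data: ...\n\n" framing is mandated by the SSE wire format.

def sse_event(payload: dict, event: Optional[str] = None) -> str:
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.append(f"data: {json.dumps(payload)}")
    return "\n".join(lines) + "\n\n"

def artifact_stream(artifacts: list[str]):
    """Generator a web framework can iterate to stream progress."""
    for i, name in enumerate(artifacts, start=1):
        yield sse_event({"artifact": name, "done": i}, event="artifact")
    yield sse_event({"status": "complete"}, event="done")

for frame in artifact_stream(["Sorting Hall Mosaic", "Respiration Diorama"]):
    print(frame, end="")
```

On the client, an `EventSource` subscribes to this endpoint and adds each artifact to the scene as its frame arrives.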
Infrastructure
The system is containerized using Docker Compose, which orchestrates the frontend, backend, and supporting services. This setup made it easier to manage the AI services, vector database, and web application during development.
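A compose file for a stack like this might look like the sketch below; the service names, ports, images, and build paths are assumptions for illustration, not our actual configuration:

```yaml
# Hypothetical docker-compose.yml sketch. Service names, ports, and
# build paths are assumptions, not the project's real configuration.
services:
  frontend:
    build: ./frontend        # Next.js app
    ports:
      - "3000:3000"
    depends_on:
      - backend
  backend:
    build: ./backend         # API server with the SSE endpoints
    ports:
      - "8000:8000"
    environment:
      - CHROMA_HOST=chromadb
  chromadb:
    image: chromadb/chroma   # vector database for note embeddings
    ports:
      - "8001:8000"
```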
Challenges we ran into
One of our biggest challenges was translating abstract information into meaningful visual exhibits. Most study material exists purely as text, but a museum requires physical artifacts and visual displays. There isn’t an obvious mapping between something like “gradient descent” or “cellular respiration” and a 3D object you can place in a room. To address this, we designed a system where the language model first extracts the core concepts and relationships from the notes, then generates descriptions of artifacts that symbolically represent those ideas. We also needed to ensure the artifacts were understandable and not just decorative, so each exhibit had to provide interactive explanations and contextual information.
We also faced challenges connecting AI-generated content with a real-time 3D environment. Artifact generation, world design, and quiz creation all happen on the backend, but the frontend needs to render them dynamically as the museum is built. To make this feel responsive, we implemented Server-Sent Events (SSE) to stream artifact generation progress to the client so users can see the museum populate in real time. Coordinating these asynchronous processes while keeping the user experience smooth required careful communication between the frontend and backend systems.
Finally, building a fully interactive 3D web experience introduced its own difficulties. Navigation, pointer lock controls, object interaction, and NPC dialogue all had to work seamlessly inside the browser. Balancing performance with visual richness was important so that the environment remained immersive while still running smoothly on typical laptops.
Accomplishments that we're proud of
We're proud of the end-to-end pipeline we created in 36 hours. Starting from a user’s topic or uploaded PDF, our system extracts information, generates structured concepts, designs artifacts, and places them inside a 3D world that users can immediately explore. Seeing a static document evolve into a dynamic museum filled with exhibits felt like a powerful demonstration of how AI can reshape the way we interact with knowledge.
We’re also proud of how interactive and engaging the experience feels. Users aren’t just passively consuming information—they can walk through the museum, interact with exhibits, ask the AI receptionist in-context questions, and test their understanding through quizzes generated from their own notes. This transforms studying into something closer to exploration and discovery, where learning feels like uncovering ideas in a museum rather than preparing for an exam.
Finally, we’re proud that we successfully combined multiple technologies—AI models, vector search, and real-time 3D rendering—into a single cohesive experience. Integrating language models, semantic search, and a browser-based 3D environment is technically challenging, but bringing these pieces together allowed us to create something that feels both innovative and practical. The result is a prototype that demonstrates how AI and spatial computing can work together to create entirely new ways of learning.
What we learned
Through building The Mind Museum, we learned a lot about how different technologies and design ideas come together to create an interactive learning experience.
One major takeaway was learning how to build 3D experiences for the web. Using Three.js with React Three Fiber and Drei, we explored how to render interactive environments directly in the browser. We learned how to manage scene composition, camera movement, pointer lock controls, and object interactions while keeping performance smooth enough to run on a typical laptop. Designing a 3D environment also required thinking about user movement, spatial layout, and discoverability, which is very different from designing a traditional web interface.
We also gained experience designing AI pipelines that interact with real-time user interfaces. Instead of simply generating text responses, our system had to transform AI outputs into structured data that could drive the creation of a 3D world. This meant coordinating multiple components: extracting information from PDFs, generating embeddings, retrieving relevant context with vector search, and using language models to generate artifacts, explanations, and quizzes. We learned how to structure prompts, validate outputs, and design backend APIs that allow AI-generated content to integrate smoothly with the frontend.
What's next for The Mind Museum
The current version of The Mind Museum demonstrates how study material can be transformed into an interactive space, but we see many opportunities to expand the idea further.
One direction we’re excited about is creating more sophisticated and meaningful 3D exhibits. Right now, artifacts represent concepts symbolically, but future versions could generate dynamic visualizations and simulations that illustrate how ideas work. For example, a physics concept could appear as an interactive experiment, a biology topic could be represented by animated cellular processes, and a computer science concept could appear as a live algorithm visualization. By combining AI with richer 3D graphics, exhibits could evolve from static displays into interactive demonstrations of knowledge.
Another important direction is deeper integration with educational materials. We plan to support a wider range of inputs, including lecture slides, textbooks, and online resources. Instead of manually uploading notes, students could import entire courses and automatically generate museums representing the structure of a subject. This could turn The Mind Museum into a visual map of a course, where each room represents a different topic and exhibits represent key ideas.
Finally, we hope to use more capable models and stronger compute resources to make artifact generation faster and more seamless, so the museum can be built in near real time.
