Inspiration

We were inspired by a simple problem: we save thousands of memories digitally, but we rarely experience them. Photos sit in folders, notes get lost in files, and meaningful moments become static data. We wanted to rethink digital storage entirely — not as a filing system, but as a place. The idea behind Mnemosyne came from the belief that memory is spatial and emotional, not just chronological. So we asked: what if memories could be explored like a museum instead of browsed like files?

What it does

Mnemosyne is an AI-powered web app that turns uploaded photos, text, and audio into a 3D museum you can walk through. It analyzes your media, understands themes and emotions, groups related memories together, and generates a virtual gallery where each memory becomes an exhibit. Instead of scrolling through files, users explore their memories in first person, as if they were inside a real museum.

How we built it

We built Mnemosyne as a full-stack web application. The frontend uses Next.js, React, and TypeScript, while the 3D environment is rendered with React Three Fiber. We styled the interface with Tailwind CSS and added smooth transitions using Framer Motion. On the backend, we used Next.js API routes to handle uploads and process files.

The system pipeline works like this: uploaded files are analyzed, turned into structured data, grouped into related clusters, and then used to generate a 3D scene layout. We designed the layout engine so it places exhibits automatically and builds the environment procedurally. The user can then explore the generated museum in real time using first-person controls.
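The layout step above can be sketched in plain TypeScript. This is a hypothetical illustration, not our actual layout engine: all names (`Memory`, `Exhibit`, `layoutCluster`) and the room dimensions are made up for the example. The idea is the same, though: each cluster gets its own square room, and its memories are spaced evenly along the room's walls.

```typescript
// Hypothetical sketch of the procedural layout step: given one cluster of
// memories, place one exhibit per memory evenly along the perimeter of that
// cluster's square room. Names and numbers are illustrative only.

interface Memory { id: string }
interface Exhibit { id: string; x: number; z: number; rotationY: number }

const ROOM_SIZE = 10;    // side length of a gallery room, in world units
const WALL_OFFSET = 0.5; // how far exhibits sit in front of the wall

function layoutCluster(memories: Memory[], roomIndex: number): Exhibit[] {
  const originX = roomIndex * (ROOM_SIZE + 2); // rooms laid out in a row
  return memories.map((m, i) => {
    const t = (i / memories.length) * 4;   // perimeter position in [0, 4)
    const side = Math.floor(t);            // which wall: 0=N, 1=E, 2=S, 3=W
    const along = (t - side) * ROOM_SIZE;  // distance along that wall
    switch (side) {
      case 0:  return { id: m.id, x: originX + along, z: WALL_OFFSET, rotationY: 0 };
      case 1:  return { id: m.id, x: originX + ROOM_SIZE - WALL_OFFSET, z: along, rotationY: Math.PI / 2 };
      case 2:  return { id: m.id, x: originX + ROOM_SIZE - along, z: ROOM_SIZE - WALL_OFFSET, rotationY: Math.PI };
      default: return { id: m.id, x: originX + WALL_OFFSET, z: ROOM_SIZE - along, rotationY: -Math.PI / 2 };
    }
  });
}
```

In the real app, the resulting positions and rotations would feed directly into React Three Fiber components as `position` and `rotation` props.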

Challenges we ran into

One of the biggest challenges was performance. Rendering a full 3D environment in the browser is expensive, so we had to optimize geometry, reuse materials, and design lighting carefully so the scene stayed immersive without dropping frames.

Another challenge was making the system reliable even without external AI access. We built a fallback analysis system so the app still works even if no API key is provided.
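The fallback path looks roughly like the sketch below. Everything here is a hypothetical illustration: `analyzeRemote` stands in for the external AI call, and the keyword table is a toy version of the local heuristic. The point is the control flow: no key, or a failed remote call, still yields a usable analysis.

```typescript
// Hypothetical sketch of the fallback analysis: with no API key (or a failed
// remote call), a local keyword heuristic tags the memory so the pipeline
// never stalls. `analyzeRemote` and the keyword table are illustrative.

interface Analysis { theme: string; source: "ai" | "fallback" }

const THEME_KEYWORDS: Record<string, string[]> = {
  travel: ["trip", "flight", "beach"],
  family: ["mom", "dad", "birthday"],
};

function fallbackAnalyze(text: string): Analysis {
  const lower = text.toLowerCase();
  for (const [theme, words] of Object.entries(THEME_KEYWORDS)) {
    if (words.some((w) => lower.includes(w))) return { theme, source: "fallback" };
  }
  return { theme: "misc", source: "fallback" };
}

// Stand-in for the real network call to an AI service (assumed, not shown).
async function analyzeRemote(text: string, apiKey: string): Promise<Analysis> {
  throw new Error("no network in this sketch");
}

async function analyze(text: string, apiKey?: string): Promise<Analysis> {
  if (!apiKey) return fallbackAnalyze(text); // no key: go straight to heuristic
  try {
    return await analyzeRemote(text, apiKey);
  } catch {
    return fallbackAnalyze(text);            // degrade gracefully on failure
  }
}
```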

We also found it difficult to translate abstract concepts like emotion and meaning into physical spatial layouts. Designing rules that turn data into architecture required a lot of experimentation.
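One family of rules we experimented with mapped a cluster's mood onto room parameters. The sketch below is an illustrative example of that kind of rule, not our final tuning: the mood model (valence from -1 sad to +1 happy, energy from 0 to 1) and every constant are assumptions made for the example.

```typescript
// Hypothetical sketch of one "emotion to architecture" rule: map a cluster's
// average mood onto room parameters. All constants are illustrative.

interface Mood { valence: number; energy: number }       // valence -1..1, energy 0..1
interface RoomStyle { ceilingHeight: number; lightWarmth: number; spacing: number }

function clamp(v: number, lo: number, hi: number): number {
  return Math.min(hi, Math.max(lo, v));
}

function roomStyleFor(mood: Mood): RoomStyle {
  return {
    // energetic memories get taller, more open rooms (3m to 6m ceilings)
    ceilingHeight: 3 + clamp(mood.energy, 0, 1) * 3,
    // happier clusters get warmer light (0 = cold, 1 = warm)
    lightWarmth: clamp((mood.valence + 1) / 2, 0, 1),
    // somber clusters space exhibits further apart for a quieter feel
    spacing: mood.valence < 0 ? 4 : 2.5,
  };
}
```

Rules like this were where most of the experimentation went: small constant changes produced very different emotional reads of the same gallery.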

Accomplishments that we're proud of

We’re proud that Mnemosyne is fully functional from end to end. Users can upload files, generate a scene, and explore it immediately. We successfully combined AI analysis, procedural generation, and 3D rendering into one seamless experience that runs entirely in the browser.

We’re especially proud that we built a system that feels interactive and alive rather than static, and that we created something that looks visually polished while still being technically complex.

What we learned

We learned that interfaces become much more engaging when data is experienced spatially instead of just displayed. We also learned how to optimize real-time 3D graphics, structure AI pipelines, and design systems that remain stable even when external services fail. Most importantly, we learned how to turn a conceptual idea into a working product under hackathon time pressure.

What's next for Mnemosyne, Memory Made Spatial

Next, we want to expand Mnemosyne into a multi-room museum where each cluster becomes its own gallery. We also plan to add saved sessions, shareable museum links, richer analysis for audio and text, and more adaptive environments that change as users explore.

Our long-term goal is to create a new way of interacting with memories: not just storing them, but experiencing them as places you can walk through.
