Inspiration

We noticed that the modern web, while connecting us, often feels flat and ephemeral. Digital culture and personal memories are trapped in endless 2D feeds, lacking the permanence and spatial context of the real world. This inspired us to create Place.fun: a platform to move beyond the scroll and build a living, 3D map of our collective story. We wanted to create a space where memories aren't just posted but are inhabited, allowing people to build a shared world from the ground up.

What it does

Place.fun is a collaborative, explorable 3D world where culture gets an address. It empowers users to:

  • Materialize Memories: Drop photos or text prompts—from childhood icons to today's obsessions—and watch them generate unique, walkable 3D rooms.
  • Build a Shared World: Each room connects to a larger, shared cultural map, creating a vast, user-built metaverse of interconnected experiences.
  • Interact and Co-Create: The world is social and interactive. Users can build neighborhoods together, leave artifacts in each other's spaces, remix rooms, and host live moments that become a permanent part of the terrain.
  • Create a Living Museum: The result is a living archive of our collective story—preserving traditions, inventing new aesthetics, and giving creators the tools to shape how culture is experienced.

How we built it

We built Place.fun as a shared, explorable 3D world on the web. The frontend is Next.js with React Three Fiber for real-time rendering, a minimap for navigation, and dynamic chunk preloading for smooth movement.

Users drop photos or prompts; a FastAPI service wraps ZoeDepth to estimate depth and reconstruct meshes, exporting GLB files. Our Next.js API orchestrates uploads, stores assets on S3, and indexes chunk metadata in MongoDB so spaces slot into a global grid.

We optimized for instant load (no SSR for Three.js), asset prefetching, and consistent rotation and scale so scenes stitch cleanly. The result is a collaborative map where your memories become walkable rooms, and everyone's contributions connect into a living cultural landscape.

Gemini is our creative engine:

  • It first reconstructs user photos into panoramas, giving the depth and mesh stages maximal context and yielding cleaner, more continuous rooms.
  • When users prompt, Gemini can synthesize a fresh image or directly author a scene brief (themes, layout, lighting, props) that we translate into a coherent, walkable space.
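The core depth-to-mesh step in the pipeline above can be sketched as a back-projection over the pixel grid. This is a minimal sketch, not the actual Place.fun code: it assumes ZoeDepth returns a dense per-pixel metric depth map and a pinhole camera with a guessed field of view.

```python
import numpy as np

def depth_to_mesh(depth, fov_deg=60.0):
    """Back-project a dense depth map (H x W, meters) into a triangle mesh.

    Illustrative sketch of the depth -> mesh step: intrinsics are derived
    from an assumed horizontal field of view, and each pixel quad becomes
    two triangles.
    """
    h, w = depth.shape
    # Pinhole intrinsics from the assumed field of view.
    fx = (w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    fy = fx
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0

    # Pixel grid -> 3D points in camera coordinates.
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    vertices = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Two triangles per pixel quad, indexing into the flattened grid.
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])
    return vertices, faces
```

From here a library such as trimesh could wrap `(vertices, faces)` and export the GLB file that the pipeline serves.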

Challenges we ran into

  • Getting the ZoeDepth model to run on Apple's MPS backend instead of CUDA, plus general inference optimizations.
  • Ensuring consistent scale/origin so generated rooms align in a shared world.
  • Keeping generation interactive: prompting without blocking exploration.
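The scale/origin challenge can be sketched as a normalization step that runs before a room is slotted into the grid. The names and chunk size below are illustrative assumptions, not the actual Place.fun code:

```python
import numpy as np

CHUNK_SIZE = 10.0  # assumed side length of one grid cell, in world units

def normalize_room(vertices):
    """Rescale and re-origin a room mesh so it fits exactly one chunk.

    Centers the mesh on its bounding-box center (origin consistency) and
    uniformly scales its largest dimension to CHUNK_SIZE (scale
    consistency), so neighboring rooms stitch without gaps or overlap.
    """
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    center = (lo + hi) / 2.0
    extent = (hi - lo).max()
    scale = CHUNK_SIZE / extent if extent > 0 else 1.0
    return (vertices - center) * scale

def chunk_origin(cx, cz):
    """World-space origin of grid cell (cx, cz) on the shared map."""
    return np.array([cx * CHUNK_SIZE, 0.0, cz * CHUNK_SIZE])
```

With every room normalized the same way, placing it in the world is just an offset by `chunk_origin` of its assigned cell.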

Accomplishments that we're proud of

  • A single, collaborative world where everyone’s rooms connect seamlessly.
  • Fast & smooth navigation.
  • A robust depth pipeline (ZoeDepth → mesh → GLB) that works across several different input formats.
  • Gemini-assisted prompt expansion that turns brief user prompts into fully themed spaces.

What we learned

  • Users want to keep exploring while generation runs—non-blocking UX matters.
  • Small, meaningful inputs (nostalgia, artifacts) create powerful cultural presence at scale.
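The non-blocking pattern above can be sketched with asyncio. This is a server-side illustration with hypothetical names; in the real app the frontend fires a background request and keeps rendering while the pipeline runs:

```python
import asyncio

async def generate_room(prompt):
    # Stand-in for the slow depth/mesh pipeline call (hypothetical).
    await asyncio.sleep(0.01)
    return f"room for {prompt!r}"

async def main():
    # Kick off generation without blocking: exploration continues while
    # the room is built, and the result is collected when ready.
    task = asyncio.create_task(generate_room("childhood arcade"))
    explored = ["chunk-0", "chunk-1"]  # the user keeps moving meanwhile
    room = await task
    return explored, room
```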
