Inspiration
Everyone has a vision for their home. Most people never get to see it before it's too late to change. You spend weeks scrolling through furniture, guessing at paint colours, arguing about whether the couch will fit — and then you move in and realise you got it wrong. The room feels smaller than you imagined. The WiFi doesn't reach the bedroom. The south-facing wall gets brutal afternoon heat that no one accounted for. This isn't a design problem. It's an information problem. The tools to solve it exist, architects use them every day, but they're locked behind $10,000 software licences and years of training. We built Matterport because we believe everyone deserves to see their home before they live in it. Not a mood board. Not a sketch. A real, walkable, analysable 3D space where you can drop in actual furniture, test real materials, and let AI tell you things about your home that even most architects don't think to check — before a single decision is final.
What it does
Matterport turns your floor plan into a digital twin: a living, breathing 3D model of your home, in minutes, in your browser, for free. Upload a photo of your floor plan, and we instantly generate a navigable 3D space. Walk into any room, rotate the view, and start making it yours. Change wall colours, swap floor materials, and watch the space transform in real time. Found a sofa you love on Amazon? Drop the link in and see it placed directly in your room, to scale, exactly where you'd put it. No guessing whether it fits. No imagining the colour against your walls. Just see it. Then hand it over to our AI. Describe what you want (for example, a warm living room, better acoustics in the home office, stronger WiFi coverage throughout), and it runs a deep analysis across your entire floor plan. It maps heat distribution across every room, models WiFi signal strength based on your layout and wall placement, and scores the acoustic profile of each space, all visualised directly on your 3D model with a colour-coded overlay. It's the full picture of your home, before you commit to any of it.
How we built it
We started with Lovable as our primary development environment and pushed it well beyond its typical use case. We used it to architect and iterate on a real-time 3D spatial engine, using its AI-assisted coding to rapidly prototype and debug Three.js geometry, procedural texture generation, and camera systems that would have taken days to hand-write.

The 3D engine itself is built on Three.js. We built a procedural texture system from scratch inside Lovable (wood grain, concrete, marble, brick, plaster, teak), with each material generated via canvas algorithms and tuned in real time using Lovable's instant preview and AI pair-programming until the output looked right.

The furniture placement system takes an Amazon product URL, hits our Vercel backend to scrape the product image and metadata, and parses dimensions from the title when the API doesn't return them directly. The product is then rendered as a billboard in 3D space, with pixel-level white-background removal handled on the canvas.

The AI layer is where things got interesting. We built on top of K2V2 — but not as a chatbot. We treat K2V2 as a spatial reasoning engine. When a user describes their vision, we serialise the entire floor plan into a structured spatial context: every room's dimensions, wall materials, adjacency relationships, and orientation. K2V2 reasons across this graph and returns scored simulation outputs for thermal comfort, WiFi signal propagation, and acoustic absorption, per room, per wall. Those scores drive the colour-coded overlays that render directly onto the live 3D model. The model isn't answering questions. It's running physics-informed analysis on a spatial document it's never seen before, in real time, with no fine-tuning.

The frontend is built in React and TypeScript with TanStack Router for file-based routing and Tailwind CSS v4 for styling.
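The canvas-algorithm approach to textures can be sketched like this — a minimal wood-grain example where the function name, palette, and band frequencies are our own illustrative choices, not the exact algorithm the project uses:

```typescript
// Generate an RGBA pixel buffer with a simple wood-grain pattern:
// sine bands along x, perturbed by a slower sine in y for waviness.
function woodGrainPixels(width: number, height: number): Uint8ClampedArray {
  const px = new Uint8ClampedArray(width * height * 4);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const u = x / width;
      const v = y / height;
      // t in [0, 1]: position within a grain band
      const t = 0.5 + 0.5 * Math.sin(u * 40 + Math.sin(v * 6) * 1.5);
      const i = (y * width + x) * 4;
      px[i] = 120 + t * 60;    // R: light-to-mid brown
      px[i + 1] = 80 + t * 40; // G
      px[i + 2] = 50 + t * 25; // B
      px[i + 3] = 255;         // fully opaque
    }
  }
  return px;
}
```

In the browser, a buffer like this can be drawn onto a canvas (or wrapped in a `THREE.DataTexture`) and applied as a material map, so every surface ships as math rather than image assets.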
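The furniture pipeline can be sketched in two small helpers. Both names are hypothetical, and the dimension regex assumes a tidy `72"W x 35"D x 34"H` title format — real Amazon titles are messier than this:

```typescript
// Parse dimensions like `72"W x 35"D x 34"H` out of a product title.
// Returns inches, or null when the title carries no usable dimensions.
function parseDimensionsFromTitle(
  title: string
): { w: number; d: number; h: number } | null {
  const m = title.match(/([\d.]+)"W\s*x\s*([\d.]+)"D\s*x\s*([\d.]+)"H/i);
  if (!m) return null;
  return { w: parseFloat(m[1]), d: parseFloat(m[2]), h: parseFloat(m[3]) };
}

// Knock out the white studio background of a product image so the 3D
// billboard shows only the product. Operates on raw RGBA pixel data.
function removeWhiteBackground(
  px: Uint8ClampedArray,
  threshold = 245
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(px);
  for (let i = 0; i < out.length; i += 4) {
    if (
      out[i] >= threshold &&
      out[i + 1] >= threshold &&
      out[i + 2] >= threshold
    ) {
      out[i + 3] = 0; // near-white pixel: make it fully transparent
    }
  }
  return out;
}
```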
Challenges we ran into
The hardest problem was the 3D renderer. Getting Three.js to generate a believable, navigable floor plan from structured room data — with correct wall adjacency, proper lighting, and textured surfaces — required far more iteration than expected. Early versions looked like a grid of floating coloured boxes.
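One piece of that iteration, making adjacent rooms share a single wall instead of rendering two overlapping ones, can be sketched like this (illustrative helper names; rooms are treated as axis-aligned rectangles in plan coordinates):

```typescript
type Wall = { x0: number; z0: number; x1: number; z1: number };

// The four perimeter walls of a rectangular room at (x, z), size w by d.
function roomWalls(x: number, z: number, w: number, d: number): Wall[] {
  return [
    { x0: x, z0: z, x1: x + w, z1: z },         // north
    { x0: x + w, z0: z, x1: x + w, z1: z + d }, // east
    { x0: x, z0: z + d, x1: x + w, z1: z + d }, // south
    { x0: x, z0: z, x1: x, z1: z + d },         // west
  ];
}

// Collapse walls shared by adjacent rooms by keying each wall on its
// endpoints in canonical (sorted) order, so A's east wall and B's west
// wall dedupe to one segment.
function dedupeSharedWalls(walls: Wall[]): Wall[] {
  const seen = new Map<string, Wall>();
  for (const w of walls) {
    const key = [
      Math.min(w.x0, w.x1), Math.min(w.z0, w.z1),
      Math.max(w.x0, w.x1), Math.max(w.z0, w.z1),
    ].join(",");
    if (!seen.has(key)) seen.set(key, w);
  }
  return [...seen.values()];
}
```

Each surviving segment can then be extruded into a textured wall mesh; deduplicating first avoids z-fighting between coplanar wall faces.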
Getting K2V2 to behave as a spatial reasoning engine rather than a conversational assistant was genuinely non-trivial. The model doesn't natively understand floor plans — we had to design a serialisation format that encodes room geometry, material properties, and adjacency relationships in a way the model could reason over and return structured, per-room scored outputs.
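A serialisation along these lines, reduced to its essentials (the field names here are illustrative, not the exact schema sent to the model):

```typescript
// Spatial context handed to the model: geometry, materials, adjacency.
interface RoomContext {
  id: string;
  name: string;
  widthM: number;
  depthM: number;
  heightM: number;
  wallMaterial: string;
  adjacentTo: string[]; // ids of rooms sharing a wall
  facing?: "N" | "S" | "E" | "W";
}

// Serialise the floor plan into the structured document the model
// reasons over; explicit units keep the geometry unambiguous.
function serialiseFloorPlan(rooms: RoomContext[]): string {
  return JSON.stringify({ units: "metres", rooms }, null, 2);
}
```

The point of a schema like this is that adjacency and orientation are explicit in the text, so the model can reason about heat paths and signal attenuation instead of guessing layout from prose.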
On the frontend, TypeScript type conflicts between shared context and component-level types caused a cascade of errors that took hours to untangle.
Accomplishments that we're proud of
We're proud that we actually built a functioning, navigable 3D home, generated from a floor plan, with real furniture dropped in from Amazon and AI-powered spatial analysis — all running in a browser tab with no install, no signup, and no specialised hardware. We're proud of the procedural texture engine. Every material is generated algorithmically at runtime. No external assets, no texture packs. Just math and canvas. We're proud of how we used K2V2. Turning a language model into a room-by-room physics simulator — scoring heat, WiFi, and acoustics from a serialised spatial document — is not what these models are typically used for. It worked better than we expected. And honestly, we're proud of the speed. We built this entire product in a hackathon window, using Lovable to move faster than any of us could have alone.
What we learned
The biggest lesson was how dramatically AI-assisted tooling has changed what a small team can build: the limit is now far closer to your imagination than to your headcount. We learnt that prompt engineering is system design. Getting K2V2 to return structured, per-room simulation scores wasn't about writing better instructions — it was about designing the right data format, the right context schema, and the right output contract. We also learned that 3D in the browser is hard in ways you don't expect. The geometry is the easy part. Lighting, shadows, material blending, camera feel — that's where the hours go. And we learned that Lovable is genuinely powerful not just for UI scaffolding but for iterating on complex rendering logic faster than any traditional workflow would allow.
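The "output contract" half of that lesson amounts to pinning down a strict shape for what the model must return, and rejecting anything that doesn't fit before it touches the renderer. A sketch, with illustrative field names and an assumed 0–100 score range:

```typescript
// Per-room simulation scores the model must return, each in [0, 100].
interface RoomScores {
  roomId: string;
  thermal: number;
  wifi: number;
  acoustics: number;
}

// Validate a parsed model response against the contract before it
// drives any overlay rendering; reject missing or out-of-range fields.
function validateScores(raw: unknown): RoomScores[] {
  if (!Array.isArray(raw)) throw new Error("expected an array of room scores");
  return raw.map((r) => {
    const { roomId, thermal, wifi, acoustics } = r as Record<string, unknown>;
    if (typeof roomId !== "string") throw new Error("missing roomId");
    for (const v of [thermal, wifi, acoustics]) {
      if (typeof v !== "number" || v < 0 || v > 100) {
        throw new Error(`invalid score for room ${roomId}`);
      }
    }
    return { roomId, thermal, wifi, acoustics } as RoomScores;
  });
}
```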
What's next for Matterport
On the technical side: better floor plan parsing with actual computer vision, more accurate furniture scaling, real-time multiplayer collaboration so architects and clients can walk a space together, and deeper simulation fidelity for the thermal, WiFi, and acoustic models.
On the product side: we want Matterport to become the Figma of interior design, a social ecosystem where people publish their room designs, follow designers they admire, save looks they love, and build a living portfolio of their spaces. Every home tells a story. We want to give people a place to tell it.
Built With
- amazon-product-api
- claude
- k2
- lovable
- node.js
- react
- tailwind
- tanstack
- three.js
- typescript
- v2
- vercel
- vite
