Inspiration
We wanted to make filmmaking accessible to anyone with an idea. Traditional 3D animation takes months and entire studios. I am a novelist: I write books in my free time and have even published a few. I wanted to bring my stories to life.
What it does
LUMIS is a real-time AI spatial cinema engine. You type a scene description and it instantly generates characters with backstories, secrets, fears, and desires — then drops them into a 3D world where they autonomously act, speak, move, and confront each other. You direct them like a film director: tell them to confess, fight, kneel, or feed them specific dialogue. Characters can resist your directions if they conflict with their psychology. Features include speech bubbles, a cinematic AI camera, mood lighting, a script engine, screen recording, and VR support for Pico 4 headsets.
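As a rough illustration of the resistance idea, a direction can be checked against a character's generated psychology before it is obeyed. This is a hypothetical sketch; the function and field names are invented, not LUMIS's actual code.

```javascript
// A character carries the generated psychological profile.
function makeCharacter(name, { secrets = [], fears = [], desires = [] } = {}) {
  return { name, secrets, fears, desires };
}

// A direction is accepted unless it touches a protected secret or a fear.
function evaluateDirection(character, direction) {
  const text = direction.toLowerCase();
  if (character.secrets.some((s) => text.includes(s))) {
    return { accepted: false, reason: "protects a secret" };
  }
  if (character.fears.some((f) => text.includes(f))) {
    return { accepted: false, reason: "conflicts with a fear" };
  }
  return { accepted: true, reason: "in character" };
}
```

So a character whose secret is "the letter" would refuse "Confess about the letter" but happily kneel when told to; in practice the check would be done by the language model rather than string matching.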
How we built it
Three.js for real-time 3D rendering, Vite for dev tooling, the OpenRouter API for AI-driven character dialogue and decision-making, and the Meshy API for AI-generated portraits, 3D character models, and world images. WebXR provides VR headset support. Everything runs in the browser — no backend, no game engine, no downloads.
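OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so a browser-side dialogue request might be shaped like the sketch below. The model id, prompt wording, and helper name are assumptions for illustration only.

```javascript
// Build an OpenAI-style chat payload for one character's next line.
function buildDialogueRequest(character, sceneContext) {
  return {
    model: "openai/gpt-4o-mini", // any OpenRouter model id would work here
    messages: [
      {
        role: "system",
        content: `You are ${character.name}. Secret: ${character.secret}. ` +
                 `Fear: ${character.fear}. Stay in character.`,
      },
      { role: "user", content: sceneContext },
    ],
  };
}

// In the browser this payload would be POSTed with fetch, e.g.:
// fetch("https://openrouter.ai/api/v1/chat/completions", {
//   method: "POST",
//   headers: { Authorization: `Bearer ${key}`, "Content-Type": "application/json" },
//   body: JSON.stringify(buildDialogueRequest(char, ctx)),
// });
```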
Challenges we ran into
Getting characters to feel autonomous rather than scripted — they needed to react to each other's words, proximity, and emotional state, not just follow commands. Balancing AI generation (3D models take time) with a seamless user experience led us to a pause-while-loading system that drops you into the scene immediately. Making VR work required HTTPS tunneling and WebXR-compatible animation loops. Preventing API key leakage in a client-side app was another hard lesson learned.
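The pause-while-loading idea can be sketched as a slot per character: the scene starts immediately with a placeholder mesh, and only actions that need the final model wait for generation to finish. Every name below is illustrative, not the project's actual code.

```javascript
// Each character starts with a placeholder so the scene can run right away.
function createCharacterSlot(name) {
  return { name, model: "placeholder", ready: false };
}

// Called when the async 3D generation (e.g. a Meshy job) resolves.
function attachModel(slot, modelUrl) {
  slot.model = modelUrl;
  slot.ready = true;
  return slot;
}

// Close-up shots need the final mesh; everything else runs on the placeholder.
function nextAction(slot, requestedAction) {
  if (requestedAction === "close-up" && !slot.ready) return "wait";
  return requestedAction;
}
```

This keeps the user in the scene from the first second, with the generated model swapped in whenever it arrives.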
Accomplishments that we're proud of
Characters with real psychology — they have secrets they protect, fears that drive them away, desires that pull them forward, and they can defy the director. We're also proud of the speech bubble system, the cinematic camera that intelligently picks shots based on scene tension, and the fact that a single text prompt generates an entire interactive 3D film scene with no pre-made assets.
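Tension-driven shot selection could be as simple as the hypothetical mapping below; the thresholds, shot names, and function signature are invented for illustration, not the engine's real logic.

```javascript
// Map scene tension (0..1) and the number of active speakers to a shot type.
function pickShot(tension, speakerCount) {
  if (tension > 0.8) return "extreme-close-up";   // confrontation peaks
  if (tension > 0.5) return speakerCount > 1 ? "over-the-shoulder" : "close-up";
  if (speakerCount > 2) return "wide";            // keep the ensemble in frame
  return "medium";
}
```

A camera controller would then tween the Three.js camera toward the framing that the chosen shot type implies.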
What we learned
How to build complex real-time 3D applications purely in the browser, how to integrate multiple AI APIs (language models, image generation, 3D model generation) into a cohesive creative tool, and how WebXR works for VR deployment. Also learned the hard way about API key security in client-side applications.
What's next for 3D Filmmaker - LUMIS
Multi-scene storytelling with scene transitions, AI-generated voice acting with lip sync, multiplayer directing where multiple people control different characters, full-body motion capture from a webcam, and a library of shareable scenes that others can remix and re-direct. In a previous hackathon project I built a live spatial camera that records people's interactions in real time, and I want to build on that.
Built With
- css
- html
- javascript
- website