Inspiration

The 2025 Palisades Fire tore through 23,000 acres of Los Angeles. Over 1,000 personnel scrambled to respond — coordinating via radios, flat maps, and whiteboards. The Incident Command System hasn't changed since the 1970s. The US Forest Service flew 17,000+ drone missions in 2024, AI can predict fire spread with terrifying accuracy, satellite imagery refreshes in near-real-time — but the command layer is still a folding table with a paper map.

We're from the Bay Area. We've watched smoke fill our skies. We asked: what if an incident commander could stand inside the fire zone? Not staring at a screen — standing in a photorealistic 3D world, with AI agents ready to execute on voice command.

That's FireSight.

What it does

FireSight is a spatial VR command center for wildfire response built for PICO headsets. An incident commander puts on the headset and is immersed in a 3D terrain of the fire zone — built from NASA satellite imagery rendered through Three.js and Google Photorealistic 3D Tiles.

Four AI agents orbit as spatial panels:

  • Pyro — predicts fire spread using Rothermel physics (wind, slope, fuel type) and paints animated overlays directly onto the terrain: red (now), orange (30 min), yellow (1 hour)
  • Swarm — coordinates 24 reconnaissance drones, optimizing coverage and streaming thermal feeds
  • Evac — calculates evacuation routes in real-time, marking roads green (clear) or red (blocked) as fire advances
  • Deploy — positions ground units (engines, hotshot crews, helicopters, water tenders) and recommends resource allocation

The commander speaks: "Pyro, project 25 mph northwest wind." The fire overlay shifts across the terrain. "Swarm, dispatch drones to Sunset Ridge." Drone icons scatter. "Evac, evacuate Zone 3." Green routes light up on the world model.

A timeline scrubber lets you slide from now to +3 hours and watch fire consume a neighborhood — proving the simulation is dynamic and alive, not a static screenshot.

Beyond voice, the system integrates with OpenClaw via Telegram — an incident commander can issue 40+ tactical commands from a phone ("go defensive," "backfire authorized," "evacuate zone 2") and watch the 3D world update in real-time via server-sent events. The entire 45-person ICS hierarchy responds autonomously.

How we built it

3D World: We used NASA satellite imagery and Google Photorealistic 3D Tiles to build an explorable 3D terrain of the Palisades fire zone. The drone view provides a Minecraft-style fly camera over real topography — double-click anywhere to ignite a fire and watch it spread. All fire rendering is pure Three.js: animated texture overlays painted directly on the terrain mesh, with raycasting for interaction.
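The glue between a click and the simulation is mapping the raycast hit point on the terrain into a fire-grid cell. A minimal sketch of that mapping — the terrain size, the assumption that the mesh is centred at the origin, and the function names are all illustrative, not the project's actual values:

```javascript
// Map a terrain hit point (world x/z from a Three.js raycast) to a cell in the
// 256x256 fire grid, then mark it as burning. TERRAIN_SIZE is an assumed value.
const GRID = 256;            // fire-simulation grid resolution
const TERRAIN_SIZE = 8000;   // terrain width/depth in metres (assumption)

function worldToCell(x, z) {
  // Terrain is assumed centred at the origin, so shift into [0, TERRAIN_SIZE).
  const u = (x + TERRAIN_SIZE / 2) / TERRAIN_SIZE;
  const v = (z + TERRAIN_SIZE / 2) / TERRAIN_SIZE;
  if (u < 0 || u >= 1 || v < 0 || v >= 1) return null; // clicked off-terrain
  return { col: Math.floor(u * GRID), row: Math.floor(v * GRID) };
}

function igniteAt(grid, x, z) {
  const cell = worldToCell(x, z);
  if (cell) grid[cell.row * GRID + cell.col] = 1; // 1 = burning
  return cell;
}
```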

Fire Simulation: We implemented a cellular automata engine based on the Rothermel (1972) model — the same physics the US Forest Service uses. It runs on a 256x256 grid covering the real Palisades geography, accounting for wind speed/direction, terrain slope/aspect, five fuel types (chaparral, grass, timber, urban, rock), ember spotting probability, and retardant suppression effects.
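The core loop can be sketched as a cellular-automata step in the spirit of the engine described above: a burning cell tries to ignite its neighbours, with a wind-alignment bias. The fuel spread rates, the linear wind term, and the deterministic threshold are illustrative placeholders, not the calibrated Rothermel parameters:

```javascript
// One CA spread step: burning cells ignite downwind neighbours preferentially.
// SPREAD_RATE values and the 0.5 threshold are invented for illustration.
const SPREAD_RATE = { chaparral: 0.8, grass: 0.9, timber: 0.5, urban: 0.3, rock: 0.0 };

function stepFire(burning, fuel, size, wind) {
  // wind is a unit vector {x, y}; spread aligned with it is favoured.
  const next = burning.slice();
  for (let r = 0; r < size; r++) {
    for (let c = 0; c < size; c++) {
      if (!burning[r * size + c]) continue;
      for (const [dr, dc] of [[-1, 0], [1, 0], [0, -1], [0, 1]]) {
        const nr = r + dr, nc = c + dc;
        if (nr < 0 || nr >= size || nc < 0 || nc >= size) continue;
        const i = nr * size + nc;
        if (next[i]) continue;
        // Alignment of spread direction with wind: -1 (upwind) .. 1 (downwind).
        const align = dc * wind.x + dr * wind.y;
        const p = SPREAD_RATE[fuel[i]] * (0.5 + 0.5 * align);
        if (p > 0.5) next[i] = 1; // deterministic cutoff for the sketch
      }
    }
  }
  return next;
}
```

The real engine adds slope/aspect terms, ember spotting probability, and retardant suppression on top of this skeleton.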

45-Agent ICS Engine: We built a full simulation of the real Incident Command System — from the Incident Commander down to individual engine crews, helicopters, and hand crews. The system escalates through six phases (Patrol → Detection → Initial Attack → Extended Attack → Crisis → Full ICS), activating new roles at each level. Five AI agents (Overwatch, Predict, Swarm, Evac, Deploy) propose decisions, and the commander approves or overrides through a spatial decision queue.
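One way to picture the escalation ladder is a one-way phase function driven by incident size. The acreage thresholds below are invented for illustration — the actual engine escalates on richer signals (detections, resource commitments, safety triggers) — but the shape is the same: phases only ratchet upward during an incident:

```javascript
// Six-phase escalation ladder. THRESHOLDS (acres) are assumed values.
const PHASES = ['Patrol', 'Detection', 'Initial Attack', 'Extended Attack', 'Crisis', 'Full ICS'];
const THRESHOLDS = [0, 1, 10, 100, 1000, 5000];

function phaseFor(acresBurning) {
  let phase = 0;
  for (let i = 0; i < THRESHOLDS.length; i++) {
    if (acresBurning >= THRESHOLDS[i]) phase = i;
  }
  return PHASES[phase];
}

// Escalation is monotonic: an incident never steps back down a phase mid-run.
function nextPhase(current, acresBurning) {
  const target = phaseFor(acresBurning);
  return PHASES.indexOf(target) > PHASES.indexOf(current) ? target : current;
}
```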

Strategy System: Eight tactical dimensions (posture, attack mode, firing authority, structure protection, evacuation zones, air priority, drone mode, safety stops) propagate in real-time across every view. Change from "offensive" to "defensive" and watch units reposition instantly across the 3D terrain.

Command API + OpenClaw: An Express backend parses natural language commands with fuzzy matching, handles safety-critical confirmations (backfire authorization, mass evacuation), and broadcasts strategy changes via SSE to all connected clients. OpenClaw connects this to Telegram, so a field commander can issue orders from their phone.
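The SSE fan-out at the heart of this can be sketched framework-free: each connected client holds an open response stream, and a strategy change is serialised once and written to all of them. The `clients` set, the `strategy` event name, and the payload shape are assumptions for illustration, not the project's actual Express code:

```javascript
// Minimal SSE broadcast sketch. Each entry in `clients` is an open HTTP
// response with Content-Type: text/event-stream.
const clients = new Set();

function sseMessage(event, payload) {
  // SSE wire format: an optional "event:" line, a "data:" line, then a blank line.
  return `event: ${event}\ndata: ${JSON.stringify(payload)}\n\n`;
}

function broadcastStrategy(change) {
  const msg = sseMessage('strategy', change);
  for (const res of clients) res.write(msg);
  return msg;
}
```

On the React side, an `EventSource` listening for the `strategy` event would feed the change into state, which re-renders the Three.js terrain and the drone view.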

Voice-First Interaction: Web Speech API for continuous listening, Google Gemini 2.5 Flash for natural language understanding, with fallback regex parsing. No controllers — just voice and gaze.
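The regex fallback can be sketched as a small pattern table tried in order when the LLM parse fails. The three patterns below cover the commands quoted earlier in this write-up; the action shapes and field names are illustrative assumptions:

```javascript
// Fallback command parser: first matching pattern wins. Patterns and the
// returned action objects are illustrative, not the full 40+ command grammar.
const COMMANDS = [
  { re: /^(?:pyro|predict),?\s+project\s+(\d+)\s*mph\s+(\w+)\s+wind/i,
    toAction: (m) => ({ agent: 'pyro', action: 'wind', speed: +m[1], dir: m[2].toLowerCase() }) },
  { re: /^swarm,?\s+dispatch\s+drones?\s+to\s+(.+)/i,
    toAction: (m) => ({ agent: 'swarm', action: 'dispatch', target: m[1] }) },
  { re: /^evac,?\s+evacuate\s+zone\s+(\d+)/i,
    toAction: (m) => ({ agent: 'evac', action: 'evacuate', zone: +m[1] }) },
];

function parseCommand(text) {
  for (const { re, toAction } of COMMANDS) {
    const m = text.trim().match(re);
    if (m) return toAction(m);
  }
  return null; // no match: fall through to the LLM, or ask the commander to repeat
}
```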

Stack: React, Three.js, Google 3D Tiles, Cesium, NASA satellite data, Express.js, OpenClaw, Gemini, ElevenLabs, WebSpatial SDK.

Challenges we ran into

  • 3D Tiles + fire overlays: Rendering Google Photorealistic 3D Tiles AND animated Three.js fire spread simultaneously required careful layer management — raycasting against a separate collider mesh while the fire shader paints on the terrain surface.
  • Real-time strategy propagation: When the IC changes posture from "offensive" to "defensive," that change must flow from the Telegram bot → Express server → SSE broadcast → React state → Three.js terrain → iframe drone view. Keeping this pipeline reliably under 500 ms was hard.
  • ICS protocol complexity: Modeling a real 45-agent incident command hierarchy is hard. We studied actual USFS incident response protocols — the 10 Standard Fire Orders, LCES safety protocol, division/branch structure — to get escalation phases and decision points right.
  • Scope discipline: Four AI agents, a physics simulation, voice control, a Telegram bot, a 3D drone view, a 2D map, an ICS org chart, a strategy panel, and a command API — in 35 hours. We had to ruthlessly prioritize. The fire spread loop came first. Everything else was layered after the core worked.
  • Voice reliability: Hackathon floors are loud. We added fallback regex-based command parsing and the Telegram/OpenClaw path as an alternative input channel.

Accomplishments that we're proud of

  • The world model IS the interface. Every agent annotates the terrain directly — fire overlays, drone positions, evacuation routes, unit icons all render ON the 3D world. This is spatial computing done right, not a 2D dashboard ported to VR.
  • 45 autonomous agents running a real ICS protocol — with message routing, escalation phases, and human-in-the-loop decision points. This isn't a mockup. It's a simulation of how wildfire response actually works.
  • Multi-modal command: Voice in VR, Telegram from the field, click on the map. Three ways to issue the same 40+ tactical commands, all converging on a single strategy state that updates every view in real-time.
  • The interaction loop works: speak a command → agent responds → world updates → you react. Under 3 seconds. It feels like commanding a real operation.
  • We built something that could genuinely save lives. The US spends $3B+/year on wildfire suppression with command tools from the 1970s.

What we learned

  • World models aren't just about generating pretty scenes — they're about creating shared spatial context that multiple AI agents can read from and write to simultaneously.
  • Voice-first interaction in VR is dramatically more natural than controllers. When you're standing inside a 3D world, speaking to AI agents feels like commanding a real team.
  • The Incident Command System is a masterpiece of organizational design — and it's begging for a spatial, AI-native upgrade.
  • OpenClaw + Telegram turned out to be a killer combo for field-to-command communication. A battalion chief doesn't need a VR headset — they need to text "go defensive on Division Alpha" from a phone and have it just work.

What's next for FireSight

  • Live data feeds: Real-time NASA FIRMS hotspot detection, NOAA weather, and drone telemetry streaming into the world model
  • Multi-user: Multiple commanders and field officers sharing the same spatial world
  • Training mode: Replay historical fires (Camp Fire, Dixie Fire, Palisades) to train incident commanders in VR
  • Beyond wildfire: The spatial agentic command pattern applies to any complex coordination — hurricane response, search and rescue, urban disaster management
