Professional thermal intelligence dashboard for firefighter operations. Displays normal (RGB) and thermal camera feeds side-by-side with live AI analysis: person detection, heat zones with direction and distance, flashover risk scoring, and an AI-powered tactical advisor. Video sources are read from local disk only; no upload.
- Login — Access is gated by username/password; after login you see a sidebar of up to 7 firefighter camera feeds.
- Feed selection — Each feed is a pair of aligned videos: normal (`video N.mp4`) and thermal (`video N_thermal_intense.mp4`). Selecting a feed loads both streams.
- Dual-feed processing — For each frame pair:
- RGB feed is used for person detection (YOLO) and depth estimation (MiDaS). The RGB frame is shown unchanged on one side.
- Thermal feed is converted to grayscale, temporally smoothed, and analyzed for heat zones. Overlays (person boxes, heat zones, HUD) are drawn only on the thermal view.
- Risk engine — A rolling window of average thermal intensity is used to compute a flashover risk score (0–100) and trend (RISING / STABLE / FALLING). HIGH risk triggers an on-screen evacuation warning.
- Command Advisor — On demand, the current snapshot (victim count, risk, heat zones) is sent to a local LLM (Ollama) to generate short tactical recommendations for the Fire Captain.
| Component | Model / source | Version / how it’s used |
|---|---|---|
| Person detection & tracking | YOLOv8 (Ultralytics) | Weights: yolov8n.pt (nano). Loaded via ultralytics.YOLO("yolov8n.pt"); downloads automatically on first run. Runs on the RGB feed at 640px width with model.track(..., classes=[0], conf=0.4, iou=0.45, tracker="bytetrack.yaml") for stable, ID-consistent person tracking. |
| Depth estimation | MiDaS (Intel ISL) | MiDaS_small from torch.hub.load("intel-isl/MiDaS", "MiDaS_small") with transforms.small_transform. Used for monocular depth on the RGB frame. Depth is cached and recomputed every 5 frames. Normalized depth at each heat-zone centroid is converted to distance in feet (10–250 ft range) for heat zone labels. |
| Tactical advisor | Ollama (Gemma) | Model: gemma3:4b. Called via http://localhost:11434/api/generate (stream: false). Prompt is built from live data: victim count, risk level/score/trend, and heat zone list (direction, distance, risk, intensity). No cloud API is required; Ollama must be running locally (e.g. brew services start ollama). |
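The non-streaming `/api/generate` call in the table above can be sketched as below. The payload shape matches Ollama's REST API; the prompt wording and helper name are illustrative, not the exact ones in `llm_advisor.py`:

```python
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's local REST endpoint

def build_briefing_request(snapshot: dict) -> dict:
    """Build the /api/generate payload from a live snapshot (illustrative)."""
    zones = "; ".join(
        f"{z['direction']} {z['distance_ft']} ft ({z['risk']}, intensity {z['intensity']})"
        for z in snapshot["heat_zones"]
    ) or "none"
    prompt = (
        f"Victims detected: {snapshot['victims']}. "
        f"Flashover risk: {snapshot['risk_level']} "
        f"({snapshot['risk_score']}/100, {snapshot['trend']}). "
        f"Heat zones: {zones}. "
        "Give 3-5 short tactical recommendations for the Fire Captain."
    )
    return {"model": "gemma3:4b", "prompt": prompt, "stream": False}

# Sending it (requires a running Ollama):
# resp = requests.post(OLLAMA_URL, json=build_briefing_request(snapshot), timeout=60)
# briefing = resp.json()["response"]
```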
| Feature | How it works |
|---|---|
| Dual video feed | Normal (RGB) and thermal streams shown side-by-side; frames are advanced by index so they stay in sync. Both are read from a configured local directory. |
| Person detection | YOLOv8 nano on the RGB feed, class “person” only. ByteTrack keeps consistent track IDs across frames. Bounding boxes and labels are drawn on the thermal view for visibility in smoke. |
| Heat zone mapping | Thermal frame is grayscale; top ~5% hottest pixels (95th percentile) form a binary mask. Morphological close/open clean the mask; contours above a minimum area become “heat zones.” Each zone gets: direction (Left / Ahead / Right from normalized x), distance (feet, from MiDaS depth at centroid or from position/area heuristic), intensity (mean gray in zone), and risk (HIGH ≥0.80, MEDIUM ≥0.60, else LOW). |
| Flashover risk | RiskEngine keeps a rolling window of average thermal intensity (0–255). Score (0–100) = heat factor (0–60 from current intensity) + growth factor (0–40 from rate of change over the window). Levels: LOW (<40), MEDIUM (40–69), HIGH (70–100). HIGH shows “DANGER: EVACUATE IMMEDIATELY” on the HUD. Trend is RISING / STABLE / FALLING from the last 5 readings. |
| Building information | Optional Building Info button opens a dialog. Building data is fetched from ArcGIS DC Historical Buildings (2017) Feature Service (e.g. address, name, stories, purpose, materials, roof type, dimensions). Used for pre-planning and situational awareness. |
| Command Advisor | “Get Tactical Briefing” sends a snapshot (victim count, detections, risk level/score/trend, recent risk history, heat zones) to the local Ollama model (gemma3:4b). The model returns 3–5 short, actionable bullet points grounded in the data (life safety, exposure, property; no speculation). |
| Multi-camera sidebar | Up to 7 feeds (Engine 1–7 / Command). Each feed is a pair: video N.mp4 (normal) and video N_thermal_intense.mp4 (thermal). Sidebar shows a still preview per feed; clicking selects that feed for the main dual view and live analysis. |
| Insights tabs | Overview, People, Heat Zones, Risk History tabs show structured tables and a risk-score-over-time chart derived from the same pipeline data. |
| No upload | All video is read from disk; no frames are sent to external servers except optional ArcGIS for building info and optional Ollama (local). |
| Feature | Description |
|---|---|
| Dual video feed | Normal (RGB) and thermal streams shown simultaneously |
| No upload | All video files are read from a configured local directory |
| Person detection | YOLOv8 + ByteTrack on the RGB feed (victim detection with stable IDs) |
| Heat zone mapping | Top ~5% hottest pixels on thermal with direction (Left / Ahead / Right), distance (ft), and risk level |
| Flashover risk | Composite score (0–100) from heat level and growth rate; HIGH triggers evacuation warning |
| Depth-aware distance | MiDaS depth on RGB used to estimate distance in feet for heat zones |
| Command Advisor | Local Ollama (Gemma 3 4B) tactical briefing from live detection and risk data |
| Building info | DC Historical Buildings (2017) ArcGIS API for address/building details |
| Clean UI | Modern dashboard with sidebar feed selector, metrics, and tabs |
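The composite flashover score described above can be sketched as follows. The factor ranges (0–60 heat, 0–40 growth), level cutoffs, and 5-reading trend come from the tables; the exact scaling inside `risk_engine.py` is an assumption:

```python
from collections import deque

class RiskEngineSketch:
    """Illustrative flashover-risk scoring over a rolling intensity window."""

    def __init__(self, window: int = 30):
        self.history = deque(maxlen=window)

    def update(self, mean_intensity: float) -> dict:
        """mean_intensity: average thermal pixel value (0-255) for the frame."""
        self.history.append(mean_intensity)
        # Heat factor: 0-60, proportional to current intensity.
        heat = min(60.0, mean_intensity / 255.0 * 60.0)
        # Growth factor: 0-40, from the rate of change over the window.
        if len(self.history) >= 2:
            rate = (self.history[-1] - self.history[0]) / (len(self.history) - 1)
            growth = max(0.0, min(40.0, rate * 4.0))  # assumed scale factor
        else:
            growth = 0.0
        score = heat + growth
        level = "LOW" if score < 40 else "MEDIUM" if score < 70 else "HIGH"
        # Trend from the last 5 readings; the +/-2 dead band is an assumption.
        recent = list(self.history)[-5:]
        delta = recent[-1] - recent[0]
        trend = "RISING" if delta > 2 else "FALLING" if delta < -2 else "STABLE"
        return {"score": round(score, 1), "level": level, "trend": trend}
```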
- Python 3.10+
- pip
- Ollama (optional, for Command Advisor): install from ollama.com and run `ollama pull gemma3:4b`
Place aligned normal and thermal videos in the project’s video directory (default: `fire fighter cam/`).
Default single pair:
- `normal.mp4` — RGB feed
- `thermal.mp4` — Thermal feed (same length, aligned by timestamp)
Multi-feed (7 cameras):
- `video 1.mp4` … `video 7.mp4` — Normal feeds
- `video 1_thermal_intense.mp4` … `video 7_thermal_intense.mp4` — Thermal feeds
If your folder or filenames differ, set environment variables (see Configuration).
```shell
cd Fire
pip install -r requirements.txt
```

YOLOv8 nano (`yolov8n.pt`) and MiDaS weights download automatically on first run.

```shell
streamlit run app.py
```

Open the URL shown (e.g. http://localhost:8501). Log in (default credentials in app), pick a camera feed in the sidebar, and click Start stream. Both feeds run in sync; metrics and risk update from the thermal pipeline.
Optional environment variables:
| Variable | Default | Description |
|---|---|---|
| `FIRE_CAM_VIDEO_DIR` | `./fire fighter cam` | Directory containing the video files |
| `FIRE_CAM_NORMAL_VIDEO` | `normal.mp4` | Filename of the RGB feed (single-pair mode) |
| `FIRE_CAM_THERMAL_VIDEO` | `thermal.mp4` | Filename of the thermal feed (single-pair mode) |
Example (Unix/macOS):

```shell
export FIRE_CAM_VIDEO_DIR="/path/to/your/videos"
streamlit run app.py
```

A sample `.env.example` is included; copy it to `.env` and load it with direnv or python-dotenv if you use one.
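Reading these variables can be sketched as below; the defaults mirror the table above, though the actual lookup logic lives in `config.py`:

```python
import os
from pathlib import Path

# Defaults mirror the configuration table; env vars override them.
VIDEO_DIR = Path(os.environ.get("FIRE_CAM_VIDEO_DIR", "./fire fighter cam"))
NORMAL_VIDEO = os.environ.get("FIRE_CAM_NORMAL_VIDEO", "normal.mp4")
THERMAL_VIDEO = os.environ.get("FIRE_CAM_THERMAL_VIDEO", "thermal.mp4")

def get_video_paths() -> tuple:
    """Resolve the single-pair RGB/thermal paths (sketch of config.get_video_paths)."""
    return VIDEO_DIR / NORMAL_VIDEO, VIDEO_DIR / THERMAL_VIDEO
```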
```
Fire/
├── app.py               # Streamlit dashboard (login, sidebar feeds, dual view, metrics, Building Info, Command Advisor)
├── config.py            # Video paths and validation (get_video_paths, validate_video_sources)
├── video_processor.py   # YOLOv8 + ByteTrack, MiDaS depth, heat zones, dual-feed processing, HUD
├── risk_engine.py       # Flashover risk score and trend from thermal intensity history
├── llm_advisor.py       # Ollama (Gemma 3 4B) tactical briefing from cached result and risk
├── utils.py             # Resize, thermal effect, drawing, direction/distance helpers
├── requirements.txt     # Python dependencies (streamlit, ultralytics, torch, opencv, etc.)
├── .env.example         # Optional env template
└── README.md            # This file
```
```shell
pip install -r requirements.txt
streamlit run app.py
```

To serve on all interfaces:

```shell
streamlit run app.py --server.port 8501 --server.address 0.0.0.0
```

Example systemd unit (adjust paths and user):
```ini
[Unit]
Description=Prometheus Thermal Intelligence Dashboard
After=network.target

[Service]
Type=simple
User=www-data
WorkingDirectory=/opt/prometheus
Environment="FIRE_CAM_VIDEO_DIR=/var/lib/prometheus/videos"
ExecStart=/usr/bin/streamlit run app.py --server.port 8501 --server.address 0.0.0.0
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Example Dockerfile:
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV FIRE_CAM_VIDEO_DIR=/data
VOLUME /data
EXPOSE 8501
CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
```

Build and run (mount the folder with your video pairs):
```shell
docker build -t prometheus .
docker run -p 8501:8501 -v "/path/to/videos:/data" prometheus
```

- Sync: RGB and thermal are advanced by frame index; use the same length and frame rate for correct alignment.
- Thermal input: Heat zones and flashover risk use the thermal feed (grayscale). Person detection and depth use the RGB feed; overlays are drawn on the thermal view.
- Risk levels: LOW (<40), MEDIUM (40–69), HIGH (70–100). HIGH triggers the on-screen evacuation warning.
- Command Advisor: Requires Ollama running with `gemma3:4b`. If Ollama is not available, the briefing button shows a connection error.
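A quick availability probe can be written against Ollama's model-listing endpoint; whether the app itself checks this way is an assumption:

```python
import requests

def ollama_available(base: str = "http://localhost:11434") -> bool:
    """Return True if a local Ollama server responds (probes /api/tags)."""
    try:
        return requests.get(base + "/api/tags", timeout=2).status_code == 200
    except requests.RequestException:
        return False
```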
Built for BisonHacks. MIT License.