SMAZ Rover
The product that makes fire extinguishers obsolete
The problem
Residential fires kill thousands annually, and the critical window for intervention is measured in minutes—not hours.
- Small fires escalate exponentially: a wastebasket fire becomes room-engulfing in under 3 minutes.
- Traditional smoke detectors only alert; they don't act.
- Fire departments face response time constraints, especially in rural areas or multi-story buildings.
- Confined spaces (basements, storage rooms, equipment closets) are high-risk but difficult for humans to access quickly.
Our solution: autonomous first response before the fire department arrives.
Imagine a small rover stationed in your home that activates when smoke is detected. It navigates to the fire source, confirms visual detection of flames, and deploys suppression—buying critical minutes for evacuation and professional response.
In a real deployment, this would integrate with existing smart home infrastructure (smoke detectors, security cameras) and could coordinate with building management systems in commercial spaces.
Why a robot (proof-of-concept)
Current fire suppression is binary: either you have expensive installed systems (sprinklers, which cause water damage and require professional installation), or you grab a handheld extinguisher and hope you catch it early.
We built FireGuard Rover as a proof-of-concept that exercises the complete autonomy loop:
visual detection → navigate to source → mechanical actuation → suppress fire.
The system demonstrates:
- Real-time computer vision running on embedded hardware
- Closed-loop control integrating perception and actuation
- A practical suppression mechanism that works with standard fire extinguishers
It's the same concept as industrial fire-fighting robots, but with a smaller footprint and consumer-accessible components.
How we built it
We built FireGuard Rover as a full-stack autonomous system spanning computer vision, embedded control, and mechanical actuation.
1) Fire detection: YOLOv4-tiny on Raspberry Pi 4
The detection pipeline lives on the RPi 4 and uses a custom-trained YOLOv4-tiny model.
Training details:
- Dataset: Kaggle fire detection dataset (350 images used for training/validation split after filtering low-quality samples)
- Architecture: YOLOv4-tiny (faster inference, suitable for embedded deployment)
- Training framework: Darknet with transfer learning from COCO pre-trained weights
- Input resolution: 416×416 (balance between detection accuracy and inference speed)
Why YOLOv4-tiny? Full YOLO models are too heavy for real-time inference on RPi 4. YOLOv4-tiny sacrifices some accuracy for 3-4× faster inference, which is critical when the robot needs to react within seconds.
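For reference, adapting a stock YOLOv4-tiny cfg to a single fire class typically means editing each [yolo] layer and the [convolutional] layer immediately before it; this is a sketch of the standard Darknet convention, and exact line positions vary between cfg versions:

```ini
[convolutional]
filters=18        # (classes + 5) * 3 anchors = (1 + 5) * 3 = 18

[yolo]
classes=1         # single "fire" class
```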
Inference pipeline:
- Capture from RPi Camera Module 2.1 via the picamera2 library (1080p30 capable; we run at 640×480 for processing speed)
- Preprocessing: resize to 416×416, normalize to [0, 1], convert BGR→RGB
- YOLOv4-tiny inference via OpenCV's DNN module (cv2.dnn.readNetFromDarknet)
- Post-filtering:
  - Confidence threshold: 0.65 (balances false positives vs. missed detections)
  - NMS (non-max suppression) to eliminate duplicate detections
  - Color validation: check that the detection region contains flame-like hues (HSV thresholding for orange/yellow/red)
- Bounding box extraction: the centroid is used as the navigation heading
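The post-filtering stage (confidence threshold followed by greedy NMS) can be sketched in plain Python. This is illustrative rather than the project's exact pipeline code; boxes are assumed to be (x1, y1, x2, y2) tuples:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def filter_detections(boxes, scores, conf_thresh=0.65, iou_thresh=0.4):
    """Drop low-confidence boxes, then keep the highest-scoring box
    of each overlapping cluster (greedy non-max suppression)."""
    candidates = [(s, b) for s, b in zip(scores, boxes) if s >= conf_thresh]
    candidates.sort(key=lambda t: t[0], reverse=True)
    kept = []
    for score, box in candidates:
        if all(iou(box, kb) < iou_thresh for _, kb in kept):
            kept.append((score, box))
    return kept
```

With two heavily overlapping fire detections and one below the 0.65 threshold, only the strongest box survives.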
2) Inter-processor communication: RPi → Arduino
The RPi handles perception and decision-making; the Arduino handles real-time motor control and servo actuation.
Communication protocol: a single GPIO trigger line (RPi GPIO pin → Arduino digital input)
- HIGH = activate suppression sequence
- LOW = standby
Why split compute?
- Arduino provides deterministic real-time control (no OS jitter)
- RPi handles computationally expensive CV workload
- Separation of concerns: perception vs. actuation
3) Locomotion: tank-style differential drive
The rover uses a 4-wheel drive chassis with tank-style control (no steering—differential speed creates rotation).
Motor driver setup:
- L298N dual H-bridge: each side controls 2 wheels in parallel
- Channel A (ENA, IN1, IN2): left wheels
- Channel B (ENB, IN3, IN4): right wheels
- Speed control: PWM on ENA/ENB pins (0-255 duty cycle)
- Direction control: digital HIGH/LOW on IN1-IN4
Movement primitives (hardcoded in the Arduino sketch):

```cpp
void moveForward() {
  digitalWrite(IN1, HIGH); digitalWrite(IN2, LOW);  // left wheels forward
  digitalWrite(IN3, HIGH); digitalWrite(IN4, LOW);  // right wheels forward
  analogWrite(ENA, 180);  // ~70% duty cycle
  analogWrite(ENB, 180);
}

void rotateLeft() {
  digitalWrite(IN1, LOW);  digitalWrite(IN2, HIGH); // left wheels reverse
  digitalWrite(IN3, HIGH); digitalWrite(IN4, LOW);  // right wheels forward
  analogWrite(ENA, 150);
  analogWrite(ENB, 150);
}
```
Open-loop control trade-offs:
- ✅ Simple, fast to implement
- ✅ No encoder wiring complexity
- ❌ No feedback → drift over time
- ❌ Speed variations between motors not compensated
For a hackathon demo navigating short distances, open-loop was sufficient. Production would add wheel encoders + closed-loop speed control.
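The drift from open-loop control is easy to quantify. For a differential drive, a left/right speed mismatch produces an angular velocity of omega = (v_right − v_left) / track_width. A quick sketch (the speeds and track width below are hypothetical, not measurements from our rover):

```python
import math

def heading_drift_deg(v_left, v_right, track_width, duration_s):
    """Heading change in degrees caused by a wheel-speed mismatch.

    Differential-drive kinematics: omega = (v_right - v_left) / track_width
    (rad/s). Speeds in m/s, track width in m.
    """
    omega = (v_right - v_left) / track_width
    return math.degrees(omega * duration_s)
```

With an assumed 0.02 m/s mismatch on a 0.18 m track, a 5-second "straight" run already yields roughly 32° of heading error, which is why encoder feedback matters for anything beyond short demo runs.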
4) Fire suppression mechanism: servo-actuated extinguisher
Instead of custom pneumatics or solenoid valves, we used a mechanical pressing mechanism with high-torque servos.
Hardware:
- 2× TD-8125MG servos (25 kg·cm torque, metal gears, waterproof)
- Servo 1: presses down on aerosol extinguisher nozzle
- Servo 2: rotates the camera/sprayer assembly (pan motion for targeting)
- Extinguisher: standard pressurized fire extinguisher can (mounted rigidly to chassis)
- Mounting: 3D-printed bracket holds servo arm aligned with nozzle center
Actuation sequence (hardcoded in the Arduino sketch):

```cpp
void suppressFire() {
  // Step 1: align sprayer with detected flame (servo 2)
  rotateServo2.write(targetAngle);     // angle computed from bbox centroid
  delay(800);                          // allow settling time

  // Step 2: press the extinguisher nozzle (servo 1)
  pressServo1.write(pressedPosition);  // ~90° from neutral
  delay(5000);                         // spray for 5 seconds

  // Step 3: release the nozzle
  pressServo1.write(neutralPosition);
}
```
Why servos instead of solenoids?
- Cheaper (no custom valve assembly)
- Easier mechanical integration (no pressure fittings)
- Sufficient force: 25 kg·cm easily overcomes aerosol spring resistance
- PWM control (50 Hz, 500–2500 µs pulses) works out of the box with the standard Arduino Servo library
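The angle-to-pulse mapping behind that servo PWM range is a simple linear interpolation; a sketch (the 500–2500 µs endpoints come from the spec above, the helper name is ours):

```python
def angle_to_pulse_us(angle_deg, min_us=500, max_us=2500):
    """Map a servo angle (0-180°) to a pulse width in microseconds,
    clamping out-of-range angles to the valid interval."""
    angle_deg = max(0.0, min(180.0, angle_deg))
    return min_us + (max_us - min_us) * angle_deg / 180.0
```

So a 90° command corresponds to a 1500 µs pulse, repeated at 50 Hz (every 20 ms).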
Critical design consideration: The servo arm must be rigidly mounted and aligned with the nozzle's axis of travel. Any lateral force will stall the servo or snap the arm. We used a reinforced 3D-printed clamp to ensure the force vector was purely vertical.
5) State machine: autonomous behavior
The rover operates in 3 states:
SEARCH mode:
- Rotate in place (slow tank turn)
- CV pipeline scans for fire continuously
- Transition: fire detected → APPROACH
APPROACH mode:
- Compute heading error: error = (bbox_center_x - frame_center_x) / frame_width
- If |error| > threshold: rotate to center the fire in the frame
- Else: drive forward toward the fire
- Transition: bbox area exceeds threshold (fire is close) → SUPPRESS
SUPPRESS mode:
- Stop movement
- Trigger GPIO → Arduino activates servo sequence
- Spray for 5 seconds
- Return to SEARCH
This simple FSM (finite state machine) keeps the logic deterministic and testable.
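The FSM above can be sketched as a pure transition function, which is what makes it easy to unit-test off the robot. The thresholds and the Detection shape here are illustrative assumptions, not the project's exact code:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    center_x: float   # bbox centroid x, pixels
    area: float       # bbox area, pixels^2

FRAME_WIDTH = 640
HEADING_THRESHOLD = 0.1   # normalized heading-error tolerance (assumed)
CLOSE_AREA = 20000        # bbox area meaning "fire is close" (assumed)

def step(state, detection):
    """One control tick: return (next_state, motor/servo command)."""
    if state == "SEARCH":
        # Slow tank turn until the CV pipeline reports a fire.
        return ("APPROACH", "stop") if detection else ("SEARCH", "rotate")
    if state == "APPROACH":
        if detection is None:
            return "SEARCH", "rotate"       # lost the fire, re-scan
        if detection.area > CLOSE_AREA:
            return "SUPPRESS", "stop"       # close enough to spray
        error = (detection.center_x - FRAME_WIDTH / 2) / FRAME_WIDTH
        if abs(error) > HEADING_THRESHOLD:
            return "APPROACH", "rotate_right" if error > 0 else "rotate_left"
        return "APPROACH", "forward"
    if state == "SUPPRESS":
        # GPIO trigger fires the Arduino servo sequence, then re-scan.
        return "SEARCH", "trigger_servo_sequence"
    raise ValueError(f"unknown state: {state}")
```

Because `step` has no side effects, each transition arrow of the FSM can be asserted directly in a test harness.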
6) Power distribution
A mobile robot running motors + servos + compute is power-hungry.
Power architecture:
- 2S LiPo (7.4V, 2200mAh) → L298N (motors)
- 5V BEC (buck converter) → Raspberry Pi 4
- 6V regulated rail → Arduino Uno + servos (shared supply)
Why separate rails?
- RPi 4 is sensitive to voltage sag (motors cause brown-outs)
- Servos draw high instantaneous current during actuation
- L298N has onboard 5V regulator, but insufficient current for RPi + servos
We used a dedicated 5A buck converter for the RPi and a separate 3A regulator for the Arduino + servo rail.
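A back-of-envelope runtime estimate for the 2S 2200 mAh pack follows directly from the rail layout. The per-rail current draws below are assumed ballpark figures for illustration, not measured values:

```python
# Assumed average current draw per rail (amps) -- ballpark, not measured.
loads_a = {
    "rpi4 + camera":    1.2,   # typical under sustained CV load
    "motors (L298N)":   1.5,   # both channels, averaged over a run
    "arduino + servos": 0.8,   # includes actuation bursts, averaged
}

capacity_ah = 2.2                          # 2200 mAh pack
total_a = sum(loads_a.values())            # combined draw
runtime_min = capacity_ah / total_a * 60   # ignores converter losses
```

Under these assumptions the pack sustains roughly 3.5 A for a bit under 40 minutes, which is comfortably more than a demo run but shows why idle power matters for a device meant to sit on standby.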
Challenges we ran into
- Real-time CV on embedded hardware: even YOLOv4-tiny is computationally expensive on the RPi 4. We had to drop the processing resolution and run inference through OpenCV's built-in DNN module rather than a heavier TensorFlow/PyTorch runtime. Even so, the frame rate was inconsistent, varying from 5 to 12 FPS depending on scene complexity.
- Servo torque vs. extinguisher spring force: initial testing used cheaper SG90 servos (1.8 kg·cm), which couldn't overcome the aerosol nozzle's spring. Upgrading to 25 kg·cm servos solved this but added weight and power draw.
- Mechanical alignment: the servo arm had to press exactly on the nozzle center. Off-axis force caused binding. We iterated through 3 bracket designs before getting reliable actuation.
- False positives in fire detection: the model occasionally triggered on bright orange objects (safety vests, LED lights). Adding HSV color filtering reduced this significantly.
- Power brownouts: initial wiring caused the RPi to reboot when servos actuated. Separate power rails + bulk capacitors on the servo supply fixed this.
- Tank-style steering drift: without encoders, the rover drifted during "drive straight" commands due to motor speed mismatch. We manually tuned PWM values to get close, but it's not perfect.
Accomplishments that we're proud of
- We built a robot that actually suppresses fire. Not a simulation—real flames, real extinguisher, real autonomous actuation.
- Custom-trained YOLO model running inference on embedded hardware in real-time.
- Mechanical actuation from scratch: designing a servo mechanism that reliably presses an aerosol can is harder than it sounds.
- Full autonomy loop: perception → decision → action, with no human in the loop once activated.
- Cross-platform integration: RPi (Linux) + Arduino (bare metal) + OpenCV (C++) + Python control logic all working together.
In all seriousness though, we're very proud of what we made. Our fully autonomous rover successfully detects fires, navigates toward them, and deploys suppression, all without human intervention. We also built a system that could plausibly be deployed in residential or commercial spaces, where a few minutes of early intervention can save lives.
What we learned
We came into this competition with limited practical knowledge of computer vision deployment, real-time embedded control, and electromechanical integration. We walked out with hands-on experience in:
- YOLO training and deployment (not just inference, but actually training a custom model)
- Embedded computer vision optimization (balancing accuracy vs. latency on constrained hardware)
- Power systems design (voltage regulation, current distribution, brownout debugging)
- Mechanical actuation (servo selection, force analysis, mounting design)
- State machine design for autonomous robotics
- Inter-processor communication (serial protocols, GPIO signaling)
Beyond technical skills, we learned the importance of hardware-software co-design: you can't optimize code without understanding the mechanical constraints, and you can't design mechanisms without knowing the control loop latency.
What's next for FireGuard Rover
The sky's the limit! Well, not really—unless we start putting fire suppression robots in aircraft. But FireGuard is a practical and scalable solution for early fire intervention.
Near-term improvements:
- Thermal camera integration (FLIR Lepton or MLX90640) for reliable detection even in smoke-filled rooms
- Encoder feedback for precise odometry and straight-line tracking
- Obstacle avoidance (ultrasonic or ToF sensors) so the rover doesn't get stuck en route to the fire
- Wireless telemetry (live video feed + status updates to a mobile app)
- Larger extinguisher capacity (current prototype uses a small aerosol can; production would use a CO₂ or dry chemical canister)
Long-term vision:
- Multi-robot coordination: deploy multiple rovers in large buildings (each covers a zone)
- Smart home integration: trigger from existing smoke detectors or IoT sensors
- Reinforcement learning: train the rover to optimize suppression strategy (spray angle, duration, approach path)
- Commercial deployment: warehouses, server rooms, industrial facilities where downtime from sprinkler activation is costly
Built With
- arduino
- c++
- l298n
- opencv
- python
- raspberry-pi
- yolov4