Inspiration

When it comes to road accidents, every second matters. Yet today, most accidents are only reported when someone physically calls 911, wasting precious minutes that could save lives. We realized cities already have hundreds of public traffic cameras, yet they act only as passive observers. We thought: what if we could turn existing city cameras into real-time accident responders? That's how Sentinel was born: a system that detects crashes, documents them, and notifies authorities automatically.

What it does

Our system continuously monitors hundreds of live traffic cameras using machine learning models. When a crash is detected:

A Gemini AI model generates a structured accident report summarizing the event.

The incident is instantly saved to MongoDB, along with key metadata.

A Twilio voice call is automatically triggered, relaying accident details to emergency services — all without any human intervention.

Everything happens within seconds, providing faster detection, faster reporting, and faster emergency response.

How we built it

Data Collection: We first scraped live public traffic camera metadata from the Québec 511 network, collecting camera IDs, latitude/longitude coordinates, and video feed links. This database became the foundation for our monitoring system.

Live Camera System: We built a Next.js frontend that dynamically loads and manages hundreds of camera streams using lightweight, efficient techniques. The app clusters cameras based on geographic regions like Montreal, Laval, and Sherbrooke using Leaflet.js and MarkerCluster.
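The region grouping can be sketched as a nearest-center lookup: each camera from the scraped metadata is assigned to the closest city by great-circle distance, and Leaflet's MarkerCluster then handles visual clustering within each region. The region centers and camera fields below are illustrative, not our exact data model:

```javascript
// Sketch: assign each camera to the nearest city region by great-circle
// distance. Region centers (approximate) and camera fields are illustrative.
const REGIONS = {
  Montreal:   { lat: 45.5019, lng: -73.5674 },
  Laval:      { lat: 45.6066, lng: -73.7124 },
  Sherbrooke: { lat: 45.4042, lng: -71.8929 },
};

// Haversine distance in kilometres between two { lat, lng } points.
function haversineKm(a, b) {
  const R = 6371;
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Return the name of the closest region for a camera.
function regionFor(camera) {
  let best = null;
  let bestKm = Infinity;
  for (const [name, center] of Object.entries(REGIONS)) {
    const km = haversineKm(camera, center);
    if (km < bestKm) {
      bestKm = km;
      best = name;
    }
  }
  return best;
}
```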

Machine Learning Detection: Using TensorFlow's COCO-SSD object detection model, we performed real-time, frame-by-frame vehicle recognition (cars, trucks, buses, motorcycles). Collisions were detected geometrically by analyzing bounding box overlaps: if two vehicles' bounding boxes overlapped significantly, an accident was flagged.
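The overlap test can be expressed as intersection-over-union (IoU) on the `[x, y, width, height]` boxes that COCO-SSD returns per detection. This is a minimal sketch; the 0.15 threshold is illustrative, not our tuned value:

```javascript
// Intersection-over-union of two boxes given as [x, y, width, height].
function iou(a, b) {
  // Intersection rectangle.
  const x1 = Math.max(a[0], b[0]);
  const y1 = Math.max(a[1], b[1]);
  const x2 = Math.min(a[0] + a[2], b[0] + b[2]);
  const y2 = Math.min(a[1] + a[3], b[1] + b[3]);
  const inter = Math.max(0, x2 - x1) * Math.max(0, y2 - y1);
  const union = a[2] * a[3] + b[2] * b[3] - inter;
  return union > 0 ? inter / union : 0;
}

// Flag a potential collision when two vehicle boxes overlap significantly.
function isPotentialCollision(boxA, boxB, threshold = 0.15) {
  return iou(boxA, boxB) >= threshold;
}
```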

Incident Summarization: Once an accident was flagged, we captured multiple frames and sent them to Gemini AI’s multimodal API. We engineered a structured JSON prompt so Gemini would only output clean, parsable data describing the crash: how many vehicles, types of vehicles, and a short summary.
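The structured-prompt idea can be sketched as below. The field names and exact wording are illustrative, not our production prompt; the parsing guard matters because multimodal models sometimes wrap JSON in a markdown fence, and one malformed response must not crash the detection loop:

```javascript
// Illustrative structured prompt: ask for JSON only, in a fixed shape.
const REPORT_PROMPT = `You are analyzing traffic camera frames of a suspected
collision. Respond with ONLY a JSON object, no markdown, in this shape:
{"vehicleCount": <number>, "vehicleTypes": ["car" | "truck" | "bus" | "motorcycle"],
 "summary": "<one-sentence description of the crash>"}`;

// Strip an optional ```json fence, parse, and validate the shape.
// Returns null on malformed output so the caller can retry or skip.
function parseReport(raw) {
  const cleaned = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, '')
    .replace(/```$/, '');
  try {
    const report = JSON.parse(cleaned);
    if (typeof report.vehicleCount !== 'number' || typeof report.summary !== 'string') {
      return null;
    }
    return report;
  } catch {
    return null;
  }
}
```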

Database Integration: Each detected incident was stored in MongoDB Atlas with timestamp, camera location, and generated report, making it easy to retrieve or analyze later.
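The stored document might look like the sketch below (field names are illustrative, not our exact schema). Insertion itself goes through the official `mongodb` driver, e.g. `collection.insertOne(doc)`. Storing the location as a GeoJSON Point keeps the door open for geospatial queries later:

```javascript
// Sketch of an incident document for MongoDB Atlas. Note that GeoJSON
// coordinates are ordered [longitude, latitude], not [lat, lng].
function buildIncidentDoc(camera, report) {
  return {
    cameraId: camera.id,
    location: {
      type: 'Point',
      coordinates: [camera.lng, camera.lat],
    },
    detectedAt: new Date().toISOString(),
    vehicleCount: report.vehicleCount,
    vehicleTypes: report.vehicleTypes,
    summary: report.summary,
  };
}
```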

Emergency Response Automation: After saving an accident, the system automatically triggers a Twilio Programmable Voice call and dynamically generates a clear spoken message summarizing the crash and its severity for delivery to emergency services.
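Assembling the spoken alert is plain string building; with Twilio's Node helper library the result would then be spoken via TwiML (e.g. `twiml.say(message)`). The exact wording below is illustrative:

```javascript
// Sketch: turn a stored incident into the message read out on the
// Twilio voice call. Wording is illustrative.
function buildVoiceMessage(incident) {
  const types = incident.vehicleTypes.join(' and ');
  return (
    `Automated traffic alert. A collision involving ` +
    `${incident.vehicleCount} vehicles, ${types}, was detected at ` +
    `camera ${incident.cameraId}. ${incident.summary}`
  );
}
```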

Live Monitoring UI: Users can explore cameras, open live feeds, and trigger real-time analysis manually. We also implemented an area filter system so users can easily browse different city regions.

Challenges we ran into

Low Video Quality: Public camera feeds are often blurry, laggy, and inconsistent, making accurate object detection difficult.

Balancing Performance: Managing hundreds of video streams while keeping the web app responsive was a huge technical challenge.

API Reliability: Handling Gemini AI API rate limits and occasional failures gracefully without crashing the detection flow.

Minimizing False Positives: Fine-tuning the collision logic so minor overlaps (like cars stopped at a red light) didn't trigger false accident reports.

Cross-System Coordination: Ensuring MongoDB, Twilio, and AI generation stayed synchronized even under high-speed incident detection.
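The false-positive problem above was tamed with a debounce: a vehicle pair must stay overlapped for several consecutive frames before an accident is flagged, so brief overlaps (queued traffic, momentary occlusion) are ignored. A minimal sketch of the idea, with an illustrative frame threshold:

```javascript
// Debounce collision flags: require N consecutive overlapping frames
// for the same vehicle pair before reporting an accident.
function makeCollisionDebouncer(requiredFrames = 5) {
  const streaks = new Map(); // pairKey -> consecutive overlapping frames

  // Call once per frame per tracked vehicle pair; returns true only
  // when the overlap has persisted long enough.
  return function update(pairKey, overlapping) {
    if (!overlapping) {
      streaks.delete(pairKey); // any gap resets the streak
      return false;
    }
    const streak = (streaks.get(pairKey) || 0) + 1;
    streaks.set(pairKey, streak);
    return streak >= requiredFrames;
  };
}
```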

Accomplishments that we're proud of

End-to-End Automation: Built a full-stack system that goes from crash detection → incident generation → emergency calling within seconds.

Scalable Mapping System: Engineered an interactive map capable of managing and displaying hundreds of camera streams cleanly by city area.

Real-World Impact Potential: Our project proved that existing city infrastructure could be upgraded into a life-saving system without requiring any new hardware.

Resilient System Design: The app automatically retries on API failures, ensuring robustness even in real-world unstable environments.
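The retry behaviour can be sketched as a generic wrapper with exponential backoff around any flaky async call (Gemini, Twilio). The retry count and base delay below are illustrative defaults, not our production settings:

```javascript
// Retry an async function with exponential backoff: 1x, 2x, 4x the base
// delay between attempts. Rethrows the last error once attempts run out.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry(fn, { retries = 3, baseMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) await sleep(baseMs * 2 ** attempt);
    }
  }
  throw lastError;
}
```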

What we learned

Practical ML Application: Learned how to deploy real-time machine learning on messy, unpredictable real-world video data.

Efficient Full-Stack Design: Built a fast, scalable Next.js frontend integrated with external APIs and databases — all optimized for real-time performance.

API Handling at Scale: Understood how to architect multi-API systems (AI + DB + Telephony) and gracefully handle edge cases, timeouts, and error recoveries.

Real-Time Critical Systems: Learned how critical it is to build fast, reliable systems when dealing with emergency scenarios.

What's next for Sentinel

Pedestrian and Cyclist Accident Detection: Extend the machine learning model to detect not just vehicle crashes, but also incidents involving pedestrians or cyclists.

Accident Severity Classification: Use AI to classify collisions as "minor", "moderate", or "major" based on impact patterns and vehicle damage analysis.

Public Transparency: Launch a real-time public dashboard where citizens can view live city traffic conditions and active incident alerts.

City Partnerships: Work towards pilot projects with municipalities to integrate Sentinel into real-world emergency response workflows.
