Inspiration 🌟
Body cameras capture countless hours of critical footage, yet most of it is only reviewed after something goes wrong. We asked: what if that intelligence could help in the moment, before risk escalates? COPilot was born from the belief that real-time, responsible AI can protect both officers and the public. By continuously reading the scene, forecasting risky situations, and delivering calm, actionable audio guidance, COPilot helps de-escalate encounters, prevent mistakes, and keep both first responders and the public safer. Beyond the field, our live data-analytics dashboard gives command centers a real-time view of emerging threats, risk alerts, and the active body-cam livestream, supporting faster decisions and coordinated responses. It also creates a clear, auditable record that strengthens accountability and helps establish innocence in disputed incidents, so risk is predicted, harm is prevented, and trust is protected.
What it does 🔧
COPilot is a real-time data-analytics operating system for police forces that also serves as a copilot for cops. It watches body-cam video in real time, predicts risky situations and potential excessive force before they escalate, and delivers calm, actionable audio guidance, protecting both officers and the public. Complementing the on-scene copilot, the command-center side gives supervisors a live, unified operational picture: streaming body-cam feeds, risk alerts, and incident context as they unfold, so they can coordinate faster and smarter.
What Makes COPilot Special?
- Risk-detection voice agent: a voice agent that proactively detects risk, coaches de-escalation, and delivers real-time updates.
- The operating system for command: the dashboard acts as a real-time OS, combining live body-camera feeds, unit status, incident timelines, risk alerts, and cross-camera stitching, enabling coordinated decisions and resource deployment in seconds.
- Accountability logging: a tamper-evident audit trail that supports both prevention and rapid exoneration in disputed incidents.
Key Features:
- Real-time body-cam analysis with early risk/threat prediction
- On-device audio coaching for de-escalation cues
- Command OS dashboard with live risk feed, secure livestream, geofenced alerts, officer vitals/telemetry, supervisor escalations, and one-click incident bookmarks
- After-action review with searchable timelines, synchronized video, and tamper-evident logs
How we built it 🛠️
Our system is a real-time safety co-pilot made of three production-ready apps plus a set of utilities. Each piece is independently deployable but designed to work together in one LiveKit room and one Supabase project.
1) Officer & Command Dashboard (Next.js + Supabase)
- Next.js 15 + TypeScript + App Router + Tailwind + shadcn/ui
- Realtime dashboard for incidents, alerts, stats, map, and mobile officer view
- Supabase CDC for live updates without polling
- Auth and session via Supabase; protected routes and middleware
- Data model: officers, incidents, alerts, incident clips, officer notes with risk level and escalation type enums
- Live feeds unify the agent logs and camera activity into a single operational view
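To make the data model concrete, here is a minimal Python sketch of how the risk-level and escalation-type enums and an alert row might be shaped. The enum values and field names are assumptions for illustration; the real enums live as Postgres types in the Supabase schema.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class RiskLevel(str, Enum):       # hypothetical values; defined as a Postgres enum in practice
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

class EscalationType(str, Enum):  # hypothetical values
    VERBAL = "verbal"
    PHYSICAL = "physical"
    WEAPON = "weapon"

@dataclass
class Alert:
    incident_id: str
    risk: RiskLevel
    escalation: EscalationType
    message: str
    created_at: datetime

alert = Alert("inc-001", RiskLevel.HIGH, EscalationType.VERBAL,
              "Raised voices detected; suggest calm tone", datetime.now())
print(alert.risk.value)  # prints "high"
```

Typing the enums at the database layer lets both the dashboard and the agent agree on a closed set of values without extra validation code.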
2) Voice/Video Safety Agent (Python, LiveKit Agents, OpenAI VLM)
- LiveKit Agents worker joins the room as an autonomous participant
- OpenAI Realtime model (VLM/LLM/TTS) orchestrated by Agents SDK
- Vision/audio context: latest video frames and speech guide the model
- Purpose‑built system prompt generates strict JSON (score/label/threats/message/summary)
- Thresholding + cooldown to avoid alert spam; English‑only short directives
- High‑risk events are logged to Supabase (incidents + alerts) for the dashboard
- Packaged with uv for reproducible Python envs; Dockerfile for cloud or edge deployment
3) Web Voice Agent Frontend (Next.js + LiveKit Components)
- Browser app that mints connection details and hosts a voice session
- LiveKit Components for media tiles, device selection, chat/transcripts
- Token service creates room/participant for one‑click demos
- Validation of end‑to‑end agent behavior without extra hardware
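For context on what the token service does, here is a stdlib-only sketch of a LiveKit-style access token: an HS256 JWT whose `video` claim grants room join. The credentials are dummies, and production code should use the official LiveKit server SDK rather than hand-rolling JWTs.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_livekit_token(api_key: str, api_secret: str, room: str, identity: str,
                       ttl: int = 3600) -> str:
    """Sketch of minting a room-scoped access token (HS256 JWT)."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {
        "iss": api_key,    # API key identifies the project
        "sub": identity,   # participant identity shown in the room
        "nbf": now,
        "exp": now + ttl,
        "video": {"roomJoin": True, "room": room},  # room-scoped grant
    }
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    sig = hmac.new(api_secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

token = mint_livekit_token("demo-key", "demo-secret", "demo-room", "officer-1")
```

Because the token embeds the room name and identity, the browser never sees the API secret: it only receives a short-lived credential scoped to one session.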
4) DevOps & Tooling (Docker, Supabase, LiveKit, PostgreSQL, pnpm, Yarn)
- LiveKit Cloud for global SFU + worker dispatch; jobs spin up on demand
- Supabase for Postgres, auth, storage, and realtime channels
- uv for Python dependency and virtualenv management; Yarn/pnpm for JS apps
- Dockerfiles for the agent; ready for container platforms or edge boxes
- Logs and metrics via LiveKit worker events (job lifecycle, speaking state)
5) Security & Reliability (uv, LiveKit Cloud, Supabase, Docker)
- Room isolation: each session is scoped to a LiveKit room; minimal data shared
- Supabase RLS (ready to tighten), service‑role key used only server‑side in the agent
- Dockerized agent for consistent runtime; uv lockfile for fully pinned deps
- Environment separation: local dev (.env.local), LiveKit Cloud, Supabase project
- Graceful failure paths and cooldowns to avoid repeated or noisy alerts
6) Why this architecture?
- WebRTC + LiveKit gives low‑latency, interruption‑friendly voice/video and a simple “agent as participant” model
- Supabase provides a frictionless, auditable event store with realtime updates
- Modular apps let us evolve each surface independently: upgrade the agent, refine the dashboard, or change publishers without breaking the whole system
Challenges we ran into 🚧
We faced several technical challenges during development:
- SDK gaps: no exposed hook to capture the agent's TTS audio, so we couldn't rebroadcast the exact voice as MP3; we had to rely on transcripts or canned audio.
- Latency and responsiveness: turn detection plus our cooldown window delayed replies after speech and required careful tuning.
Accomplishments that we're proud of 🏆
Our team shipped a fully functional, scalable real-time data-analytics operating system that also serves as a copilot for cops in under 48 hours.
We are proud of:
- Built a truly real‑time safety loop: ingest bodycam video/audio, score risk, voice agent with ≤10‑word English directives, and log incidents in seconds.
- Clean, modular architecture: LiveKit agent, dashboard, and camera tools work independently yet snap together in one room.
- Operator‑grade UX: Supabase CDC powers live incident feeds, stats, and a mobile officer view that update instantly without polling.
- Practical de‑escalation focus: prompts and outputs tuned to reduce unnecessary force by naming the risk and suggesting calm, actionable steps. And much more!
What we learned 📚
- Real‑time distributed systems: coordinating an “agent as a participant” with clients taught us job dispatch, back‑pressure, and failure isolation across SFU + workers.
- Networking & media protocols: operating over WebRTC clarified how bitrate adaptation, jitter buffers, and data/text streams shape latency and reliability.
- Stream processing & consistency: Supabase CDC and LiveKit transcripts forced us to reason about ordering, idempotency, and eventual consistency in UI state.
- Safety‑critical AI interfaces: prompt design is a systems problem—tight schemas, thresholds, and cooldowns matter as much as model quality for predictable behavior.
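The ordering and idempotency lesson above can be sketched with a tiny event-application loop: duplicate or out-of-order change events must not corrupt UI state. The per-incident `version` column is an assumption for illustration; any monotonic marker (e.g. an updated-at timestamp) plays the same role.

```python
from dataclasses import dataclass

@dataclass
class IncidentEvent:
    incident_id: str
    version: int   # assumed monotonically increasing per incident
    status: str

class IncidentView:
    """Apply realtime change events idempotently to a local view."""

    def __init__(self) -> None:
        self.state: dict[str, IncidentEvent] = {}

    def apply(self, ev: IncidentEvent) -> bool:
        current = self.state.get(ev.incident_id)
        if current is not None and ev.version <= current.version:
            return False   # stale duplicate or late arrival: ignore
        self.state[ev.incident_id] = ev
        return True

view = IncidentView()
view.apply(IncidentEvent("inc-1", 1, "open"))
view.apply(IncidentEvent("inc-1", 3, "escalated"))
view.apply(IncidentEvent("inc-1", 2, "acknowledged"))  # late arrival, ignored
print(view.state["inc-1"].status)  # prints "escalated"
```

Making `apply` a no-op for stale events means the UI converges to the same state regardless of delivery order or retries.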
What's next for COPilot 🚀
Next, we’re scaling COPilot from a single-camera assistant to a distributed safety network, where risk detection and scene understanding are partitioned across body cams, dash cams, drones, and station gateways, then fused into a single operational picture for officers and our Real-Time Data Analytics OS. On the business side, we’re standardizing deployment, packaging tiered pricing, and building repeatable go-lives with a certified integrator ecosystem. We’ll prioritize multi-agency pilots that prove measurable ROI, then expand via regional procurement frameworks and reseller channels to compress sales cycles. To unlock at-scale adoption, we’re pursuing CJIS-aligned controls, SOC 2, and tamper-evident evidence chains; adding federated learning with differential privacy to improve models without centralizing video; and delivering admin SDKs/APIs for CAD/RMS and radio/voice integrations. The result: network effects as more devices cooperate, clear procurement value stories, and a path to city-wide, and ultimately state-wide, rollouts.




