Inspiration
We started from a simple but scary reality: in many crisis situations, the safest thing a person can do is not speak. The viral “domestic violence crisis hand signal” showed how a tiny gesture on video can literally save someone’s life – but only if someone on the other side notices it in time. We wanted to build a system that never blinks, never gets distracted, and can spot that silent signal even when a human might miss it.
SignalSafe AI is our attempt to rethink campus safety from scratch: instead of panic buttons buried in apps or websites that no one opens during an emergency, we use the cameras that already exist in student spaces plus the hand signals people can perform silently.
What it does
SignalSafe AI continuously watches live video feeds (like a laptop webcam or a phone camera) for the crisis hand signal. When it detects a high probability of distress, it:
Highlights the camera tile with a red, blinking alert state.
Pops up a High Priority Alert panel with time, camera, and location.
Lets an operator either escalate to Campus Safety or mark it as a false positive, which gets logged with their name and timestamp.
Behind the scenes, every frame is analyzed using hand landmarks and a machine‑learning classifier that distinguishes normal hand positions from the crisis signal. The system is designed so that “silent hand → automated detection → rapid human decision” happens in seconds, not minutes.
How we built it 
On the front end, we built a web‑based SOC dashboard in HTML, CSS, and JavaScript. It:
Captures frames from the browser's camera using getUserMedia.
Draws each frame to a hidden <canvas> and converts it to a JPEG data URL.
Sends a frame every ~800 ms to a Flask API endpoint (/api/detect).
Reacts to the API's distress flag by animating the camera tile and showing the alert card, and logs false positives in a separate panel.
On the backend, we used Flask, OpenCV, MediaPipe, and a scikit‑learn classifier:
Flask receives base64‑encoded images from the browser and decodes them with OpenCV.
MediaPipe’s Hand Landmarker extracts 3D hand landmarks (21 points × x,y,z).
We flatten those landmarks into a 63‑dimensional feature vector.
A trained classifier (distress_classifier.joblib) predicts p(distress) for each frame.
The API returns JSON: { "distress": bool, "prob": float }, which the dashboard uses to decide when to alert.
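The flattening step above is simple enough to sketch in plain Python. Here the (x, y, z) tuples stand in for MediaPipe's 21 hand landmarks, and flatten_landmarks is an illustrative helper rather than our exact code:

```python
def flatten_landmarks(landmarks):
    """Turn 21 (x, y, z) hand landmarks into a 63-dimensional feature vector."""
    if len(landmarks) != 21:
        raise ValueError("expected 21 hand landmarks")
    features = []
    for (x, y, z) in landmarks:
        features.extend([x, y, z])
    return features  # length 63, ready for the scikit-learn classifier

# Example: a dummy hand with every landmark at the origin.
dummy_hand = [(0.0, 0.0, 0.0)] * 21
vec = flatten_landmarks(dummy_hand)
print(len(vec))  # 63
```

In the real pipeline, the same 63-number layout is used at training time and at inference time, so the classifier always sees landmarks in a consistent order.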
We also wrote a standalone Python listener (run_listener_tasks.py) that runs locally on a machine, using the same model, to experiment with continuous detection, threshold tuning, and user confirmation via keyboard.
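The threshold tuning and "consecutive frame" counting we experimented with in the listener can be sketched as a small debouncer. The names THRESHOLD and N_CONSECUTIVE (and the values shown) are illustrative, not our final tuned settings:

```python
THRESHOLD = 0.8      # per-frame p(distress) cutoff, tuned by hand
N_CONSECUTIVE = 3    # frames in a row required before an alert fires

class DistressDebouncer:
    """Raise an alert only after a sustained run of high-probability frames."""

    def __init__(self, threshold=THRESHOLD, n_consecutive=N_CONSECUTIVE):
        self.threshold = threshold
        self.n_consecutive = n_consecutive
        self.streak = 0

    def update(self, prob):
        """Feed one frame's p(distress); return True when the alert should fire."""
        self.streak = self.streak + 1 if prob >= self.threshold else 0
        return self.streak >= self.n_consecutive

deb = DistressDebouncer()
probs = [0.9, 0.2, 0.85, 0.9, 0.95]  # one blip, then a sustained signal
fired = [deb.update(p) for p in probs]
print(fired)  # [False, False, False, False, True]
```

Requiring a streak rather than a single frame is what keeps one noisy prediction from lighting up the whole dashboard.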
Challenges we ran into
Our biggest challenges were part technical, part “hackathon reality”:
Camera juggling on macOS: Continuity Camera likes to make the iPhone the default camera, which confused both OpenCV and the browser. We had to fight device IDs, ports, and “camera already in use” errors to get a reliable feed.
Real‑time performance vs. reliability: Sending frames every 800 ms over HTTP, running MediaPipe, and classifying them had to be fast enough not to feel laggy, but still stable on a typical student laptop.
Threshold tuning: If the threshold is too low, the dashboard screams constantly; too high, and it misses genuine distress. We iterated thresholds and “consecutive frame” counts to find a reasonable balance for a demo.
Frontend/Backend wiring: Tiny mismatches (like wrong element IDs or cached HTML) caused null pointer errors and made it look like “the camera is broken” when it was really a DOM issue. Debugging those under time pressure was its own mini‑boss battle.
Accomplishments that we’re proud of
We’re proud that SignalSafe AI is a full end‑to‑end system:
A live SOC‑style dashboard that actually talks to a real ML backend.
Real‑time hand landmark detection and classification running from a plain browser camera.
A clean operator workflow: see the alert, confirm or dismiss, and log the outcome.
We’re also proud of how much we learned about practical constraints: cameras, browser security, model deployment, and UI/UX for safety‑critical alerts—all in one weekend.
What we learned 
Technically, we learned:
How to use MediaPipe Tasks for hand landmark detection and feed those landmarks into a custom classifier.
How to build a stateless Flask API that handles base64 images, OpenCV decoding, and ML inference.
How to integrate browser cameras, canvases, and async fetch loops into something that feels like a live monitoring dashboard.
Design‑wise, we learned that:
Alert UX matters: confirmations, cooldowns, and false‑positive logs are essential so operators don’t start ignoring the system.
The same camera feed can be used in very different ways—local listener scripts vs. web dashboards—if the core signal (hand landmarks) is well‑designed.
What’s next for SignalSafe AI
Next, we’d like to:
Support multiple cameras in a robust way (e.g., one laptop webcam plus a dedicated phone camera), with the alert panel labeling exactly which feed triggered.
Add more gestures and contexts, like additional distress signals, maybe even sign language cues for help.
Integrate with real notification channels (SMS, email, or campus safety systems) instead of just a tel: link.
Improve model robustness with more training data, augmentations, and adversarial testing so it performs well across different lighting, camera angles, and skin tones.
Explore privacy‑preserving modes, like running everything on‑device or only logging features instead of raw images.
SignalSafe AI started as a hackathon project, but the idea of a quiet, reliable listener for crisis hand signals is something we’d like to keep pushing beyond this weekend.
Built With
- css
- flask
- javascript
- mediapipe
- opencv
- python
- python (backend logic)