Inspiration
Growing up, I often didn’t tell my parents about the small things that bothered me at school. It wasn’t that they didn’t care; the problems just never felt big enough to bring up. By the time they found out, it was usually because something more serious had already happened.
That experience highlighted the gap between what children go through during the day and what parents actually know. We were inspired to create a system that helps surface early signs of distress, not to monitor every moment, but to give parents the opportunity to step in and support their child before issues escalate.
What it does
Rocky is an intelligent, privacy-first companion and monitoring system that detects early signs of distress or bullying, as well as physical emergencies. It continuously analyzes a combination of biometric data (like heart rate spikes from a wearable) and ambient audio.
When the system detects a potential issue, ranging from subtle verbal bullying to a physical fall, it instantly alerts parents via a real-time dashboard. The dashboard provides a safe, filtered summary of the incident, an AI-generated assessment of the severity, and a clear, step-by-step action plan to help parents handle the situation appropriately.
How we built it
We architected Rocky with a heavy emphasis on data privacy and real-time processing:
- Frontend: Built using Next.js 16 and React 19, styled with Tailwind CSS to create a premium, calm, and highly responsive dashboard for parents.
- Backend & Real-time Data: We used Supabase for secure data storage, authentication, and real-time WebSocket subscriptions to push emergency alerts to the dashboard instantly.
- Smart Filtering: We implemented an edge-level keyword and heuristic audio filter. Before any heavy AI processing happens, this filter catches distress phrases or sudden biometric spikes to save bandwidth and ensure we only analyze relevant moments.
- Private AI Infrastructure (Vultr Track): To ensure maximum privacy for sensitive child data, we completely avoided public LLM APIs. Instead, we provisioned a powerful, private, low-latency compute instance on Vultr (128GB RAM). We deployed Ollama on this Vultr instance to run the massive Mixtral-8x22b model entirely in our own controlled environment. This allows us to perform deep contextual analysis of transcripts and sensor data without compromising student privacy.
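As a sketch of the Smart Filtering layer described above, here is a minimal edge pre-filter in TypeScript. The keyword list, the 1.5x heart-rate threshold, and the names (`DISTRESS_KEYWORDS`, `shouldEscalate`, `SensorSnapshot`) are illustrative assumptions, not our production values:

```typescript
// Hypothetical edge-level pre-filter: runs before any LLM call so that
// only suspicious chunks are escalated for heavy analysis.
const DISTRESS_KEYWORDS = ["stop it", "leave me alone", "help", "that hurts"];

interface SensorSnapshot {
  heartRateBpm: number;
  restingHeartRateBpm: number;
}

// Flag a transcript chunk if it contains a distress phrase OR the heart
// rate spikes well above the child's resting baseline (threshold is an
// illustrative assumption).
function shouldEscalate(transcript: string, sensors: SensorSnapshot): boolean {
  const text = transcript.toLowerCase();
  const keywordHit = DISTRESS_KEYWORDS.some((kw) => text.includes(kw));
  const heartRateSpike =
    sensors.heartRateBpm > sensors.restingHeartRateBpm * 1.5;
  return keywordHit || heartRateSpike;
}
```

Because this check is pure string matching and one comparison, it runs in well under a millisecond, which is what lets the pipeline discard the vast majority of chunks for free.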
How it works
- Audio Capture: The child's wearable continuously records a rolling 30-second buffer that overwrites itself.
- Speech-to-Text: Each chunk is sent to ElevenLabs Scribe v2 for transcription.
- Keyword Pre-Filter: A sub-millisecond filter scans for 115+ distress words/phrases. If nothing matches, the chunk is silently discarded (free, fast, and saves LLM costs).
- Private LLM Analysis: Flagged transcripts go to a self-hosted Mixtral-8x22b model on our own Vultr CPU server. The LLM weighs time context (school hours?), sensor data (heart rate elevated?), and the transcript itself to generate a nuanced severity rating and an actionable summary.
- Incident Created: The analysis is saved to Supabase along with the audio clip, transcript, severity, summary, categories, and suggested actions.
- Instant Parent Alert: The database INSERT triggers three simultaneous notification layers: an in-app dropdown, cross-tab sync via Postgres CDC, and native browser push notifications (which work even when the tab is minimized).
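The rolling 30-second buffer in step one can be sketched as a simple ring buffer of audio chunks. The chunk granularity and class/field names here are assumptions for illustration, not the actual firmware code:

```typescript
// Illustrative rolling buffer: keeps only the most recent `capacity`
// chunks, silently overwriting the oldest as new audio arrives.
class RollingBuffer<T> {
  private chunks: T[] = [];

  constructor(private capacity: number) {}

  // Newest chunk in; oldest chunk dropped once the buffer is full.
  push(chunk: T): void {
    this.chunks.push(chunk);
    if (this.chunks.length > this.capacity) this.chunks.shift();
  }

  // Snapshot of the retained chunks, oldest first.
  snapshot(): T[] {
    return [...this.chunks];
  }
}

// e.g. 30 seconds of audio retained as 30 one-second chunks
const audioBuffer = new RollingBuffer<string>(30);
```

The key property is that nothing older than the window ever persists on the device, which is what makes the capture loop privacy-preserving by construction.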
Challenges we ran into
One of our biggest hurdles was balancing deep, contextual AI analysis against the absolute necessity of child privacy. Sending ambient classroom audio to a public AI model (like OpenAI's) was out of the question, so we had to figure out how to host a large, highly capable open-source model ourselves. Getting Mixtral-8x22b running efficiently with low latency required careful configuration, custom prompt engineering, and Vultr’s high-performance infrastructure.
Another challenge was tuning the detection algorithms to avoid false positives. We spent a lot of time refining the trigger thresholds for our keyword filters and teaching the LLM to differentiate between playful playground banter and actual aggressive behavior by providing it with localized temporal context.
Accomplishments that we're proud of
- Zero-Compromise Privacy: Successfully deploying a private LLM on Vultr that can analyze highly sensitive data with deep conversational context, completely disconnected from public data-mining companies.
- Intuitive Design: Designing a frictionless, premium UI that doesn't just throw raw, panic-inducing data at parents, but organizes it neatly with "Rocky" the mascot, reassuring summaries, and actionable steps.
- Real-time Architecture: Building a completely live event pipeline from the simulated hardware capture, through the Supabase real-time channels, right into the parent's dashboard as a global toast notification.
What we learned
We learned a tremendous amount about self-hosting large language models and the nuances of prompt engineering for classification tasks. Getting an LLM to reliably output structured JSON analysis of a transcript while cross-referencing biometric heart-rate data pushed us to deeply understand how context windows and system prompts interact. We also learned how to leverage Supabase's real-time features efficiently in a modern Next.js App Router environment.
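One concrete takeaway about structured output can be sketched as follows. The endpoint and request fields follow Ollama's `/api/generate` REST API (with `format: "json"` to constrain the model to valid JSON); the schema, prompt wording, and model tag are illustrative assumptions, not our exact production prompt:

```typescript
// Hypothetical schema for the analysis we ask the LLM to return.
interface IncidentAnalysis {
  severity: "low" | "medium" | "high" | "critical";
  summary: string;
  categories: string[];
}

// Validate the model's raw JSON before trusting it: even with forced
// JSON output, a model can drift from the requested schema.
function parseAnalysis(raw: string): IncidentAnalysis {
  const data = JSON.parse(raw);
  const severities = ["low", "medium", "high", "critical"];
  if (!severities.includes(data.severity)) throw new Error("bad severity");
  if (typeof data.summary !== "string") throw new Error("bad summary");
  if (!Array.isArray(data.categories)) throw new Error("bad categories");
  return data as IncidentAnalysis;
}

// Sketch of the call to a self-hosted Ollama instance.
async function analyzeTranscript(transcript: string): Promise<IncidentAnalysis> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mixtral:8x22b", // illustrative model tag
      prompt:
        `Rate the severity of this transcript. Respond only with JSON ` +
        `containing "severity", "summary", and "categories": ${transcript}`,
      format: "json", // ask Ollama to constrain output to valid JSON
      stream: false,
    }),
  });
  const body = await res.json();
  return parseAnalysis(body.response);
}
```

Validating before inserting into Supabase means a malformed model response fails loudly at the pipeline boundary instead of producing a confusing alert on the parent dashboard.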
What's next for Rocky
- Hardware Integration: Moving beyond our simulation studio to integrate with actual wearable hardware (like a smartwatch or a dedicated safety pin) over Bluetooth Low Energy (BLE).
- Location Tracking: Implementing a secure, geofenced location sharing feature so parents know exactly where an incident is occurring.
- Long-term Trend Analysis: Using the AI to look at weeks or months of data to detect gradual behavioral changes or ongoing, subtle bullying that doesn't trigger an immediate "critical" alert but warrants a parent-teacher conversation.
Built With
- elevenlabs
- mixtral8x22b
- next.js
- ollama
- supabase
- tailwind
- typescript
- vercel
- vultr
