TABLE: 108C

🔗 Full Technical Breakdown & Architecture Diagrams: 👉 Read the full Notion write-up here (includes system diagrams, tech breakdown, model architecture, and VR shader functions)

Inspiration

According to the World Health Organization (2023), anxiety disorders affect over 301 million people worldwide, and attacks can peak within just 10 minutes, long before a person can reach help. Yet over 60% of people never receive real-time support during an episode, which often leads to long-term health complications and a lower overall quality of life.

Devices and apps today have come a long way in detecting and tracking anxiety, but most only offer mindfulness exercises after the fact. They log data but fail to intervene when it matters most.

Anxiety doesn’t schedule a Teams meeting on your calendar. It doesn’t announce itself or wait until you’re ready. And once it consumes you, it’s hard to break out of the loop.

That’s why we built Haven: a system designed to bridge the gap between detection and intervention, bringing adaptive, AI-driven calm directly into your reality.


💡 What It Does

  • Always-on companion that passively monitors voice and heart rate to detect early signs of anxiety.
  • Classifies states as: 🩵 Normal | 💛 Uneasy | ❤️ Distressed.
  • When distress is detected, Haven:
    • Activates AI-guided conversation (voice-based, no manual input).
    • Offers gentle breathing cues, grounding talk, and emotional support.
    • Launches a VR overlay that agentically generates a procedural, personalized environment, turning the user's surroundings into a calming space that dynamically adapts to their requests.
  • All voice and visual responses adapt in real time to heart rate and speech tone.
  • Haven is the only tool that’s fully voice-driven, personalized, and automatic: no buttons or manual cues, just speech and passive detection.
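The three-state classification above can be sketched as a simple fusion of heart rate and a voice-stress signal. This is an illustrative toy, not Haven's actual model; the thresholds, weights, and the `voice_stress` feature are assumptions:

```python
# Hypothetical sketch of Haven's three-state classifier.
# Thresholds and weights are illustrative, not the real model.
from dataclasses import dataclass

@dataclass
class Reading:
    heart_rate: float    # beats per minute from the watch
    voice_stress: float  # 0.0-1.0 tone/phrasing score from the voice pipeline

def classify(reading: Reading) -> str:
    """Map a fused biosignal reading to one of Haven's three states."""
    # Normalize heart rate into 0-1 above a 60 bpm resting baseline,
    # then blend it with the voice-stress score.
    hr_score = min(max((reading.heart_rate - 60) / 60, 0.0), 1.0)
    score = 0.6 * hr_score + 0.4 * reading.voice_stress
    if score < 0.3:
        return "normal"      # 🩵
    if score < 0.6:
        return "uneasy"      # 💛
    return "distressed"      # ❤️

print(classify(Reading(heart_rate=72, voice_stress=0.1)))   # normal
print(classify(Reading(heart_rate=115, voice_stress=0.8)))  # distressed
```

In the real system the "uneasy" band would trigger gentle check-ins while "distressed" launches the full voice and VR intervention.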

🛠️ How We Built It

  • LiveKit for real-time audio streaming and bi-directional speech.
  • Claude API for contextual emotion classification and dialogue generation.
  • Apple Watch for continuous heart-rate tracking.
  • Unity (OpenXR) + Agentverse for VR overlays with spatial mapping, procedural generation, and dynamic environment augmentation.
  • Flask + Redis + NATS for orchestration and multi-agent communication.
  • Specialized agents:
    • 🎵 Music Agent – creates HR-synced ambient audio.
    • 🗣️ Voice Agent – real-time empathetic speech with barge-in detection.
  • Full system latency: ~0.9 seconds from detection → first calming response.
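The orchestration layer fans one detection event out to the specialized agents over NATS. The sketch below simulates that publish/subscribe flow in-process so it runs without a broker; the subject name and payload fields are assumptions, not Haven's actual schema:

```python
# Illustrative sketch of the detection -> agent fan-out Haven runs over NATS.
# A tiny in-process bus stands in for the real NATS connection.
import json
from collections import defaultdict
from typing import Callable

class Bus:
    """Minimal stand-in for a NATS client: subject -> subscriber callbacks."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, subject: str, cb: Callable[[bytes], None]):
        self.subs[subject].append(cb)

    def publish(self, subject: str, payload: bytes):
        for cb in self.subs[subject]:
            cb(payload)

bus = Bus()
responses = []

# Each specialized agent listens on a shared (hypothetical) "haven.state" subject.
bus.subscribe("haven.state", lambda m: responses.append(
    ("music", json.loads(m)["hr"])))      # Music Agent syncs tempo to heart rate
bus.subscribe("haven.state", lambda m: responses.append(
    ("voice", json.loads(m)["state"])))   # Voice Agent picks a calming script

# The detector publishes one state update; every agent reacts to the same event.
bus.publish("haven.state",
            json.dumps({"state": "distressed", "hr": 112}).encode())
print(responses)  # [('music', 112), ('voice', 'distressed')]
```

Decoupling the detector from the agents this way is what keeps the loop extensible: adding the VR overlay as a third subscriber requires no change to the publisher.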

⚙️ Challenges We Ran Into

  • Integrating 2 backends, 3 devices, and 4 AI agents into a single low-latency loop (spoiler: it was miserable).
  • Building our first-ever VR shader system — endless tuning to make real rooms feel softer without motion sickness.
  • Making audio contextual, not keyword-based — teaching Haven to understand tone, breathing, and phrasing, not just words.

🏆 Accomplishments We’re Proud Of

  • Built our first working AI therapist that reacts in real time — no manual triggers.
  • Designed a generative VR system that augments real spaces instead of replacing them.
  • Achieved sub-second end-to-end latency across devices.
  • Created an emotionally intelligent voice agent that feels natural and calming.
  • Integrated heart rate, voice, music, and visuals into one closed feedback loop.
  • And yeah — none of us had touched VR before this hackathon.

📚 What We Learned

  • Empathy can be engineered — emotional safety and system latency are deeply connected.
  • Context > keywords — tone and breathing reveal more than words ever can.
  • Real-time systems require ruthless attention to synchronization and timing.
  • VR design is less about spectacle and more about comfort and trust.
  • Debugging anxiety with 3 backends and a panic simulator might actually cause anxiety (lesson learned).

🚀 What’s Next for Haven

  • Expand beyond VR → mobile AR & wearable versions for everyday accessibility.
  • Pilot study (10–20 users) to measure HRV and stress reduction over 2 weeks.
  • Move toward on-device emotion models for privacy-first inference.
  • Build personalized calm profiles that adapt to each user’s physiology.
  • Long-term vision: make Haven a global early-intervention layer for mental health — instant, intelligent, and everywhere.
