Part 1: Devpost Submission
Inspiration
Fear is universal. Whether it's the dread of failure, loss, or the unknown, our nightmares shape our waking lives in profound ways. Traditional therapy approaches fears intellectually, but research shows that embodied experiences - physically confronting what scares us - create deeper, lasting transformation.
We asked: what if you could literally face your nightmare, watch it manifest in augmented reality, and with the help of a friendly robot companion, transform it? Drawing from Harry Potter's Boggart concept (a creature that takes the form of your deepest fear), we created BogARt - an immersive AR installation where technology becomes the bridge between your inner world and physical reality.
What it does
BogARt is a guided fear-confrontation ritual combining:
- Rishi, an expressive robot companion (Reachy) who greets you and draws out your nightmare through conversation
- Snap Spectacles that render your personalized nightmare as an AR creature
- A physical "Fear Box" with sensors that responds to your interactions
- Ambient lighting and sound that transforms the space based on the narrative
The 8-Beat Journey
- THRESHOLD - Enter the installation, curiosity builds
- INVITATION - Meet Rishi, who warmly welcomes you
- CONFESSION - Share your nightmare through natural conversation
- ANTICIPATION - The room shifts, something is coming...
- MANIFESTATION - Your nightmare appears in AR, rendered uniquely to your fear
- FACING - Confront the creature with Rishi's encouragement
- TRANSMUTATION - Watch your nightmare dissolve and transform
- INTEGRATION - Reflect on the experience, carry the lesson forward
The entire experience adapts in real-time based on your responses, supporting 55+ languages through OpenAI's Realtime API.
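The eight beats above form a simple linear progression, which our state server can treat as a state machine. Here is a minimal Python sketch of that idea (class and method names are illustrative, not our actual server code):

```python
# Minimal sketch of the 8-beat ritual as a linear state machine.
# Beat names come from the journey above; everything else is illustrative.

BEATS = [
    "THRESHOLD", "INVITATION", "CONFESSION", "ANTICIPATION",
    "MANIFESTATION", "FACING", "TRANSMUTATION", "INTEGRATION",
]

class Journey:
    def __init__(self):
        self.index = 0  # every visitor starts at THRESHOLD

    @property
    def beat(self):
        return BEATS[self.index]

    def advance(self):
        """Move to the next beat; hold on INTEGRATION once reached."""
        if self.index < len(BEATS) - 1:
            self.index += 1
        return self.beat

j = Journey()
while j.beat != "MANIFESTATION":
    j.advance()
print(j.beat)  # MANIFESTATION
```

In the real installation the `advance()` step is driven by conversation events rather than a loop, but the linearity is the point: every device only ever needs to know the current beat.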
How we built it
- Hardware Integration - We integrated WebSocket and REST API calls to read sensors and trigger motion across inertial measurement units, robot control protocols, LED lighting signals, and low-frequency transducers that generate reverberating sound waves to rattle your nerves. We set up a private network infrastructure that made the installation movable and reconfigurable while reducing failure points. We also wrote custom Python web servers and HTML interfaces, first to test and then to manage interactions across multiple pieces of hardware, creating a repeatable, event-ready platform with fallback triggers in case any single element started to fail - a critical requirement for any live production experience.
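The fallback-trigger idea can be illustrated with a short sketch (not our exact code): each trigger carries an ordered list of transports to try, so a failed WebSocket push degrades to a REST call instead of stalling the show.

```python
# Illustrative fallback dispatcher: try each transport in order and
# stop at the first one that succeeds, so no single failure halts the show.

def send_ws(event):
    raise ConnectionError("socket down")  # simulate a failed WebSocket push

def send_rest(event):
    return f"POST ok: {event}"  # simulate a successful REST fallback

def trigger(event, transports):
    for send in transports:
        try:
            return send(event)
        except ConnectionError:
            continue  # fall through to the next transport
    return None  # every transport failed; degrade gracefully

result = trigger("lights:storm", [send_ws, send_rest])
```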
Hardware Components
- Snap Spectacles (5th Gen) - AR display with spatial AI and mesh scanning
- Reachy Robot (Rishi) - Expressive humanoid upper body with custom emotion/gesture library
- ESP32 Microcontroller - Central WebSocket server handling door sensors and rotation detection
- Fear Box - Custom-built enclosure with physical triggers
Software Stack
- Lens Studio - TypeScript-based AR development
- OpenAI Realtime API - GPT-4o-realtime for natural voice conversation (24kHz PCM16 audio)
- Semantic VAD - Voice activity detection with configurable eagerness
- Node.js - State management server (Pamir Distiller)
- TouchDesigner - Real-time audio/visual environment control
- WebSocket Protocol - Low-latency communication between all components
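The Realtime session is configured by sending a `session.update` event over the WebSocket. This sketch shows the shape of that payload, including semantic VAD with configurable eagerness and PCM16 audio; the specific field values (voice choice, instructions) are illustrative, not our production settings:

```python
import json

# Shape of the session.update event sent over the Realtime API WebSocket.
# Semantic VAD decides when the visitor has finished speaking; "eagerness"
# tunes how quickly the model jumps in.
session_update = {
    "type": "session.update",
    "session": {
        "modalities": ["audio", "text"],
        "voice": "alloy",                  # illustrative voice choice
        "input_audio_format": "pcm16",     # 24kHz PCM16 in
        "output_audio_format": "pcm16",    # 24kHz PCM16 out
        "turn_detection": {
            "type": "semantic_vad",
            "eagerness": "auto",           # low | medium | high | auto
        },
        "instructions": "You are Rishi, a warm robot companion...",
    },
}

payload = json.dumps(session_update)
```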
AI Integration
- Real-time voice-to-voice conversation with sub-300ms latency
- Function calling enables the AI to trigger story beats, control robot emotions, and spawn AR creatures
- System prompts crafted for therapeutic, encouraging dialogue
- Multi-language support through OpenAI's native language handling
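Function calling is what lets the model drive the hardware: tools are declared as JSON schemas in the session, and when the model calls one, the state server routes it to the right device over WebSocket. A hedged sketch of that routing (the tool names here are hypothetical, not our real schema):

```python
# Illustrative tool declarations for the Realtime session. When the model
# calls one, the state server routes it to the right device over WebSocket.
TOOLS = [
    {
        "type": "function",
        "name": "set_robot_emotion",   # hypothetical tool name
        "description": "Set Rishi's emotion and matching gesture.",
        "parameters": {
            "type": "object",
            "properties": {
                "emotion": {
                    "type": "string",
                    "enum": ["happy", "sad", "curious", "surprised", "excited"],
                },
            },
            "required": ["emotion"],
        },
    },
    {
        "type": "function",
        "name": "advance_beat",        # hypothetical tool name
        "description": "Move the experience to the next story beat.",
        "parameters": {"type": "object", "properties": {}},
    },
]

def route_tool_call(name, args):
    """Toy router: map a model tool call to a device-bound message."""
    if name == "set_robot_emotion":
        return ("reachy", args["emotion"])
    if name == "advance_beat":
        return ("state_server", "next_beat")
    raise ValueError(f"unknown tool: {name}")

msg = route_tool_call("set_robot_emotion", {"emotion": "curious"})
```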
Challenges we ran into
The Great Git Nuke - Midway through the hackathon, our repository got corrupted. We had to run git fsck and manually reconstruct commits (yes that is a real command, yes we found it absolutely hilarious). Lesson learned: commit early, commit often, push immediately (BUT NOT SCENE.SCENE IN A LENS STUDIO PROJECT OTHERWISE EVERYTHING WILL FALL APART).
Lens Studio Scene Graph Learning Curve - Coming from Unity/Unreal, the Lens Studio scene graph operates differently. Understanding the relationship between SceneObjects, Components, and the update loop took significant iteration. We also struggled with the proper collaboration workflows.
Hardware Synchronization - Getting the Spectacles, robot, sensors, and lighting to respond in harmony required careful WebSocket message choreography. Race conditions were our nemesis. This also introduced the need for fallback systems to account for any real time hiccups to create a better story for the users.
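One pattern that tames these race conditions (shown here as an illustrative sketch, not our production code) is stamping every broadcast with a monotonic sequence number, so each device can drop stale messages that arrive out of order:

```python
# Illustrative stale-message guard: each device remembers the highest
# sequence number it has applied and ignores anything older.

class DeviceState:
    def __init__(self):
        self.last_seq = -1
        self.scene = None

    def apply(self, msg):
        """Apply a broadcast only if it is newer than what we last saw."""
        if msg["seq"] <= self.last_seq:
            return False  # stale or duplicate: drop it
        self.last_seq = msg["seq"]
        self.scene = msg["scene"]
        return True

lights = DeviceState()
lights.apply({"seq": 2, "scene": "MANIFESTATION"})
applied_late = lights.apply({"seq": 1, "scene": "ANTICIPATION"})  # arrives late
```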
Physical Installation Constraints - Without a place to keep testing in the late evening (past 11), we also had to scan and test in Pratik's hotel, which created conflicts with some of the interactions on the last day. This pushed us to engineer a portable network harness that could contain all the elements and move seamlessly between home, hotel, and institutional networks without being reconfigured each time.
Robot Personality Design - Making Rishi feel warm and trustworthy while discussing fears required careful prompt engineering. Too clinical felt cold; too casual undermined the gravity of the experience.
Accomplishments that we're proud of
- True Multi-Device Synchronization - Spectacles, robot, sensors, and environment all responding as one unified experience
- Reality-Grounded AR - A true integration of augmented technology into the physical world, creating moments of seamless engagement with tangible outcomes instead of just holo gestures and air swipes. We engaged the Reality of Augmented Reality and brought real surprise and excitement to users, in some cases legitimately frightening them with nothing more than sound waves and photons.
- Sub-300ms Voice Response - Natural conversation that doesn't break immersion
- 55+ Language Support - A user can share their nightmare in Mandarin, Spanish, or any supported language
- Emotional Robot Expressions - Rishi has a full library of emotions (happy, sad, curious, surprised, excited) and gestures that respond to conversation context
- The Pivot That Worked - We changed direction three times and emerged with something more compelling each iteration
- Team Chemistry - Five strangers became a hella tight crew who genuinely enjoyed building together
What we learned
- Pivot with purpose - When something isn't working, kill it quickly. Our best ideas came from the ashes of abandoned approaches.
- Physical prototyping matters - No amount of digital planning replaces having materials in hand. Bring more supplies than you think you need.
- Lens Studio is powerful but different - It's not Unity. Embrace its paradigms rather than fighting them.
- Real-time AI changes everything - Voice-to-voice conversation with function calling enables experiences that felt impossible two years ago.
- Embodiment amplifies emotion - Seeing your nightmare in AR while a robot encourages you hits different than a screen-based experience.
What's next for BogARt
Location-Based Entertainment (LBE)
- Partner with escape rooms, haunted houses, and immersive theater companies
- Develop a modular installation kit that venues can license
- Create themed variations (horror, fantasy, sci-fi aesthetic options)
Therapeutic Applications
- Collaborate with exposure therapy researchers to validate efficacy
- Develop clinician dashboard for session monitoring
- Create structured protocols for phobia treatment (spiders, heights, public speaking)
- Explore PTSD applications with trauma-informed design
Technology Expansion
- Port to Meta Quest, Apple Vision Pro for broader reach
- Integrate biometric feedback (heart rate, galvanic skin response) to adapt pacing
- Build a library of procedurally generated nightmare visualizations
- Add multiplayer support for group therapy sessions
Content & Personalization
- Train custom models on fear archetypes for richer nightmare generation
- Develop post-experience integration tools (journaling prompts, follow-up conversations)
- Create an API for third-party developers to build on the platform
Built With
- 3d-printing
- ar
- blender
- esp32
- gpt-4o
- javascript
- lens-studio
- mesh-scanning
- node.js
- openai-realtime-api
- python
- reachy-robot
- semantic-vad
- snap-spectacles
- spatial-ai
- touchdesigner
- typescript
- voice-ai
- websockets
