✨ LUCID WEAVE
Don’t just play music. Weave it. Wear it. Be it.
✨ Inspiration
In lucid dreams, intent and reality collapse into one.
Lucid Weave was born from the collision of two distinct obsessions during team formation at MIT Reality Hack.
The Invisible Instrument: The desire to pull melodies out of thin air, turning empty space into a synthesizer.
The Dress of Dreams: The drive to make code wearable, manifesting digital signals into a physical, glowing garment that fuses fashion and technology.
We realized that separately, these were just cool tech demos. But combined? They became a new form of existence. We stopped asking "how do we build an interface?" and started asking "how do we inhabit the music and give ourselves permission to dream out loud?"
We moved away from "interface" and toward "inter-being."
🎮 What It Does
Lucid Weave is a spatial performance system that dissolves the boundary between the digital and the physical. It allows performers to paint music in the air and manifest it on their own bodies instantly.
🎨 Spatial Music Painting
The air around you is no longer empty space. It is a canvas. Using Snap Spectacles, hand gestures sculpt melodies and harmonies in 3D space. You do not trigger samples; you "pull" sound from the ether, shaping pitch and resonance with the sweep of an arm. Music is the ultimate form of self-expression, and we built this technology so it adapts to you, not the other way around.
💡 Visual to Physical Manifestation
This is not just an AR overlay. When you paint a melody, it appears as volumetric light trails in the headset, but simultaneously manifests on a custom LED dress tailored from the ground up. The digital magic bleeds into the real world.
🧘 Real-Time Embodiment
The performer becomes the instrument. The dress pulses with the rhythm of the synthesis. The visuals trail the movement of the hand. Music is no longer played. It is worn.
⚙️ How We Built It
Lucid Weave is a complex, three-layer system combining spatial input, ultra-low latency synchronization, and wearable hardware engineering.
1. Spatial Interface (Snap Spectacles)
We utilize the hand-tracking capabilities of the Snap Spectacles to map the 3D coordinate system to musical theory using a custom Lens Studio project.
2. The Serverless Nervous System
To maintain the illusion of magic, latency must be imperceptible. We utilize Supabase Realtime as a global broadcast relay. It acts as the central nervous system, bypassing traditional database writes in favor of ephemeral broadcast channels. This allows us to sync the AR glasses, the web audio engine, and the physical hardware with sub-30ms latency.
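A minimal sketch of what rides over that broadcast channel. The `NoteEvent` shape, field names, and the `encodeNote`/`decodeNote` helpers are illustrative assumptions, not the project's actual schema; in supabase-js v2 the sending side would call `channel.send({ type: "broadcast", event: "note", payload })` and every subscriber (web audio engine, ESP32 bridge) receives the same ephemeral payload with no database write.

```typescript
// Hypothetical note event broadcast from the Spectacles. Field names
// and the timestamp-based latency trick are illustrative assumptions.
interface NoteEvent {
  x: number; // normalized hand X position, 0..1 (maps to pitch)
  y: number; // normalized hand height, 0..1 (e.g. intensity)
  t: number; // sender timestamp in ms, lets receivers measure latency
}

// Serialize once on the sender side...
function encodeNote(e: NoteEvent): string {
  return JSON.stringify(e);
}

// ...and decode on each subscriber (audio engine, LED bridge).
function decodeNote(raw: string): NoteEvent {
  const o = JSON.parse(raw);
  return { x: o.x, y: o.y, t: o.t };
}

const wire = encodeNote({ x: 0.5, y: 0.8, t: 1000 });
const back = decodeNote(wire);
```

Keeping the payload this small is what makes ephemeral broadcast viable at sub-30ms: there is nothing to persist, only a tiny JSON frame to relay.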
3. Physical Manifestation (Wearable Hardware)
The dress is alive. A custom LED garment, driven by a Seeed Studio XIAO ESP32-S3 running Arduino firmware, listens to the Supabase broadcast and translates spatial parameters into dynamic light patterns using PWM control.
- High Notes: The dress flares into Electric Pink.
- Low Notes: The dress settles into a deep Amethyst Purple.
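The pitch-to-color blend the firmware performs can be sketched like this. The exact RGB values and the `noteToColor` helper are illustrative assumptions (the real mapping lives in the ESP32's C++ firmware); the idea is a linear interpolation between amethyst purple and electric pink that feeds the PWM duty cycles.

```typescript
type RGB = [number, number, number];

// Endpoint colors are assumed values, not the firmware's exact palette.
const AMETHYST: RGB = [110, 40, 180];      // low-note color
const ELECTRIC_PINK: RGB = [255, 20, 147]; // high-note color

// t = 0 -> lowest note, t = 1 -> highest note.
// Returns the RGB triple that would drive the three PWM channels.
function noteToColor(t: number): RGB {
  const u = Math.min(Math.max(t, 0), 1); // clamp to the valid range
  return [0, 1, 2].map((i) =>
    Math.round(AMETHYST[i] + (ELECTRIC_PINK[i] - AMETHYST[i]) * u)
  ) as RGB;
}
```

A mid-scale note (`t = 0.5`) lands on a blended magenta, so the dress sweeps smoothly through the spectrum as the melody rises and falls rather than snapping between two colors.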
🚧 Key Challenges (And How We Beat Them)
🔊 The Chaos of Sound vs. The Order of Theory
Random hand-waving sounds like noise. We had to implement Harmonic Constraints. We mapped the X-axis specifically to the Sargam Scale (and Major Pentatonic), creating "safe zones" in the air. This ensures that no matter how wild the gesture, the output is always harmonious and musical.
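The "safe zone" idea can be sketched as a quantizer: a normalized hand X coordinate snaps onto the nearest note of a major pentatonic scale, so every gesture lands on a consonant pitch. The `quantizeX` helper, the two-octave range, and the C root are illustrative assumptions, not the project's exact mapping.

```typescript
// C major pentatonic degrees, in semitones from the root.
const PENTATONIC = [0, 2, 4, 7, 9]; // C D E G A

// Map a normalized X position (0..1) onto a MIDI note inside the scale.
// rootMidi = 60 is middle C; octaves controls the playable range.
function quantizeX(x: number, rootMidi = 60, octaves = 2): number {
  const steps = PENTATONIC.length * octaves;          // total "safe" notes
  const clamped = Math.min(Math.max(x, 0), 1);        // keep gesture in range
  const i = Math.min(Math.floor(clamped * steps), steps - 1);
  const octave = Math.floor(i / PENTATONIC.length);
  return rootMidi + 12 * octave + PENTATONIC[i % PENTATONIC.length];
}
```

Because the output space contains only scale tones, even a chaotic sweep of the arm produces a coherent melodic contour; swapping `PENTATONIC` for Sargam degrees is a one-line change.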
📉 The Pivot: WebSockets vs. Supabase
We initially built our comms layer using a standard WebSocket API. It worked, but it felt "heavy"—the lag broke the immersion. Mid-hack, we pivoted to Supabase Realtime. By utilizing their lightweight Broadcast feature, we shaved off critical milliseconds, transforming the experience from "remote control" to "instant extension of the self."
🪡👗 Couture Meets Circuitry
Hardware is hard; wearable hardware is harder. We weren't just plugging in a strip; we tailored a dress from scratch. Managing power distribution for a full LED array from a wearable battery, while keeping the ESP32 and its Arduino firmware invisible in the garment, was a massive balancing act of aesthetics and engineering.
🏆 What We’re Proud Of
- Physicality: We didn't just stay in the headset. We brought the hologram out into the real world via the custom dress.
- Accessibility: We built an intuitive spatial Do-Re-Mi interface. You do not need to be a musician to play it; you just need to move.
- Aesthetics: We achieved an ethereal, dreamlike aesthetic that feels magical rather than technical.
🧠 What We Learned
Spatial computing is a biological feedback loop.
When sound is seen and felt, immersion becomes embodied. We learned that the most powerful interfaces are not just visible; they are experienced. We learned that "magic" in tech is simply the result of hiding the latency and removing the friction between intent and action.
🚀 What’s Next: The Dream Expands
For centuries, music has been humanity's bridge to the dream world. From Native American ceremonial drumming to Tibetan singing bowls used in meditation, music has always been the language that helps us transcend waking reality and access deeper states of consciousness and connection. The architecture we've built is designed for something far greater: a connected universe of musical dreamers.
1. 🤝 The Symphony of Shared Dreams
We are expanding the backend to support "Shared Rooms." Imagine a spatial jazz quartet where one performer sculpts the bassline in red light while another weaves blue arpeggios around them.
2. 🤖 The Kinetic Echo
Motion is a universal language. The X/Y/Z data broadcast by our system is platform-agnostic.
- Reachy Robot Integration: We are effectively building a "Digital Puppeteer." We plan to network a Reachy humanoid robot to subscribe to the same motion stream, mirroring the performer's gestures in a dance of silicon and soul.
3. ⚡ The Phantom Touch
We plan to weave haptic engines directly into the garment's cuffs, allowing the performer to feel a tactile "click" when they pluck a virtual string in the air.
4. 🦎 Chameleon Identity
Integrating real-time AI to listen to the musical output and change the visual "material" of the dress on the fly. The garment doesn't just react to the note; it reacts to the vibe.
Built with 💜 by Team Dreamcatch-ARs at MIT Reality Hack 2026
Abraham: The Hardware Engineering Mad Scientist
Aishah: The Fashion Technologist & Dress Concept Artist
Meghna S: Dress Concept Artist, Creator & the Spec-tacular Queen
Krunal MB Gediya: The Spec-tacular Guy Doing All Things Krazyy
Built With
- arduino
- argb
- esp32
- lensstudio
- spectacles
- supabase