Inspiration

“Feel the rain on your skin / No one else can feel it for you / Only you can let it in” - Unwritten

Traditional therapy is notoriously flawed. People often can't describe how they feel, or they misinterpret their partner's silence as "anger" when it's actually "anxiety." Communication fails because even when feelings are explained, they aren't truly felt or understood, resulting in a roughly 50% recurrence rate after traditional therapy ends.

“Rain With Me” invites you to feel how someone else feels. We're turning 'I hear you' into 'I feel your feelings.'

Using biometrics and visual/haptic feedback, we created a biosensing empathy tool that lets someone else feel your emotions as rain on their skin. It works in a similar vein to the "constructive interference" condition of palpable empathy from Octavia Butler's science-fiction classic "Lilith's Brood," where feeling becomes a shared, embodied state rather than an abstract one.

Our shared vision is to create technology that expresses emotion and helps people connect on a deeper level. How do we connect the intellectual understanding of emotion with the viscerality of its expression? Or communicate with people who can't communicate with us – a partner who's far away, a parent without a shared language, a grandparent or child who can't speak?

We want this experience to inherit the calming essence of nature, so rain becomes our metaphor for emotions. Combined with a biomimetic aesthetic, participants are embodied as two frogs in the rain. For a brief moment, communication takes a different form. Like a gentle rain on a frog’s parched skin, the heat of a conversation begins to dissipate.

What it does

Rain With Me is a biometric empathy bridge that translates internal emotional states into an external shared reality between two people.

The system captures real-time physiological data (Galvanic Skin Response and Heart Rate) and audiovisual cues from one user (the Sender) via the Google Gemini Live API. This data is processed through a custom sensor fusion engine to determine the Sender's arousal (intensity) and valence (positivity/negativity).

This emotional state is instantly broadcast in two forms:

  1. Immersive AR Visualization: A dynamic weather system in Unity where rain intensity mirrors the Sender's stress levels.
  2. Haptic Feedback: When a second user (the Receiver) reaches out to "touch" the virtual rain, hand-tracking triggers vibration motors, allowing them to physically feel the volatility of their partner's emotions.
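As a sketch of the haptic path in (2): on contact, the bridge can map the Sender's arousal to a vibration-motor duty cycle and send it to the ESP32. The single-byte encoding, clamping, and UDP transport here are our assumptions for illustration, not the project's confirmed wire format.

```python
import socket

def arousal_to_duty(arousal: float, max_duty: int = 255) -> int:
    """Map normalized arousal in [0, 1] to an 8-bit PWM duty cycle.

    Clamping out-of-range inputs is an assumption for robustness.
    """
    arousal = min(max(arousal, 0.0), 1.0)
    return round(arousal * max_duty)

def send_haptic(sock: socket.socket, esp32_addr, arousal: float) -> bytes:
    """Encode the duty cycle as one byte and send it to the ESP32 over UDP."""
    payload = bytes([arousal_to_duty(arousal)])
    sock.sendto(payload, esp32_addr)
    return payload
```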

How we built it

We engineered a low-latency hub-and-spoke UDP architecture to synchronize hardware, AI inference, and AR visualization with under 50ms of latency.
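A minimal sketch of the hub's fan-out loop, assuming hypothetical localhost addresses for the spokes (the real port assignments are not specified here). UDP datagrams avoid connection handshakes and retransmission delays, which is what keeps per-hop latency low:

```python
import socket

def fan_out(sock: socket.socket, packet: bytes, spokes) -> None:
    """Relay one datagram from the hub to every registered spoke."""
    for spoke in spokes:
        sock.sendto(packet, spoke)

def run_hub(hub_addr, spokes) -> None:
    """Minimal hub: receive a state packet from any producer (sensors,
    AI inference) and fan it out to all spokes (Unity, haptics)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(hub_addr)
    while True:
        packet, _ = sock.recvfrom(1024)
        fan_out(sock, packet, spokes)
```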

  1. The Sensor Layer & Affective Computing: We utilized the Circumplex Model of Affect to map emotion.

    • Hardware: We built custom biosensors using an Arduino UNO and ADS1115 to capture raw Galvanic Skin Response (GSR) and photoplethysmography (PPG) data.
    • AI Inference: We utilized the Google Gemini Live API to perform multimodal sentiment analysis on the user's voice and facial expressions in real-time.
  2. The Python Fusion Engine: The core of our backend is bridge.py, a central Python controller that aggregates asynchronous data streams. It uses a weighted fusion algorithm to calculate "True Arousal":

    Final_Arousal = (GSR_Spike * 0.75) + (AI_Inference * 0.25)

    We prioritized physiological hardware data (the body's truth) over AI inference, using the AI primarily to contextualize the biological spikes.

  3. Networked Experience (Unity & Haptics)

    • The fusion engine broadcasts the calculated emotional state via UDP to our Unity frontend.
    • Unity AR: We used the XR Interaction Toolkit to render the rain. A custom script, BioReceiver.cs, modulates particle velocity based on the incoming UDP packets.
    • Haptic Feedback: We implemented hand-tracking to detect when the user touches the virtual rain. Upon contact, the system sends a signal to an ESP32 microcontroller, which drives haptic motors using PWM to simulate the sensation of rain hitting the hand.
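The weighted fusion from step 2 can be sketched as a small function. The 0.75/0.25 weights come from the formula above; clamping inputs to a normalized [0, 1] range is our assumption:

```python
def fuse_arousal(gsr_spike: float, ai_inference: float) -> float:
    """Weighted fusion of normalized signals, both in [0, 1].

    Physiological data is weighted 3:1 over AI inference, using the AI
    mainly to contextualize biological spikes. Clamping is an assumption.
    """
    gsr_spike = min(max(gsr_spike, 0.0), 1.0)
    ai_inference = min(max(ai_inference, 0.0), 1.0)
    return gsr_spike * 0.75 + ai_inference * 0.25
```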

Challenges we ran into

One of the biggest challenges we faced was designing the project and setting a primary context for the idea on the first day. We brainstormed many different ideas while weighing the strengths and drawbacks of devices like VR headsets, smart glasses, and AR glasses, depending on the kind of experience we wanted to create – a long-distance VR experience, or a natural conversation through smart glasses? We pivoted devices several times, including the Samsung Galaxy XR, RayNeo, and Snap Specs, as we came to understand what would and would not meet the vision of our project. We also had to rescope throughout the project.

A technical challenge we ran into was fusing asynchronous data streams. The biosensors stream at a high frequency, while the Gemini API calls are slower and strictly event-based. We had to write a non-blocking Python bridge that could buffer the fast hardware data while waiting for the AI inference to return, ensuring the Unity simulation remained smooth without "stuttering" while waiting for API responses.
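A minimal sketch of that buffering idea, assuming normalized [0, 1] signals: fast GSR samples go into a bounded buffer, and the last-known AI inference is reused until a fresh result arrives, so the value pushed to Unity never blocks on a pending API call. The class and method names are illustrative, not the actual bridge.py API:

```python
import collections

class FusionBuffer:
    """Buffer the fast sensor stream; reuse the last slow AI inference
    so the downstream simulation never stalls waiting on the API."""

    def __init__(self, maxlen: int = 256):
        self.gsr = collections.deque(maxlen=maxlen)  # high-frequency stream
        self.ai = 0.0  # last AI inference; neutral until the first result

    def push_gsr(self, sample: float) -> None:
        self.gsr.append(sample)

    def push_ai(self, inference: float) -> None:
        self.ai = inference  # arrives late and event-based

    def current_arousal(self) -> float:
        """Fuse the freshest GSR sample with the last-known AI value."""
        gsr = self.gsr[-1] if self.gsr else 0.0
        return gsr * 0.75 + self.ai * 0.25
```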

Accomplishments that we're proud of

Achieving both the physical wearables and the virtual visuals that we set out to build wasn't easy, but we kept both of these main components. We all tried something new and worked well together as a team. We're proud of pivoting quickly and staying flexible with our initial idea in order to reach a prototype within the time limit. We created our own assets, designed our own interactions, and invested effort in UX polish. The result is a multisensory experience that is engaging for the user and aesthetically pleasing as a designed solution, one that brings mindfulness and compassion to communication.

What we learned

During the concepting stage, it's better to work top-down rather than bottom-up. Learning about all the different tools we could use is valuable, but knowing how to find the key question to ask first – in this case, "Is Quest Link reliable on this device?" – can save a lot of time. Making an interactive experience in 3D space is fun, but it also brings new challenges for UX design: the approach to making it intuitive is drastically different from designing for a 2D screen in one's hand.

What's next for Rain With Me

As a biosensing empathy tool, we see this project being useful in the healthcare/therapy space for allowing people with communication issues or disabilities to express their feelings. We hope to iron out a few more bugs and develop more types of visual and emotional feedback, such as adding flower or plant interactions to better visualize the metaphor of a garden.
