Inspiration

Growing up, we’ve always wanted a safe space that would welcome our emotions, whether negative or positive. We came together to create In My Feels so we could be in our feels, for real. Inspired by the authenticity of BeReal and by research on the short-term effects of group singing (versus listening) on mood and state self-esteem, it's an authentic social media platform that encourages empathy and emotional support.

What it does

In My Feels is an emotional safe space where there is no right or wrong way to express emotions. After you sing any melody that resonates with you into an orb, In My Feels uses machine learning to analyze the tone of your voice and detect your emotion. Once an emotion is detected, the orb changes color and floats up to the cloud, a collection of emotions from your community. You and your friends can upload multiple emotions daily, and a private log helps you reflect on your emotions over time.

How we built it

  • We trained a CNN model that analyzes spectrograms generated from WAV files for Speech Emotion Recognition (SER).
  • We developed audio-reactive "blobs" in the environment by using Shader Graph to control vertex shaders, normals, and fragment shaders, creating jelly-like, holographic, colorful orbs.
  • We used Uvicorn and Flask to create an API to call the trained model for evaluation on new WAV streams.
  • We used Figma to prototype the user experience and visuals.
  • There were a lot of booleans in GameManager.
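
As a rough illustration of the spectrogram step that feeds the SER model, here is a minimal sketch using SciPy. The window parameters and function name are assumptions for demonstration, not the actual training code.

```python
import numpy as np
from scipy.signal import spectrogram

def wav_to_spectrogram(samples: np.ndarray, sample_rate: int = 16000):
    """Turn a mono audio signal into a log-power spectrogram,
    the kind of 2D input a CNN can consume."""
    freqs, times, power = spectrogram(
        samples, fs=sample_rate, nperseg=512, noverlap=256
    )
    # Log scaling compresses the dynamic range, similar to dB spectrograms.
    log_power = np.log1p(power)
    return freqs, times, log_power

# Example: one second of a synthetic 440 Hz tone standing in for a WAV stream.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t).astype(np.float32)
freqs, _, spec = wav_to_spectrogram(tone, sr)
```

In the real pipeline, each WAV recording would be converted this way and the resulting 2D array fed to the CNN classifier.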
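
A minimal sketch of what the evaluation API could look like with Flask. The `/evaluate` route, the label set, and the `predict_emotion` stub are illustrative placeholders, not the actual model integration.

```python
import io
import wave

from flask import Flask, jsonify, request

app = Flask(__name__)

# Illustrative label set; the real model's classes may differ.
EMOTIONS = ["happy", "sad", "angry", "calm"]

def predict_emotion(num_frames: int) -> str:
    # Placeholder: a real implementation would compute a spectrogram
    # from the uploaded audio and run the trained CNN on it.
    return EMOTIONS[num_frames % len(EMOTIONS)]

@app.route("/evaluate", methods=["POST"])
def evaluate():
    # Read the uploaded WAV and inspect it with the stdlib wave module.
    wav_bytes = request.files["audio"].read()
    with wave.open(io.BytesIO(wav_bytes)) as wav:
        num_frames = wav.getnframes()
    return jsonify({"emotion": predict_emotion(num_frames)})
```

The Unity client would then POST each recorded WAV stream to this endpoint and color the orb based on the returned emotion.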

Challenges we ran into

  • Deciding on the type of soundscape and finding audio feedback that is a good fit.
  • Getting hung up on hardware issues when there were alternative quick fixes.
  • Building the app solved some problems but created others - suddenly we couldn’t access our local server anymore!
  • Finding the right way to guide the user through the audio input experience.

Accomplishments that we're proud of

  • An early Wizard of Oz run-through and regular check-ins helped the team stay on the same page regarding the design and vision.
  • Finding the sweet spot between a challenge and an achievable scope.
  • Getting the vertex shader running smoothly on both eyes.
  • Our machine learning model reached 61% accuracy, close to the 66% benchmark reported in academic work on Speech Emotion Recognition (SER) with CNN models.

What we learned

  • How to quickly animate motion graphics for videos.
  • How to build spatial audio soundscapes in Unity.
  • What shader graphs are.
  • How to use async functions in a production-ready application.
  • How to implement animations and smoothing through math.
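
As an illustration of the math-based smoothing mentioned above, here is a frame-rate-independent exponential damping sketch; the function name and time constant are our own choices for the example, not the project's code.

```python
import math

def smooth_toward(current: float, target: float, dt: float, tau: float = 0.15) -> float:
    """Move `current` toward `target` with time constant `tau` (seconds).
    Because alpha depends on dt, the motion looks the same at any frame rate."""
    alpha = 1.0 - math.exp(-dt / tau)
    return current + (target - current) * alpha

# Simulate one second at 60 fps: the value eases toward the target
# without overshooting, which is what makes animations feel smooth.
value, target = 0.0, 1.0
for _ in range(60):
    value = smooth_toward(value, target, dt=1 / 60)
```

The same update works for positions, colors, or shader parameters driving the orbs.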

What's next for In My Feels

Completing the social network experience for In My Feels. Building the personal emotional log with weekly, monthly, and yearly views. Building the interactive point cloud experience.
