Inspiration
MoodNest was inspired by the idea that our homes should do more than respond to voice commands: they could also respond to how we feel. We wondered what it would look like if a home could sense your emotional state and support you the way a friend might. With AI becoming increasingly multimodal and emotionally aware, we wanted to explore how technology could shape environments that feel more human, empathetic, and adaptive.
What it does
MoodNest listens to your voice, analyzes the emotion behind it, and instantly adjusts a 3D home environment to match how you feel. When you speak, the system detects your emotional tone and linguistic sentiment, classifies your mood, and updates the lighting, colors, and music in a real‑time 3D apartment scene. Due to time constraints, the current environmental changes simply mirror the detected emotion; with more emotion categories and mood‑lifting triggers, MoodNest could transform a reactive smart home into one that feels thoughtful and emotionally aware.
How we built it
We built MoodNest as a full‑stack demo within the hackathon timeframe, connecting audio processing, AI reasoning, and real‑time 3D rendering.
Frontend
- React for a clean, responsive interface
- React Three Fiber + Three.js to render a custom 3D apartment model
- WebGL‑based lighting updates driven by AI output
- Audio recording and upload flow
- Smooth transitions between emotional presets using React state and interpolation
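The transition logic above boils down to interpolating between lighting presets each frame. Here is a minimal, language‑agnostic sketch in Python (the actual app does this with React state on the frontend, and the preset values below are hypothetical placeholders):

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def blend_presets(current: dict, target: dict, t: float) -> dict:
    """Blend two lighting presets: per-channel RGB color plus light intensity."""
    return {
        "color": [lerp(c, g, t) for c, g in zip(current["color"], target["color"])],
        "intensity": lerp(current["intensity"], target["intensity"], t),
    }

# Hypothetical presets: warm bright light for Happy, dim blue light for Sad.
HAPPY = {"color": [1.0, 0.85, 0.6], "intensity": 1.2}
SAD = {"color": [0.3, 0.4, 0.7], "intensity": 0.4}

# Advancing t from 0 to 1 over a second or so yields a smooth fade
# from the current mood's lighting to the new one.
halfway = blend_presets(HAPPY, SAD, 0.5)
```

Driving `t` from an animation loop (or a spring library) is what makes the mood change feel gradual rather than a hard cut.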
Backend
- FastAPI server exposing REST endpoints
- Receives and preprocesses audio recordings from the frontend
- Gemini 2.5 Flash‑Lite for sentiment analysis and linguistic mood classification
- Maps emotional output to four presets (Happy, Sad, Angry, Neutral)
- Converts emotional scores into lighting + music parameters
- Generates empathetic spoken responses using ElevenLabs
- Returns a structured JSON environment state to the frontend
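The mood‑to‑environment mapping at the heart of the backend can be sketched in plain Python. The preset values and field names here are hypothetical; in the real system the mood label comes from Gemini 2.5 Flash‑Lite and the resulting dict is returned as JSON by a FastAPI route:

```python
# Hypothetical environment presets for the four moods; real values are tuned in the app.
PRESETS = {
    "happy":   {"light_color": "#FFD27F", "light_intensity": 1.2, "music": "upbeat"},
    "sad":     {"light_color": "#4C6A92", "light_intensity": 0.4, "music": "ambient"},
    "angry":   {"light_color": "#8A2B2B", "light_intensity": 0.9, "music": "calming"},
    "neutral": {"light_color": "#FFFFFF", "light_intensity": 0.8, "music": "none"},
}

def environment_state(mood: str, confidence: float) -> dict:
    """Map a classified mood to the structured environment state sent to the 3D frontend.

    Unknown moods fall back to the neutral preset so the scene always stays valid.
    """
    preset = PRESETS.get(mood, PRESETS["neutral"])
    return {"mood": mood, "confidence": confidence, **preset}
```

A FastAPI endpoint would call `environment_state(...)` after the audio has been transcribed and classified, so the frontend receives one flat JSON object it can apply directly to the scene.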
Challenges we ran into
- Getting consistent emotion classification from short audio clips
- Mapping subjective emotions to objective lighting parameters
- Debugging Three.js lighting (normals, materials, shadows)
- Syncing real‑time updates between backend and 3D frontend
- Designing an experience that felt empathetic instead of gimmicky
- Balancing ambition with what we could realistically build (we originally planned to include computer vision)
- Dependency and environment issues during integration
What we learned
- How to process and analyze voice emotion using AI models
- How to build a multimodal pipeline that handles audio, text, and 3D rendering
- How to map abstract emotional states to concrete environmental parameters
- How to integrate FastAPI, Gemini, ElevenLabs, and Three.js into a cohesive system
- How to collaborate quickly under time pressure and iterate on a creative idea
What's next for MoodNest
- Integrating real IoT devices (smart bulbs, thermostats, speakers, blinds)
- Expanding emotion categories beyond four basic presets
- Creating a full emotional profile that adapts over time
- Adding biometric inputs from wearables
- Turning MoodNest into a real home‑automation layer powered by AI
Built With
- api
- elevenlabs
- gemini
- javascript
- node.js
- python
- tailwind
- three.js