Inspiration
Communication goes beyond words—research famously attributes over half of emotional expression to facial cues. But what if you want to express emotion without showing your face? For people concerned about privacy, those with social anxiety, and anyone who wants to communicate emotionally without being on camera, current solutions fall short.
We asked ourselves: What if you could share your emotional state in real-time while keeping your face completely private?
Avatar - FaceFree bridges this gap—capturing your emotions through your webcam but only ever transmitting an abstract avatar representation. Your face never leaves your device; only the emotion does.
What it does
Avatar - FaceFree is a privacy-first emotion communication system that:
- Captures your face locally via webcam using computer vision
- Analyzes your emotion on-device using DeepFace AI (detecting happy, sad, angry, surprised, neutral, fear, disgust)
- Transmits only the emotion label (never your face) to a bridge server
- Displays a matching avatar on a T5AI embedded display in real-time
The avatar responds dynamically—smile and it smiles back, frown and it shows concern. Your facial data stays on your computer; the world only sees the avatar.
Privacy-First Design
- ✅ Face never leaves your device - Only emotion labels are transmitted
- ✅ No cloud processing - All face analysis happens locally
- ✅ No recordings - Video frames are processed and immediately discarded
- ✅ Minimal data transmission - Just "happy:85" style strings over the network
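To put "minimal data transmission" in perspective, an entire emotion update is a short string, while a single raw webcam frame is nearly a megabyte. A rough comparison (the frame size assumes a 640x480 BGR capture, OpenCV's common default; the `"happy:85"` string is the wire format described above):

```python
# Rough size comparison: one emotion update vs. one raw webcam frame.
# Assumption: 640x480 BGR frame, 3 bytes per pixel (a typical OpenCV capture).
emotion_update = "happy:85"          # the only data that leaves the device
frame_bytes = 640 * 480 * 3          # one uncompressed video frame

print(len(emotion_update.encode()))  # 8 bytes
print(frame_bytes)                   # 921600 bytes
print(frame_bytes // len(emotion_update.encode()))  # over 100,000x smaller
```

Even with HTTP headers and JSON wrapping, each update stays in the tens of bytes.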
Use Cases
- Private Video Calls - Share emotions without showing your face
- Anonymous Support Groups - Express feelings while maintaining anonymity
- Therapy Sessions - Emotional feedback without the pressure of being on camera
- Gaming & Streaming - React to content without face reveal
- Workplace Communication - Emotional presence without appearance concerns
How we built it
System Architecture
┌─────────────────────────────────────────┐
│              YOUR DEVICE                │
│  ┌────────────┐     ┌────────────┐      │
│  │   Webcam   │────►│  DeepFace  │      │
│  │            │     │  (Local)   │      │
│  └────────────┘     └─────┬──────┘      │
│                           │             │
│   ONLY EMOTION LEAVES YOUR DEVICE       │
└───────────────────────────┼─────────────┘
                            │ POST "happy:85"
                            ▼
                   ┌──────────────────┐
                   │  Bridge Server   │
                   │  (Flask :5000)   │
                   └────────┬─────────┘
                            │ GET /listener/emotion
                            ▼
                   ┌──────────────────┐
                   │   T5AI Display   │
                   │   ┌──────────┐   │
                   │   │ 😊 🙁 😠 │   │
                   │   │  Avatar  │   │
                   │   └──────────┘   │
                   └──────────────────┘
Hardware
- Tuya T5AI Development Board - ARM Cortex-M33 MCU @ 320MHz
- 320x480 LCD Display - For rendering expressive avatar faces
- WiFi Module - Real-time server communication over local network
Software Components
1. Privacy-Preserving Emotion Client (Python)
from deepface import DeepFace
import cv2
import requests

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        continue
    # Face is analyzed LOCALLY - never transmitted
    result = DeepFace.analyze(frame, actions=['emotion'], enforce_detection=False)
    emotion = result[0]['dominant_emotion']
    confidence = int(result[0]['emotion'][emotion])
    # Only the emotion label is sent - face stays private
    requests.post("http://server:5000/speaker/emotion_text",
                  json={"emotion": emotion, "confidence": confidence})
    # Frame is immediately discarded - never stored
    del frame
2. Bridge Server (Flask)
current_emotion = 'neutral'
current_confidence = 50

@app.route('/speaker/emotion_text', methods=['POST'])
def speaker_post_emotion():
    """Receives emotion from camera client (no face data)"""
    global current_emotion, current_confidence
    data = request.get_json()
    current_emotion = data.get('emotion', 'neutral')
    current_confidence = int(data.get('confidence', 50))
    return jsonify({"status": "ok"})

@app.route('/listener/emotion', methods=['GET'])
def listener_get_emotion():
    """Returns current emotion for display boards"""
    return f"{current_emotion}:{current_confidence}", 200
3. Embedded Display Client (C/TuyaOS)
static void http_update_thread(void *arg)
{
    while (1) {
        GW_WIFI_STAT_E wf_stat = get_wf_status();
        if (wf_stat >= 4) {               // WiFi connected
            update_emotion_from_server(); // GET /listener/emotion
        }
        tal_system_sleep(1000);           // Poll every second
    }
}
4. Procedural Avatar Rendering
- Memory-efficient - No bitmap storage, faces drawn algorithmically
- 6 avatar states - Happy, Sad, Angry, Surprised, Neutral, Confused (DeepFace's seven detected labels are collapsed onto these six)
- Double-buffered display - Smooth, flicker-free updates
- Expressive features - Dynamic eyes, mouth, and eyebrows
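Since DeepFace reports seven labels but the avatar renders six states, a small lookup collapses the extras. A minimal sketch of that mapping (the choice to render both fear and disgust as the Confused face is an assumption, not confirmed by the project):

```python
# Map DeepFace's seven emotion labels onto the six avatar states.
# Assumption: 'fear' and 'disgust' both render as the Confused face.
AVATAR_STATES = {
    'happy': 'Happy',
    'sad': 'Sad',
    'angry': 'Angry',
    'surprise': 'Surprised',
    'neutral': 'Neutral',
    'fear': 'Confused',
    'disgust': 'Confused',
}

def to_avatar_state(label: str) -> str:
    # Fall back to Neutral for anything unexpected from the wire
    return AVATAR_STATES.get(label, 'Neutral')
```

A table like this keeps the embedded renderer simple: it only ever has to draw six faces, regardless of what the classifier emits.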
Technical Highlights
| Component | Technology | Privacy Benefit |
|---|---|---|
| Face Detection | DeepFace (local) | Face never leaves device |
| Data Transmitted | Emotion label only | ~20 bytes per update |
| Server | Stateless Flask | No data storage |
| Display | Procedural rendering | No face reconstruction possible |
Challenges we ran into
1. Privacy-Performance Balance
DeepFace provides accurate local emotion detection but requires computational resources. We optimized:
- Frame processing rate (1 FPS is sufficient for emotions)
- Model selection (lightweight models for faster inference)
- Memory management (immediate frame disposal)
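The 1 FPS throttle above can be sketched as a paced capture loop: time each iteration and sleep off the remainder, so the CPU idles between analyses (the `max_frames` parameter is an addition for testability, not part of the project code):

```python
import time

def capture_loop(analyze_frame, fps=1.0, max_frames=None):
    """Call analyze_frame at most `fps` times per second.

    max_frames caps the iteration count (None = run forever);
    it exists here only to make the sketch testable.
    """
    interval = 1.0 / fps
    n = 0
    while max_frames is None or n < max_frames:
        start = time.monotonic()
        analyze_frame()
        n += 1
        # Sleep off whatever is left of this frame's time slice
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, interval - elapsed))
    return n
```

At 1 FPS the DeepFace inference cost is amortized to one call per second, which is plenty for emotion tracking.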
2. Cross-Compilation Complexity
Building for ARM Cortex-M33 on macOS required:
- Custom toolchain configuration (arm-none-eabi-gcc)
- Resolving linker issues with TuyaOS peripheral libraries
- Managing include paths across multiple subsystems
3. Embedded Memory Constraints
The T5AI has limited RAM, so we:
- Used procedural face rendering instead of bitmap images
- Implemented efficient double-buffering
- Optimized HTTP response parsing to minimize allocations
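On the T5AI this parsing is done in C, but the logic for splitting the server's `"emotion:confidence"` response can be sketched in a few lines (the neutral fallback for malformed input is an assumption about the error handling, not taken from the firmware):

```python
def parse_emotion_response(body: str):
    """Parse the bridge server's "emotion:confidence" response.

    Returns (emotion, confidence). Assumption: malformed input
    falls back to a neutral face rather than raising.
    """
    emotion, sep, conf = body.strip().partition(':')
    if not sep or not conf.isdigit():
        return ('neutral', 0)
    return (emotion, int(conf))
```

Because the payload is a single delimited token pair, the C version can parse it in place with no heap allocations at all.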
4. Network Reliability
Ensuring stable emotion updates required:
- Robust WiFi reconnection handling
- Timeout management for HTTP requests
- Graceful degradation when server is unreachable
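The timeout-plus-fallback pattern can be sketched with the standard library (the project client uses `requests`, but the shape is the same; the URL and the neutral fallback value are illustrative assumptions):

```python
from urllib.request import urlopen
from urllib.error import URLError

def fetch_emotion(url="http://server:5000/listener/emotion", timeout=2.0):
    """Poll the bridge server; degrade gracefully on any failure.

    Assumption: when the server is unreachable or slow, the display
    should fall back to a neutral face instead of crashing.
    """
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.read().decode()
    except (URLError, OSError, ValueError):
        # Server unreachable, timed out, or bad URL: show neutral
        return "neutral:0"
```

Pairing a short timeout with a safe default keeps the display responsive even when WiFi drops mid-session.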
Accomplishments that we're proud of
- ✅ True privacy preservation - Face data never leaves the source device
- ✅ End-to-end working pipeline - Camera → Local AI → Server → Display
- ✅ Procedural avatar system - Memory-efficient, expressive face rendering
- ✅ Clean REST API - Simple, auditable data flow
- ✅ Web testing interface - Debug without camera hardware
- ✅ Sub-second latency - Near real-time emotional feedback
What we learned
- Privacy-by-design architecture - Building systems where privacy is inherent, not added
- Edge AI deployment - Running ML models locally for privacy preservation
- Embedded systems development - TuyaOS framework, ARM toolchains, display drivers
- Real-time system design - Balancing latency, reliability, and resource constraints
- Cross-platform development - Bridging Python, C, and web technologies
What's next for Avatar - FaceFree
Privacy Enhancements
- 🔐 End-to-end encryption - Encrypt emotion labels in transit
- 🧅 Tor integration - Anonymous emotion transmission
- 📱 Fully offline mode - Direct device-to-device communication
Feature Expansion
- 🎤 Voice emotion detection - Analyze tone without recording words
- 🎨 Custom avatars - Personalized avatar appearances
- 📊 Local emotion journaling - Track your emotions privately on-device
- 🔗 Multi-device sync - Multiple displays for group settings
Platform Growth
- 🖥️ Desktop app - Standalone application for Windows/Mac/Linux
- 📱 Mobile companion - iOS/Android apps
- 🎮 Game integration - SDK for streamers and content creators
- 💼 Enterprise API - Privacy-compliant emotion analytics
Built With
Hardware
- Tuya T5AI Development Board
- ARM Cortex-M33 MCU
- 320x480 LCD Display
Software
- Languages: C, Python, JavaScript, HTML/CSS
- Frameworks: TuyaOS, Flask, OpenCV
- AI/ML: DeepFace (local facial emotion recognition)
- Build Tools: CMake, Ninja, arm-none-eabi-gcc
- Protocols: HTTP REST API, WiFi
Privacy Technologies
- Local-only face processing
- Minimal data transmission
- Stateless server architecture