Inspiration

Imagine waking up one day and realizing you can no longer smile. This is the reality for over 225,000 people in the United States affected by facial paralysis each year, representing a staggering $2.53 billion Total Addressable Market (TAM) for tools that help these patients recover.

Beyond this, over 5 million people undergo facial procedures annually, demonstrating a broader need for therapeutic and rehabilitative tools that enhance both facial functionality and confidence.

What it does

SmileSync is a groundbreaking XR therapy platform that helps users regain or enhance their ability to smile through guided, interactive exercises.

  • Step 1: Users are welcomed into an engaging XR therapy session by a conversational avatar.

  • Step 2: Guided smiling exercises begin, with real-time feedback from the avatar, powered by precise facial tracking.

  • Step 3: Users complete their session and receive rewards to encourage consistency and celebrate progress.

How we built it

We used Unity to develop this immersive experience, integrating the following SDKs:

  • NPC AI Engine SDK and Ready Player Me SDK for the conversational avatars, ensuring human-like interactivity.

  • Meta Movement SDK for real-time facial expression tracking, enabling seamless feedback and expression mirroring (see the sketch after this list).

  • Building Blocks SDK for camera rigging and hand-tracking to enhance user interaction and accessibility.
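To give a flavor of how the smile feedback can be wired up, here is a minimal C# sketch that reads lip-corner-puller weights from the Movement SDK's OVRFaceExpressions component. The SmileMeter class, the threshold value, and the choice of blendshapes are our illustrative assumptions, not the exact production code.

```csharp
using UnityEngine;

// Minimal sketch: read lip-corner-puller blendshape weights from the
// Meta Movement SDK's OVRFaceExpressions component and turn them into
// a 0..1 "smile score" that can drive the avatar's feedback loop.
// Class name and threshold are illustrative, not our shipped values.
public class SmileMeter : MonoBehaviour
{
    [SerializeField] private OVRFaceExpressions faceExpressions;
    [SerializeField, Range(0f, 1f)] private float smileThreshold = 0.4f;

    public float SmileScore { get; private set; }
    public bool IsSmiling => SmileScore >= smileThreshold;

    private void Update()
    {
        if (faceExpressions == null || !faceExpressions.FaceTrackingEnabled)
            return;

        // Lip corner pullers (FACS AU12) are the primary drivers of a smile.
        if (faceExpressions.TryGetFaceExpressionWeight(
                OVRFaceExpressions.FaceExpression.LipCornerPullerL, out float left) &&
            faceExpressions.TryGetFaceExpressionWeight(
                OVRFaceExpressions.FaceExpression.LipCornerPullerR, out float right))
        {
            SmileScore = (left + right) * 0.5f;
        }
    }
}
```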

Challenges we ran into

Every great innovation comes with its hurdles, and SmileSync was no exception:

  • Ensuring SDK compatibility across Quest Pro and Quest 3.
  • Optimizing real-time facial tracking while maintaining low latency (a smoothing sketch follows this list).
  • Achieving seamless integration of multimodal inputs without disrupting the immersive experience.
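The latency challenge is a classic trade-off: filtering raw tracking weights removes jitter but adds lag. Below is a minimal sketch of one common approach, frame-rate-independent exponential smoothing; the ExpressionFilter name and the responsiveness parameter are illustrative assumptions, not the tuned values we shipped.

```csharp
using UnityEngine;

// Illustrative latency/jitter trade-off: an exponential moving average
// over raw tracking weights. Higher 'responsiveness' follows the face
// faster but passes through more noise.
public static class ExpressionFilter
{
    public static float Smooth(float previous, float raw, float responsiveness, float deltaTime)
    {
        // Frame-rate-independent EMA: alpha approaches 1 as deltaTime grows,
        // so smoothing behaves consistently at 72, 90, or 120 Hz.
        float alpha = 1f - Mathf.Exp(-responsiveness * deltaTime);
        return Mathf.Lerp(previous, raw, alpha);
    }
}
```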

Accomplishments that we're proud of

LIVE DEMO!

Come try it at Table #24.

  • Successfully created a realistic and engaging therapy experience using XR avatars.

  • Integrated cutting-edge facial tracking technology to mirror user expressions with precision, enhancing therapeutic outcomes.

What we learned

  • Voice-to-3D integrations: Utilizing technologies like Meshy to transform voice inputs into 3D shapes or animations for a customizable therapy session.

  • AI-driven personalization: Employing advanced multimodal AI models to tailor therapy sessions based on user progress, emotional state, and preferences.

What's next for SmileSync

  • Multimodal features for environment understanding: Spatial awareness can create highly adaptive and personalized therapy environments, responding dynamically to user movements and surroundings.
