Inspiration

We were inspired by the idea that human movement is an untapped interface. Most digital systems rely on touchscreens, keyboards, or controllers. But our bodies already produce rich, real-time data. With advances in computer vision and AI, we wanted to explore what happens when gestures become a universal control layer.

As neuroscience and engineering students, we were especially interested in how motion could be translated into meaningful digital interaction: a scalable, touchless control system that could extend into creative technology, accessibility, and healthcare.

We built VIBE MAXXING to demonstrate that vision-based AI can turn natural movement into intelligent interaction.

What it does

VIBE MAXXING is a real-time AI-powered gesture recognition system.

Using computer vision, the system tracks hand landmarks through a webcam and converts motion data into dynamic control signals. These signals are transmitted live to a visual engine, where gestures directly manipulate generative particle systems and interactive visuals.

Instead of clicking or typing, users control digital environments using hand movement, pinch strength, motion speed, and spatial position.

While we demonstrated an artistic application, the underlying system is modular and can be integrated into:

  • Touchless control interfaces
  • Interactive installations
  • Accessibility tools
  • Healthcare or rehabilitation systems
  • Immersive creative technology platforms

How we built it

We built VIBE MAXXING using:

  • Python
  • OpenCV for real-time video capture
  • MediaPipe for AI-based hand landmark detection
  • OSC (Open Sound Control) for real-time signal transmission
  • TouchDesigner for generative visual rendering

Pipeline:

  1. Webcam captures live video.
  2. MediaPipe processes frames and extracts 21 hand landmarks.
  3. Landmark coordinates are normalized and mapped to control parameters.
  4. Gesture-intensity signals (such as pinch distance and motion velocity) are scaled and amplified.
  5. Data is sent via OSC to TouchDesigner.
  6. Visual particle systems respond instantly to gesture input.
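The steps above can be sketched roughly as follows. The OSC port and address paths (`/hand/pinch`, `/hand/pos`) are illustrative placeholders, not the project's actual values; in practice they would be matched to the OSC In CHOP settings in the TouchDesigner project. The landmark indices come from MediaPipe's 21-point hand model, where 4 is the thumb tip and 8 is the index-finger tip.

```python
import math

# Indices in MediaPipe's 21-landmark hand model.
THUMB_TIP, INDEX_TIP = 4, 8

def pinch_distance(a, b):
    """Euclidean distance between two normalized (x, y) landmarks."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def main():
    # Heavy dependencies imported here so the helper above stays standalone.
    import cv2
    import mediapipe as mp
    from pythonosc.udp_client import SimpleUDPClient

    # Port and OSC addresses are assumptions for this sketch.
    client = SimpleUDPClient("127.0.0.1", 7000)
    hands = mp.solutions.hands.Hands(max_num_hands=1,
                                     min_detection_confidence=0.6)
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            # Landmarks are already normalized to [0, 1] by MediaPipe.
            lm = result.multi_hand_landmarks[0].landmark
            pinch = pinch_distance((lm[THUMB_TIP].x, lm[THUMB_TIP].y),
                                   (lm[INDEX_TIP].x, lm[INDEX_TIP].y))
            client.send_message("/hand/pinch", pinch)
            client.send_message("/hand/pos", [lm[INDEX_TIP].x, lm[INDEX_TIP].y])
        cv2.imshow("hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
    cap.release()

# Run the live loop with: main()  (requires a webcam, opencv-python,
# mediapipe, and python-osc installed)
```

Because the landmarks are normalized to the frame, the same pinch values work regardless of camera resolution.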

Challenges we ran into

The biggest challenges were:

  • Stabilizing gesture signal noise
  • Scaling small motion ranges into meaningful visual effects
  • Managing real-time OSC communication without lag
  • Tuning sensitivity so pinch and motion felt natural instead of abrupt
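For the noise problem, one standard technique (a minimal sketch, not necessarily the exact filter we shipped) is an exponential moving average: each raw landmark value is blended with its smoothed history, trading a little latency for much steadier signals.

```python
class ExponentialSmoother:
    """One-pole low-pass filter for a noisy scalar signal.

    Higher alpha tracks the raw signal more closely (more responsive,
    noisier); lower alpha smooths harder (steadier, more latency).
    """
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.value = None  # no history until the first sample arrives

    def update(self, sample):
        if self.value is None:
            self.value = sample
        else:
            self.value = self.alpha * sample + (1 - self.alpha) * self.value
        return self.value
```

Running one smoother per control channel (pinch, x, y, velocity) keeps jitter out of the visuals without the lag of a long sliding-window average.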

What we learned

We learned that real-time AI systems are less about the model and more about signal engineering.

Hand tracking is only the first step. The real innovation is:

  • Signal normalization
  • Parameter mapping
  • Sensitivity tuning
  • Designing for natural interaction
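The mapping and tuning work above boils down to one recurring helper: remap a raw sensor range onto a visual-parameter range, clamp it, and optionally bend it with a power curve so small motions are amplified. The function below is our illustration of that idea; the name and signature are not from the project itself.

```python
def remap(x, in_min, in_max, out_min, out_max, curve=1.0):
    """Map x from [in_min, in_max] onto [out_min, out_max], clamped.

    curve != 1 applies a power curve to the normalized value:
    curve < 1 amplifies small motions, curve > 1 attenuates them.
    """
    t = (x - in_min) / (in_max - in_min)   # normalize to [0, 1]
    t = min(max(t, 0.0), 1.0) ** curve     # clamp, then shape response
    return out_min + t * (out_max - out_min)
```

For example, a pinch distance measured between 0.02 and 0.12 in normalized landmark units can drive a particle-emission rate from 0 to 100, with `curve=0.5` making the first few millimeters of pinch feel immediately responsive.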

What's next for VIBE MAXXING

Next steps include:

  • Expanding beyond single-hand tracking
  • Adding gesture classification models (beyond simple positional mapping)
  • Integrating adaptive AI that learns user-specific motion patterns
  • Exploring healthcare and rehabilitation applications
  • Packaging the system into a deployable SDK

Our long-term vision is to build scalable, AI-driven motion interfaces that redefine how humans interact with digital systems.

Built With

  • computer-vision
  • mediapipe
  • python
  • touch-designer