Inspiration

Have you ever wished you could ensure your movements are executed correctly and align perfectly? Whether it's performing an exercise with impeccable form, stretching effectively for physical therapy, or executing the sharpest moves on the dance floor, we all want to shine in the spotlight. We're driven by the transformative potential of precise movement, recognizing its impact on physical health and artistic expression. By equipping individuals with tools for refinement, we aim to foster confidence, self-expression, and overall well-being. Our mission is to democratize access to movement education, enabling everyone to embrace the joy of fluid, precise movement.

In our journey, we strive to break down barriers, making sure that cutting-edge movement analysis is not just a luxury for the few. With our platform, even those without immediate access to gyms, dance studios, or rehabilitation centers can enhance their movements and technique. We want to bring the dance studio and the therapist's expertise into every living room, ensuring inclusivity and the opportunity for personal growth. We envision a world where every step, stretch, and stance leads you closer to perfection, whether on TikTok's virtual stage, in a yoga studio, or the comfort of your home.

What it does

We created an interactive web tool that computationally assesses a user's movements and scores how closely they match a reference video (e.g., a person dancing or performing an exercise). Users can either upload a video of themselves or use our live webcam feature to compare their movements against those of expert physical therapists, fitness instructors, or dancers, set to any music. These reference templates cover a wide range of activities, from dance to fitness routines to therapeutic exercises.

Users are provided with a graph showing their performance and how closely they match the reference video over time. Our system not only matches users' movements to these standards but also provides areas of improvement and personalized recommendations to help users refine their techniques. Our live webcam feature even allows users to follow the reference video in real-time and receive feedback.

How we built it

  • React & Next.js for the frontend
  • Node.js for the backend
  • TensorFlow (MoveNet model) for pose estimation
  • Dynamic Time Warping algorithm for scoring

Our application leverages a React & Next.js front-end, providing a user-friendly interface that's hosted on Vercel for effortless deployment and scalability. The backbone of our back end is Node.js, which manages data processing and integrates seamlessly with our core technologies for movement analysis.

A critical component of our system is TensorFlow, specifically our deployment of the MoveNet pose estimation model. This model enables us to accurately detect and track keypoints of the user's movements in real time, forming the foundation of our interactive feedback system. Once poses are estimated, we use this data to dynamically render keypoints on the user interface, visually guiding users to align their movements with predefined expert patterns.

Further refining our analysis, we employ Dynamic Time Warping, an algorithm for aligning temporal sequences and comparing their similarity. This allows us to measure how closely a user's movements match with our expert templates. By quantifying movement accuracy, we can provide users with specific performance scores and insights, highlighting areas of excellence and those needing improvement. We also use sliding window techniques to identify these areas of the user's movement that are misaligned with the reference video.
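To make the scoring step concrete, here is a minimal sketch of classic Dynamic Time Warping. It is not our production code: for simplicity each frame is reduced to a single number, whereas in practice the cost function would compare full keypoint vectors between the user's frame and the reference frame.

```javascript
// Dynamic Time Warping: minimal cumulative cost of aligning sequence `a`
// with sequence `b`, allowing frames to stretch or compress in time.
// `cost` compares one frame of each sequence (here: absolute difference).
function dtw(a, b, cost = (x, y) => Math.abs(x - y)) {
  const n = a.length, m = b.length;
  // dp[i][j] = cheapest alignment of a[0..i) with b[0..j)
  const dp = Array.from({ length: n + 1 }, () =>
    new Array(m + 1).fill(Infinity));
  dp[0][0] = 0;
  for (let i = 1; i <= n; i++) {
    for (let j = 1; j <= m; j++) {
      dp[i][j] = cost(a[i - 1], b[j - 1]) +
        Math.min(dp[i - 1][j],      // user lags the reference
                 dp[i][j - 1],      // user runs ahead of the reference
                 dp[i - 1][j - 1]); // frames matched one-to-one
    }
  }
  return dp[n][m]; // lower score = closer match
}
```

Because DTW warps the time axis, a user who performs the right moves slightly faster or slower than the reference is not unfairly penalized; for example, `dtw([1, 2, 3], [1, 2, 2, 3])` is 0 even though the sequences differ in length.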

Challenges we ran into

We encountered challenges in real-time pose tracking, latency when translating Python models to JavaScript, and implementing the Dynamic Time Warping (DTW) algorithm. For example, an early problem in our React app was that when a user uploaded a video, each frame could only be fetched while the video was playing in the background, so extracting poses from a 40-second video took 40 seconds. Our solution was to seek directly to the specific frames we were interested in, running pose detection at 5 fps instead of on every frame. We also improved the efficiency of our frame-processing code and used Promises so that two videos could have their poses extracted concurrently, reducing the delay before results come back.
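The downsampling idea can be sketched as a pure helper that computes which timestamps to seek to, so pose detection runs at a fixed sample rate rather than on every frame. This is an illustrative function, not our app's exact code; in the browser, each timestamp would be assigned to `video.currentTime` and the frame grabbed after the `seeked` event fires.

```javascript
// Timestamps (in seconds) at which to sample a video for pose detection,
// at a fixed rate (default 5 fps) instead of the video's native frame rate.
function sampleTimestamps(durationSec, fps = 5) {
  const count = Math.ceil(durationSec * fps);
  // i / fps avoids the floating-point drift of repeatedly adding 1/fps.
  return Array.from({ length: count }, (_, i) => i / fps);
}
```

For a 40-second upload this yields 200 seek targets rather than roughly 1,200 frames at 30 fps, which is what made extraction fast enough to feel interactive.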

We also ran into issues with React state variables, and with normalizing and scoring user keypoint coordinates.
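The normalization problem boils down to making scores invariant to where the person stands and how large they appear in frame. A minimal sketch of one common approach (assumed for illustration, not our exact implementation): translate keypoints to their bounding-box origin and scale by the box's larger side.

```javascript
// Normalize keypoints ({x, y, ...} objects, as produced by pose estimators
// such as MoveNet) into a position- and scale-invariant coordinate frame.
function normalizeKeypoints(keypoints) {
  const xs = keypoints.map(k => k.x);
  const ys = keypoints.map(k => k.y);
  const minX = Math.min(...xs);
  const minY = Math.min(...ys);
  // Scale by the bounding box's larger dimension; `|| 1` guards against
  // division by zero when all keypoints coincide.
  const scale = Math.max(Math.max(...xs) - minX,
                         Math.max(...ys) - minY) || 1;
  return keypoints.map(k => ({
    ...k,
    x: (k.x - minX) / scale,
    y: (k.y - minY) / scale,
  }));
}
```

After this step, a user filmed close to the camera and a reference dancer filmed from across a studio produce comparable coordinates, so the distance-based scoring reflects pose shape rather than framing.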

Accomplishments that we're proud of

We are proud of achieving full-scale posture tracking and integrating advanced backend algorithms. Our success in building this sophisticated platform within 24 hours stands as a testament to our commitment and the power of collaboration.

What we learned

Throughout the development process, we gained a deeper understanding of pose tracking with TensorFlow, optimizing algorithms for real-time data processing (particularly from a live webcam feed), parallelizing inference tasks, and harnessing webcams to capture and analyze user movements.

What's next for YGroove

Looking ahead, we plan to expand our library of movements, integrate with more comprehensive health data, and explore machine learning models that can predict and correct movement patterns. Our goal is to become the go-to platform for anyone looking to perfect their movements, whether for health, fitness, or the sheer joy of dancing.

Additionally, we'd like to add a social aspect to the app: users could share a "challenge" link of themselves following the movements of a reference video (e.g., a new dance or exercise routine), and others could click the link and take their own shot at performing the movements, with a leaderboard showing the users who matched the reference most closely. This would spark friendly competition and encourage people to engage both socially and physically.

Built With

  • dynamic-time-warping
  • next.js
  • node.js
  • pose-detection
  • react.js
  • tensorflow