Inspiration
We noticed two problems: people struggle to stay motivated working out at home, and many want to contribute to causes they care about but don't know how. What if every rep you do could plant a tree? What if your squats could clean the ocean? Motion4Good was born from the idea that fitness should have purpose, combining personal health goals with real-world environmental and social impact.
What it does
Motion4Good is a webcam-powered fitness platform that turns your home workouts into real charitable contributions. Users join community challenges focused on causes like reforestation, ocean cleanup, or charity donations. Using just your webcam, our AI tracks exercises in real time (jumping jacks, squats, bicep curls, etc.), and every rep you complete contributes to shared challenge goals.
The platform features an AI fitness coach powered by Cerebras that provides personalized workout advice, form feedback, and motivation, complete with voice responses. The coach remembers your fitness goals, medical history, and past conversations using RAG (Retrieval-Augmented Generation) to give truly personalized guidance.
How we built it
Frontend: React + TypeScript with shadcn/ui for a beautiful, accessible interface
Computer Vision: MediaPipe Pose + OpenCV for real-time exercise detection. We built custom detection algorithms for 7+ exercises using joint angle calculations and visibility checks. Each exercise has carefully tuned angle thresholds (e.g., lateral raises required multiple iterations to get hip-shoulder-wrist angles right).
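As a rough sketch of the approach (not our exact code), a joint angle comes from three pose landmarks; MediaPipe gives each landmark normalized x/y coordinates:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at landmark b, in degrees, formed by segments b->a and b->c.
    a, b, c are MediaPipe pose landmarks with normalized .x/.y coordinates."""
    ba = np.array([a.x - b.x, a.y - b.y])
    bc = np.array([c.x - b.x, c.y - b.y])
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))
```

For a lateral raise, a/b/c would be the hip, shoulder, and wrist landmarks.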
AI Coach:
- Cerebras qwen-3-32b reasoning model for intelligent responses
- Moorcheh RAG SDK to retrieve relevant context from chat history and medical info
- ElevenLabs TTS (Bill's voice) for spoken responses
- Custom prompt engineering to handle tool calling for challenge data
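At its core the coach is one chat-completions call; here is a minimal sketch using the Cerebras Python SDK (prompt assembly and error handling omitted):

```python
import os
from cerebras.cloud.sdk import Cerebras  # pip install cerebras_cloud_sdk

client = Cerebras(api_key=os.environ["CEREBRAS_API_KEY"])

def ask_coach(messages):
    # messages = system prompt + injected RAG context + chat history
    response = client.chat.completions.create(
        model="qwen-3-32b",
        messages=messages,
        max_tokens=800,  # reasoning models need room for thoughts + answer
    )
    return response.choices[0].message.content
```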
Backend: Python Flask API with MongoDB for user data, challenges, and rep tracking
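For illustration, rep contributions can land in Mongo through a small Flask endpoint like this (route and field names are illustrative, not our exact schema):

```python
from bson import ObjectId
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient("mongodb://localhost:27017")["motion4good"]

@app.post("/api/challenges/<challenge_id>/reps")
def add_reps(challenge_id):
    # Credit a user's completed reps toward the shared challenge goal
    body = request.get_json()
    db.challenges.update_one(
        {"_id": ObjectId(challenge_id)},
        {"$inc": {"total_reps": int(body["reps"])}},
    )
    return jsonify({"ok": True})
```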
Key Technical Challenges:
- Built state machines for rep detection with hysteresis to avoid false positives (see the lateral raise sketch under Challenges below)
- Implemented visibility checks so detection pauses when body parts leave the frame (sketched after this list)
- Cleaned reasoning tags (<think>…</think>) from AI responses
- Increased token limits for reasoning models that generate internal thoughts plus responses
- Forced AI to use real MongoDB ObjectIDs instead of making up fake challenge links
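The visibility check is conceptually simple; a sketch (landmark indices follow MediaPipe's PoseLandmark enum, and the 0.5 threshold is illustrative):

```python
# Shoulders, elbows, wrists in MediaPipe Pose landmark order
REQUIRED = [11, 12, 13, 14, 15, 16]

def body_in_frame(landmarks, min_visibility=0.5):
    # Pause rep counting whenever any required landmark is unreliable
    return all(landmarks[i].visibility >= min_visibility for i in REQUIRED)
```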
Challenges we ran into
Exercise Detection: Lateral raise detection was incredibly tricky. Initially it triggered when arms went overhead (10-30°), then again when lowering. We added an overhead threshold to reset state and prevent false counts, and it took multiple tuning sessions with debug logging to get the angles right (we ended up with UP=80°, DOWN=45°, OVERHEAD=145°).
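In sketch form, the final state machine looks like this (arm_angle is the hip-shoulder-wrist angle in degrees; thresholds as above):

```python
UP, DOWN, OVERHEAD = 80, 45, 145  # degrees, tuned via debug logging

class LateralRaiseCounter:
    def __init__(self):
        self.state = "down"
        self.reps = 0

    def update(self, arm_angle):
        if arm_angle >= OVERHEAD:
            self.state = "overhead"  # e.g., a jumping jack: don't count it
        elif self.state == "down" and arm_angle >= UP:
            self.state = "up"
        elif self.state in ("up", "overhead") and arm_angle <= DOWN:
            if self.state == "up":
                self.reps += 1  # count only a clean down->up->down cycle
            self.state = "down"
        return self.reps
```

The gap between UP (80°) and DOWN (45°) is the hysteresis band that keeps jittery frames from double-counting.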
Reasoning Model Woes: The Cerebras reasoning models (first zai-glm-4.6, then qwen-3-32b) were leaking internal reasoning tags to users and cutting off mid-sentence. We had to do three things (sketched after this list):
- Write regex to strip reasoning tags, including unclosed tags
- Increase max_tokens from 300 to 800 to fit reasoning plus response
- Add fallback logic to use reasoning field if content was empty
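The cleanup step, roughly (assuming qwen-3's <think>…</think> tags and a reasoning field on the response, as described above):

```python
import re

def clean_reply(message):
    text = message.content or ""
    text = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    text = re.sub(r"<think>.*", "", text, flags=re.DOTALL)  # unclosed tag
    text = text.strip()
    if not text and getattr(message, "reasoning", None):
        text = message.reasoning.strip()  # fallback when content came back empty
    return text
```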
RAG Context Not Working: The coach was retrieving context from Moorcheh but never actually adding it to the messages array sent to the AI, so it had no memory. Fixed by injecting RAG context as a system message.
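The fix is a one-liner in spirit; a sketch where retrieve_context stands in for the Moorcheh SDK lookup and SYSTEM_PROMPT is our coach prompt:

```python
def build_messages(user_id, user_message, history):
    context = retrieve_context(user_id, user_message)  # Moorcheh RAG (stand-in)
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    if context:
        # Without this injection, retrieval happens but the model never sees it
        messages.append({
            "role": "system",
            "content": "Relevant user context:\n" + "\n".join(context),
        })
    messages.extend(history)
    messages.append({"role": "user", "content": user_message})
    return messages
```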
AI Hallucinating Links: The coach kept making up fake challenge IDs like /challenges/9 instead of using real MongoDB ObjectIDs. We had to add explicit examples, format the tool output with IDs prominently at the top, and add multiple warnings in the system prompt.
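A sketch of the idea (field names are illustrative):

```python
def format_challenges_for_prompt(challenges):
    # Real ObjectIDs go first so the model copies them instead of inventing /challenges/9
    lines = ["Link challenges ONLY with these exact IDs:"]
    for c in challenges:
        lines.append(f"- id={c['_id']} -> /challenges/{c['_id']} ({c['name']})")
    return "\n".join(lines)
```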
Accomplishments that we're proud of
- Built 7+ exercise detection algorithms from scratch using MediaPipe landmarks
- RAG-powered personalized coaching that remembers user context across sessions
- Voice-enabled AI coach with natural-sounding responses
- Real-time pose tracking with smart visibility handling and form feedback
- Persistent challenge system where user contributions are saved and tracked
- Accessibility features including color blind mode and audio cues
- Successfully debugged and tuned complex AI systems (reasoning models, RAG, TTS) to work together seamlessly
What we learned
- Computer vision is hard: tuning angle thresholds requires real-world testing with actual users doing the exercises
- Reasoning models need special handling: they generate internal thoughts that demand higher token limits and careful parsing
- RAG is powerful but subtle: context retrieval only helps if you actually inject it into the AI’s messages
- AI needs extremely explicit instructions: even with examples, models will hallucinate data
- User feedback is critical: the lateral raise detection went through 5+ iterations based on real angle measurements
What's next for Motion4Good
- Live form feedback: complete the real-time coaching system that analyzes form every 5 seconds and provides audio corrections
- More exercises: add push-ups, planks, lunges, and other popular movements
- Mobile app: native iOS and Android apps with better camera handling
- Social features: friend challenges, leaderboards, and progress sharing
- Actual charity integrations: partner with One Tree Planted, Ocean Conservancy, and other organizations to make real donations
- Adaptive AI coaching: use workout history to automatically suggest progressive overload and personalized training plans
- Group video workouts: live sessions where users can work out together while contributing to shared goals