Inspiration
What inspired us as a team is the foreseeable impact of what our project could do. Bad posture is an extremely common habit, especially among university students who spend all day typing at a computer. We wanted to build something that would help these students, because we think health should be a top priority. Our teammate Alex’s experience with scoliosis made this challenge even more personal. We believe that fixing posture is a small change with a huge, measurable impact.
What it does
PosturePal uses a machine learning model to place landmarks on your body (ears, neck, and shoulders) through your camera. From those points, it calculates when you're slouching, gives you live feedback, and produces an overall posture score for the session.
How we built it
We built PosturePal in Python, using OpenCV and MediaPipe for real-time pose detection. The system analyzes side-profile camera input to recognize slouching and track posture over time. We also implemented a timer and scoring feature so users can measure posture quality during each session.
Making the Data Usable (Math Behind PosturePal)
To transform raw camera input into meaningful feedback, we applied several mathematical steps:
- Coordinate Extraction: Using MediaPipe, we obtained (x, y) coordinates of key body landmarks (shoulders, ears, hips).
- Vector Calculations: We formed vectors (e.g., ear–shoulder, shoulder–hip) to represent posture angles.
- Thresholding: If the measured angle exceeded a set threshold (e.g., more than 15° forward tilt), it was flagged as slouching.
- Time Integration: We summed the time spent slouching versus sitting upright across the session to calculate an overall posture score.
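The angle, thresholding, and scoring steps above can be sketched in a few lines of Python. This is a minimal illustration rather than our exact code: the function names and sample coordinates are made up for the example, and in the real app the (x, y) inputs come from MediaPipe's normalized pose landmarks.

```python
import math

SLOUCH_THRESHOLD_DEG = 15.0  # forward-tilt threshold from the write-up

def forward_tilt_deg(ear, shoulder):
    """Angle of the ear-shoulder vector relative to vertical, in degrees.

    0 degrees means the ear sits directly above the shoulder (upright);
    larger values mean the head leans further forward.
    """
    dx = ear[0] - shoulder[0]
    dy = shoulder[1] - ear[1]  # image y grows downward, so flip it
    return math.degrees(math.atan2(abs(dx), dy))

def posture_score(frame_angles):
    """Fraction of frames spent upright, reported as a 0-100 session score."""
    upright = sum(1 for a in frame_angles if a <= SLOUCH_THRESHOLD_DEG)
    return 100.0 * upright / len(frame_angles)

# Illustrative normalized coordinates, as MediaPipe would report them:
angle = forward_tilt_deg(ear=(0.48, 0.30), shoulder=(0.45, 0.45))
slouching = angle > SLOUCH_THRESHOLD_DEG
```

Counting upright frames rather than integrating wall-clock time gives the same score when the frame rate is steady, which keeps the sketch simple.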
Challenges we ran into
- Ensuring accurate posture detection while keeping the app responsive.
- Handling real-time video input smoothly with MediaPipe and OpenCV.
- Frontend integration with React JS and FastAPI was not fully completed within the hackathon timeframe, so the working demo runs entirely through the Python backend.
Accomplishments that we're proud of
One of the best parts of the project was learning to communicate and collaborate with our teammates. We went into the hackathon not knowing how to use GitHub, and we finished with a published repo full of computer vision and data analysis code. This wouldn't have been possible without a shared effort to understand and communicate with each other. We each ran into problems specific to our roles, whether in code or system design, and from there we would help each other debug whatever needed to be fixed. Our individual experience in areas like video editing, mathematical analysis, and computer vision setup meant we could teach each other, ultimately leading to the creation of the project.
What we learned
Once again, what we thought was the most important lesson was learning how to work with each other in a high-pressure environment. In moments where everyone is frustrated and pressed for time, you need your best empathy and understanding for your teammates' perspectives so you can help them as quickly as possible. For example, when I (Ethan) was struggling to implement the "mid-slouch" parameters with the appropriate detection thresholds, Will would take his time to understand what I was saying, identify the problem, and from there we could actually debug our code together. Of course this came with trial and error; we initially started off arguing with each other, but as the hackathon progressed, we learned how to communicate our ideas professionally. Finally, we also learned how to use GitHub from scratch, how to connect our computer vision code to a physical camera, and how to develop mathematical formulas that interpret our webcam activity (back posture) using body landmarks.
What's next for PosturePal
PosturePal is something we are genuinely passionate about, because we believe its impact can directly help college students, which of course is what we are. In the future, we would like to deploy PosturePal as a mobile app, so you can use your phone's camera to monitor your posture, making this type of technology accessible to a larger audience. We also plan to build more advanced models that can detect and distinguish specific types of healthy and unhealthy posture. With that information, we plan to have an LLM actively give specific, personalized feedback on what the user did wrong and the best course of action to fix that behavior.



