Inspiration
We were inspired by our shared interest in bowling. Many club social activities are held at Irvine Lanes, a local bowling alley, and we didn't want to be shown up at our club socials. While trying to practice our form, we figured it would be better to put that in the hands of a computer. Thanks to Qualcomm, we decided to develop an AI model to help ensure we would never fall behind again.
What it does
This project is a smart bowling form analyzer that uses an Arduino Modulino Movement sensor to track the roll, pitch, and yaw of a person's wrist during a bowling shot. A reinforcement-learning-based model then calibrates against the wrist data to predict the release angle most likely to throw a strike.
Our Arduino continuously streams pitch, roll, and yaw readings over serial at 115200 baud. Our main script, main.py, sits in a loop reading those values as the arm moves through the motion of throwing a bowling ball, parsing out the three values at 2 Hz. Each shot then gets passed to shot_predictor.py to be labeled as a MAKE or a MISS, where a MAKE means knocking down all 10 pins.
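The read-and-parse step might look like the sketch below. The exact line format the Arduino prints is an assumption here (we assume comma-separated floats), and the port path and helper name are hypothetical:

```python
def parse_reading(line: str):
    """Parse one serial line of the assumed form "pitch,roll,yaw"
    into a (pitch, roll, yaw) tuple, or None if the line is malformed."""
    parts = line.strip().split(",")
    if len(parts) != 3:
        return None
    try:
        pitch, roll, yaw = (float(p) for p in parts)
    except ValueError:
        return None  # skip partial or garbled serial lines
    return pitch, roll, yaw

# In main.py this would sit in a loop over the serial port (requires pyserial):
#   import serial, time
#   with serial.Serial("/dev/ttyACM0", 115200) as port:
#       while True:
#           reading = parse_reading(port.readline().decode("utf-8", "ignore"))
#           time.sleep(0.5)  # ~2 Hz sampling
```

Returning None for malformed lines keeps the loop robust to the partial lines that are common right after opening a serial connection.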
Our model runs a Bayesian Neural Network (BNN) for predictions, but falls back to a hardcoded formula (reasonable angle thresholds) whenever the BNN's uncertainty is too high (above 0.45). As more real data accumulates and the model continues to train, the BNN gradually takes over from the formula. Every 10 seconds, our run_analysis() function fine-tunes the BNN on all collected shot data; because the network outputs a probability distribution, we get uncertainty estimates alongside each prediction. To visualize these probabilities, the model generates a 3D scatter plot of the full probability landscape (roll × pitch × yaw) mapped to P(make), with the optimal point starred.
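The BNN-with-fallback logic can be sketched as below. This is a simplified stand-in, not the project's actual code: the threshold values in the heuristic are hypothetical, and the BNN is abstracted as a callable that draws one stochastic forward pass (in the real project this would be the PyTorch model):

```python
UNCERTAINTY_THRESHOLD = 0.45  # from the write-up: fall back above this

def heuristic_p_make(roll, pitch, yaw):
    """Hardcoded fallback formula. The angle thresholds here are
    hypothetical 'reasonable angle' bounds, not the project's real ones."""
    in_range = abs(roll) < 15 and -10 < pitch < 45 and abs(yaw) < 20
    return 0.8 if in_range else 0.2

def predict_make(roll, pitch, yaw, bnn_sample=None, n_samples=30):
    """Return P(make). bnn_sample draws one stochastic forward pass from
    the BNN; the sample mean is the prediction and the sample standard
    deviation is the uncertainty estimate."""
    if bnn_sample is None:
        return heuristic_p_make(roll, pitch, yaw)
    samples = [bnn_sample(roll, pitch, yaw) for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    std = (sum((s - mean) ** 2 for s in samples) / n_samples) ** 0.5
    if std > UNCERTAINTY_THRESHOLD:
        return heuristic_p_make(roll, pitch, yaw)  # too uncertain: use formula
    return mean
```

Because the fallback is checked on every prediction rather than switched off globally, the formula naturally fades out as training shrinks the BNN's uncertainty in well-sampled regions of angle space.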
After the plot is generated, our model script shaqmodel.py encodes it and sends it, along with the optimal pitch, roll, yaw, and P(make) score, to the Flask API. Flask decodes the image and stores the angle values in SQLite, and the frontend can then display both the live graph and the recommended form adjustments.
Over time, the RL component adapts the model to a person's specific bowling style and the environment they're bowling in, allowing it to provide personalized feedback rather than generic tips.
How we built it
We broke the project into three parts: Arduino logic, model logic, and frontend logic. The Arduino board handles data collection and cleaning; by using a Qwiic connector to attach the Modulino Movement module to the board, we were able to collect real-time data to train our model. The second part is the model itself: a scaled-down Bayesian Neural Network built with PyTorch, with reinforcement learning implemented via Gymnasium. This not only trains the model on new data but also outputs a new prediction for the optimal angle. Finally, the model posts its results to a Flask frontend, which displays the optimal angle and tracks previous data.
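To illustrate the "learn the best angle from feedback" idea, here is a deliberately simplified stand-in for the RL component: an epsilon-greedy bandit over discretized release angles. The real project uses PyTorch and Gymnasium; this class, its bin representation, and its prior are all illustrative assumptions:

```python
import random

class AngleBandit:
    """Epsilon-greedy search over discretized (roll, pitch, yaw) bins,
    maintaining a running P(make) estimate per bin. A toy sketch of the
    explore/exploit loop, not the project's actual Gymnasium setup."""

    def __init__(self, bins, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.stats = {b: [0, 0] for b in bins}  # bin -> [makes, attempts]

    def p_make(self, b):
        makes, attempts = self.stats[b]
        return makes / attempts if attempts else 0.5  # neutral prior

    def recommend(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.stats))  # explore a random bin
        return max(self.stats, key=self.p_make)       # exploit best estimate

    def record(self, b, made):
        """Update the chosen bin with the observed shot outcome."""
        self.stats[b][0] += int(made)
        self.stats[b][1] += 1
```

The exploration rate plays the same role as the uncertainty-driven fallback in the prediction path: early on the system tries many angles, then converges on the bins that keep producing strikes.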
Challenges we ran into
The Arduino UNO Q was our biggest challenge. Because it is a relatively new piece of technology we were not familiar with, it introduced a steep learning curve. The unique syntax of its associated IDE, the Arduino App Lab, also caused several issues when running our program: the IDE was quite slow and would often freeze, forcing us to quit and reboot it multiple times during testing. Lastly, lacking additional hardware components, we lost the opportunity to capture higher-fidelity data, which would have improved the accuracy of our model.
Accomplishments that we're proud of
We are proud of being able to pick up a new piece of hardware and still develop a working tool with it. We are happy with our perseverance and our ability to adapt to the quirks of new technology. We are also proud of taking multiple different approaches and fusing them into one larger project that performs a more complex task.
What we learned
We learned how to use Arduinos and the logic behind the new Arduino bridge for transferring data between an Arduino processor and a Linux machine. We also learned a lot about integrating a machine learning algorithm into a real program architecture. Lastly, we learned how to combine multiple architectural patterns (REST APIs, Arduino, and ML) into one larger project.
What's next for Strike!
The plan is to use higher-fidelity materials and additional sensors, which could better map out the arm and capture more of the nuance of the motion during an actual bowl. With that richer data we would retrain our AI model and pair it with an updated UI to display more useful data and allow for more user interaction.