Inspiration

Drowning is the second leading cause of death for children ages 5-14. Often these children drown in public and home pools while a guardian is watching over them. Distractions arise, and the eyes of a watchful parent can be quickly drawn away from their child. We aim to provide a set of eyes that are always watching out for the safety of your children in the pool.

What it does

Aqua Sentry uses an object detection model, trained on a dataset of drowning people, to identify signs of drowning. Whether it be gasping for air, thrashing in the water, or going under, Aqua Sentry will see it. If Aqua Sentry does identify a drowning victim, it plays a loud alert to draw the attention of those nearby. Aqua Sentry also alerts emergency services and provides information on where the emergency is located. This is done using NFC tags, which store the location of the pool. The NFC tags also make our app easily accessible at any pool.

How we built it

The app is built using the Next.js framework. The frontend is made with TypeScript, Tailwind, and several component libraries. The backend is a Flask app that calls the Roboflow API, where our model is hosted. The model, a Roboflow 3.0 Object Detection (Fast) model, was trained using Roboflow Universe.
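To make the backend-to-Roboflow flow concrete, here is a minimal sketch of the kind of call a Flask route would wrap. The model id, version, and API key are placeholders, not our real values, and the exact request shape should be checked against Roboflow's hosted inference docs.

```python
# Sketch: send one video frame to a Roboflow hosted model and get back
# detections. Model id/version/key below are hypothetical placeholders.
import base64
import json
import urllib.request

ROBOFLOW_BASE = "https://detect.roboflow.com"  # Roboflow hosted inference endpoint

def inference_url(model_id: str, version: str, api_key: str) -> str:
    """Build the hosted-inference URL for one model version."""
    return f"{ROBOFLOW_BASE}/{model_id}/{version}?api_key={api_key}"

def detect_drowning(image_bytes: bytes, model_id: str,
                    version: str, api_key: str) -> dict:
    """POST a base64-encoded frame and return Roboflow's prediction JSON."""
    payload = base64.b64encode(image_bytes)
    req = urllib.request.Request(
        inference_url(model_id, version, api_key),
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

In the real app, a Flask endpoint receives frames from the frontend, forwards them through a call like `detect_drowning`, and triggers the audio alert when a drowning class is predicted above a confidence threshold.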

Challenges we ran into

We were greatly challenged by the training and implementation of our object detection model. First, we had to find a dataset that contained drowning victims. From there we trained YOLOv11, YOLOv8, and YOLOv5 models. These, however, had compatibility issues with our frontend and had to be scrapped. We eventually landed on Roboflow's object detection model, since it let us train the model and host it behind an API we could call.

Accomplishments that we're proud of

We are very proud of our integration of NFC tags in our project, which removes an accessibility barrier: anyone at the pool can open the app with a tap. It was a challenge to figure out how to store and utilize location data on the NFC tag, but once we did, communicating with emergency services became much easier.
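One way to store a pool's location on a tag is as an NDEF URI record pointing at the app with the coordinates in the query string. The sketch below encodes such a record by hand; the URL scheme and parameter names are illustrative assumptions, not our exact format.

```python
# Sketch: encode a pool's location as a single short NDEF URI record,
# the kind of payload an NFC tag holds. The app URL is hypothetical.
URI_PREFIXES = {"https://": 0x04, "http://": 0x03}

def ndef_uri_record(uri: str) -> bytes:
    """Encode one short NDEF URI record (MB=ME=SR=1, TNF=well-known)."""
    prefix_code, rest = 0x00, uri
    for prefix, code in URI_PREFIXES.items():
        if uri.startswith(prefix):
            # The NDEF URI payload abbreviates common scheme prefixes
            # to a single identifier byte.
            prefix_code, rest = code, uri[len(prefix):]
            break
    payload = bytes([prefix_code]) + rest.encode("utf-8")
    header = 0xD1  # MB | ME | SR set, TNF = 0x01 (NFC Forum well-known type)
    # header, type length (1), payload length, type 'U', then payload
    return bytes([header, 0x01, len(payload), ord("U")]) + payload

# Example: a tag that opens the app with this pool's coordinates.
record = ndef_uri_record("https://aquasentry.example/pool?lat=40.4237&lon=-86.9212")
```

When scanned, a phone resolves the record to the URL, so the app loads with the pool's location already known and can pass it along when contacting emergency services.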

What we learned

We were able to greatly expand our knowledge of object detection models. While we have used them before, we learned how important it is to choose the right model for the given situation and technology. Additionally, this was one member's first project outside of their coursework, so they learned a lot about app development and the relationship between the frontend and backend.

What's next for Aqua Sentry

Aqua Sentry hopes to improve its object detection model by training it on more data. We also hope to find a way to mitigate false positives that could waste first responders' time. Lastly, we would like to add an information section to the app with guides on life-saving techniques, like CPR, for those on the scene.
