Inspiration
January 2025 was a chaotic and destructive start to the year as the Palisades Fire burned through Los Angeles. One of our team members' sisters had to put her education on hold and evacuate the UCLA campus to get out of harm's way. Many of our friends lost their homes, finances, and family stability, a story repeated throughout the city. Implementing earlier countermeasures would have been crucial and effective, and our team at ARDOF (Automated Response for Detecting and Observing Fire) believes that artificial intelligence is the key.
What it does
Think of ARDOF as a web demonstration of software that we hope to integrate into existing camera and sensor infrastructure. We expect this product to be most effective in rural areas, where traffic may be too light for anyone to report the early stages of a wildfire before things get out of hand. On our website, the user inputs an initial image to start the application; in actual use, we expect existing cameras to take pictures every 30 seconds to 1 minute and run them through two AI models.
1. The image is run through an ML image classifier to flag whether a fire is present.
2. If flagged, the image is run through an LLM to extract key information such as location, spread status, risk level, fire size, smoke height, fire stage, and flammable materials.
Immediately after a fire is flagged, initial information is sent to the nearest emergency service provider over a phone call (most likely 911 or the local fire department). Every 30 seconds after that, updated information is sent out over SMS to keep responders informed. Our website demos this by updating the image as the fire cycles through its stages.
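A minimal sketch of the alerting step using Twilio's Python SDK. The credentials, phone numbers, and alert wording are placeholders, and `send_alerts` with its `dry_run` flag is our own illustrative structure, not the app's actual function names.

```python
# Hedged sketch of the call + SMS alerting flow (placeholder values throughout).

def format_alert(details: dict) -> str:
    """Build the alert body from the extracted fire details."""
    return "WILDFIRE ALERT - " + "; ".join(f"{k}: {v}" for k, v in details.items())

def send_alerts(details: dict, dry_run: bool = True) -> str:
    """Place the initial voice call, then send SMS updates with new details."""
    body = format_alert(details)
    if not dry_run:
        from twilio.rest import Client  # pip install twilio
        client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials
        # Initial voice call to the nearest emergency dispatcher
        client.calls.create(
            to="+1XXXXXXXXXX", from_="+1YYYYYYYYYY",
            twiml=f"<Response><Say>{body}</Say></Response>",
        )
        # The 30-second SMS updates reuse the same client:
        client.messages.create(to="+1XXXXXXXXXX", from_="+1YYYYYYYYYY", body=body)
    return body
```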
How we built it
The ML model for initial fire detection was trained on Google Colab with an A100 runtime. The dataset consisted of 10,000 images, which we split into training and validation sets. We trained a MobileNetV2 model on this dataset because it extracts prominent features while staying computationally efficient.
The backend was written in Flask, with Twilio for calling and messaging and OpenAI's GPT-4o for feature extraction from flagged images.
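The GPT-4o call for feature extraction can be sketched as below. The prompt wording and helper names are our assumptions rather than verbatim app code; the payload shape follows OpenAI's chat-completions vision API.

```python
# Hedged sketch of the GPT-4o feature-extraction request.
import base64

FIELDS = ("location, spread status, risk level, fire size, "
          "smoke height, fire stage, flammable materials")

def build_messages(image_bytes: bytes) -> list[dict]:
    """Build a chat payload asking GPT-4o to extract the key fire details."""
    b64 = base64.b64encode(image_bytes).decode()
    return [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": f"From this wildfire image, report: {FIELDS}."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }]

def extract_features(image_bytes: bytes) -> str:
    """Send the flagged image to GPT-4o and return its text response."""
    from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o", messages=build_messages(image_bytes))
    return resp.choices[0].message.content
```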
The front end was written in HTML, CSS, and jQuery for clean styling and simple interactivity.
Challenges we ran into
Our ML model faced many challenges due to the limited time we had to fine-tune it, but we were still able to reach high accuracy.
Our team initially began with Palantir for the LLM feature extraction and successfully created data pipelines, functions, and actions, but we were unable to integrate them properly into our Flask application. After hours of trying, we decided to take an alternative route.
Accomplishments that we're proud of
- 96% accuracy on the fire dataset with MobileNetV2
- Finished the software end-to-end
- Integration with multiple tools that were new to us (Twilio, OpenAI, Flask)
What we learned
Our team learned a lot about Palantir AIP, AIP Logic, and many more of Palantir's tools. We also learned how to use Twilio for communication and OpenAI's API for LLM integration.
What's next for ARDOF
We plan to deploy our backend software in real-world scenarios, such as cameras and sensors across the world. Moving beyond tests and demos to truly help mitigate the effects of wildfires is ARDOF's major goal.