Inspiration

When researching ways to improve first responders' workflows, we noticed that dispatchers still manually type in call information. We saw this as a weak spot: it wastes time that could instead let operators shift their focus toward more complex emergencies.

What it does

Our website has two parts. First, it takes in audio from the user and extracts key details: the type of emergency, the location, the caller's condition, and the time of the emergency. Second, users can upload images so operators can better understand the situation and pass critical information along to first responders.

How we built it

To build our scripts, we used Python and a variety of APIs. Deepgram takes in audio and transcribes it to text. Google's Gemini 1.5 Flash then extracts the critical information through our prompt and updates the variables every ~2 seconds. Flask routes our Python methods to specific URLs to define the user experience. For our front end, we used HTML, CSS, and JavaScript to design the site. We did use AI to help with our coding.
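The extraction step above can be sketched roughly as follows. This is a simplified illustration, not our exact code: the field names, prompt wording, and helper functions here are hypothetical, and the transcript is assumed to arrive from Deepgram already converted to text. The prompt would be sent to Gemini 1.5 Flash (e.g. via `model.generate_content`) every ~2 seconds, and the reply parsed back into the variables the site displays.

```python
import json
import re

# Hypothetical field names for the structured data we ask the model to extract.
FIELDS = ["emergency_type", "location", "condition", "time"]

def build_prompt(transcript):
    """Wrap the live transcript in an instruction asking for JSON only."""
    return (
        "Extract the following fields from this emergency call transcript "
        f"and reply with JSON only, using the keys {FIELDS}. "
        "Use null for anything not yet mentioned.\n\n"
        f"Transcript: {transcript}"
    )

def parse_response(reply_text):
    """Pull the JSON object out of the model's reply, tolerating extra prose."""
    match = re.search(r"\{.*\}", reply_text, re.DOTALL)
    if not match:
        # Nothing parseable yet: return empty fields so the UI shows blanks.
        return {field: None for field in FIELDS}
    data = json.loads(match.group(0))
    return {field: data.get(field) for field in FIELDS}
```

In the real app, a Flask route would call `build_prompt` on the latest transcript, send it to the model, and merge `parse_response`'s output into the page shown to the operator.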

Challenges we ran into

We had some issues building the website, as we weren't very experienced initially. We also had trouble prompting Gemini so that it would produce an accurate response while the user is still talking.

Accomplishments that we're proud of

Getting the website mostly functional, and learning how to use Flask and JavaScript to build our backend and frontend.

What we learned

We learned how to connect the front end and back end, and how to use APIs to receive data and present it to the user.

What's next for First Responder Hub

Improve the UI/UX design, and add the ability to offer specific response plans based on the emergency type, location, and time of the emergency.
