Inspiration

Our project originated from two separate ideas: the first, a hivemind-like AI that shares data between several different endpoints, and the second, a way to efficiently play a game of hide-and-seek. We realized that, with a few specialized features and some scaling up, we could merge these ideas into an application for good: coordinating search parties for missing children and search and rescue operations. From there, we continued to brainstorm until we had a working foundation for the application we envisioned.

What it does

Our application allows multiple users to connect to a single database and upload a map of the area they plan on searching. It then breaks the map into sectors (which we call "quadrants") and, considering the number of people in the search party, gives each person individual instructions on how to search the quadrants for their target. Users can indicate that they are moving to a different quadrant or leave voice notes about the quadrant they are currently searching. Once they are done searching a quadrant, the AI gives them instructions on which quadrant to search next, based on the current state of the other users in the search party.
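The quadrant logic above can be sketched roughly as follows. This is a hypothetical illustration, not the app's actual code: the grid split and the round-robin assignment are assumptions about one simple way to do it.

```javascript
// Split a map of width x height into a rows x cols grid of quadrants.
// (Illustrative sketch -- the real app's partitioning may differ.)
function makeQuadrants(width, height, rows, cols) {
  const quadrants = [];
  const qw = width / cols;
  const qh = height / rows;
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      quadrants.push({ id: r * cols + c, x: c * qw, y: r * qh, w: qw, h: qh });
    }
  }
  return quadrants;
}

// Give each searcher a starting quadrant, cycling through the grid
// if there are more searchers than quadrants.
function assignQuadrants(searchers, quadrants) {
  return searchers.map((name, i) => ({
    searcher: name,
    quadrant: quadrants[i % quadrants.length].id,
  }));
}
```

In the real app, the "which quadrant next" decision also weighs the current state of the other searchers rather than a fixed cycle.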

How we built it

For the frontend of our project, we used React and React Native to build a GUI that runs in the Expo Go mobile app. The majority of the frontend is written in JavaScript. It provides simple interfaces for user input, which is sent to the backend hosted on Vercel with Supabase. It also polls the backend for updates every second and plays a corresponding audio file for all users whenever an update arrives.
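The once-per-second polling loop could look something like this. It is a minimal sketch, not the actual frontend code: the `/api/updates` endpoint name, the `since` parameter, and the payload shape are all assumptions for illustration.

```javascript
// Fetch any updates newer than `lastSeen` and hand each one to a callback
// (e.g. to play the attached audio file). Endpoint and fields are hypothetical.
async function pollForUpdates(baseUrl, onUpdate, lastSeen = 0) {
  const res = await fetch(`${baseUrl}/api/updates?since=${lastSeen}`);
  if (!res.ok) return lastSeen; // skip this tick on error; try again next second
  const updates = await res.json();
  for (const u of updates) {
    onUpdate(u);
    lastSeen = Math.max(lastSeen, u.id);
  }
  return lastSeen;
}

// Poll once per second, as the app does.
function startPolling(baseUrl, onUpdate) {
  let lastSeen = 0;
  return setInterval(async () => {
    lastSeen = await pollForUpdates(baseUrl, onUpdate, lastSeen);
  }, 1000);
}
```

Tracking a `lastSeen` id keeps each update from being replayed on every tick, which matters when the "update" is an audio notification.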

The backend is a Next.js server hosted on Vercel, with Supabase as the database. It exposes REST endpoints that let the frontend send and receive audio, updates, and statuses from the ongoing search.
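One such endpoint might be sketched as a Next.js API route like the following. The route name, request fields, and in-memory store (standing in for Supabase) are all hypothetical, chosen only to show the request/response shape.

```javascript
// Hypothetical pages/api/status.js: searchers POST their current quadrant,
// and the frontend GETs everyone's status when it polls.
const statuses = {}; // userId -> { quadrant, updatedAt } (Supabase in the real app)

function handler(req, res) {
  if (req.method === "POST") {
    const { userId, quadrant } = req.body;
    if (!userId || quadrant == null) {
      return res.status(400).json({ error: "userId and quadrant required" });
    }
    statuses[userId] = { quadrant, updatedAt: Date.now() };
    return res.status(200).json({ ok: true });
  }
  // GET: return the whole party's current positions.
  return res.status(200).json(statuses);
}

module.exports = handler;
```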

Challenges we ran into

No one on the frontend team knew any JavaScript going into this hackathon, which made the first few hours of laying the groundwork extremely difficult and kept us from gaining momentum early on. Additionally, only one member of our team had any experience with backend development or React Native, so his availability became a major limiting factor in our progress (he was the only one who could debug efficiently). As a result, several of our originally planned features, including audio input for moving between quadrants, could not be realized in time and ultimately had to be scrapped. Much of the competition became a tough exercise in quick decisions: which features we had time to debug, and when it was better to give up on a feature in order to ship a product.

Accomplishments and what we've learned

Since most of us came here to learn, our main accomplishments lie in the opportunities we've had to learn over the past two days. We picked up several programming languages, integrated APIs into our code, debugged using LLMs, learned some graphic design, and built a mobile app for the first time. Considering that one member of our team learned what frontend development meant the day before the competition, getting a working application out the door is a great accomplishment in itself.

What's next for SPAI

Since we started from scratch and had limited time, we couldn't implement many of the features we had planned. In future versions, we would scale the application well beyond this demo, use GPS instead of a static map image to make navigation easier, support more than one party using the app at once, and add a mechanism for sending messages to individual users rather than the whole search party. Smaller improvements would include a button to call for help and to communicate with particular members of the party. And of course, we would flesh out all of our current features, since we only had time to make them functional.
