Inspiration
Rescura was inspired by the urgent need for accessible, context-aware first aid in high-stakes environments where medical professionals may not be immediately available—such as in wilderness areas or during natural disasters. Existing solutions often lack adaptability, real-time data integration, or support for multi-modal input. We wanted to build a system that could assess medical emergencies, provide accurate treatment guidance, and integrate live alerts—all powered by a collaborative team of AI agents specialized in emergency response.
⸻
What it does
Rescura is a multi-agent AI system that delivers real-time first aid guidance through five specialized agents: Triage, Treatment, Prevention, Follow-up, and Resource. The system supports text, voice, and image input, making it flexible in emergency situations. It integrates real-time natural disaster alerts (via NWS and FEMA APIs) and Amber Alerts, providing localized safety notifications. Rescura is also tuned for wilderness first aid and disaster response, adapting its recommendations to the user's environment, available resources, and the urgency of the situation.
⸻
How we built it
We architected Rescura as a multi-agent orchestration system, with each agent designed as a modular component communicating through a shared context. We built it with the following tools:
- Figma
- React
- OpenAI/GPT-4
- Grok AI
- Llama
- Python
⸻
Challenges we ran into
- Real-time alert integration: Balancing API polling rates, caching, and geographic filtering for timely but non-intrusive notifications was complex.
- Context sharing between agents: Ensuring seamless handoff without losing track of medical urgency or user location took multiple iterations.
- Image interpretation: Building a robust vision pipeline to identify injuries in varying lighting and backgrounds was challenging.
- Privacy and liability: Handling sensitive user data (e.g., location) required us to bake in strong disclaimers and responsible data usage practices.
⸻
Accomplishments that we’re proud of
- Successfully integrated real-time disaster and Amber Alert APIs into an AI-driven medical support tool.
- Created a true multi-agent architecture where each agent specializes in a unique aspect of first aid and emergency care.
- Built support for multi-modal inputs (voice, text, image) to help users communicate even in difficult environments.
- Designed wilderness and disaster-aware protocols, enabling Rescura to offer guidance when infrastructure is compromised.
⸻
What we learned
- Multi-agent systems unlock new possibilities in domain specialization for AI. Breaking down tasks into focused agents improved both accuracy and clarity.
- User input modality matters: emergency situations often make typing difficult, so supporting voice and images was critical.
- APIs can enrich AI: real-time external data (like NWS alerts) adds real-world relevance that makes the AI smarter and more helpful.
- Context is key: keeping a continuous thread through agent interactions is vital to delivering coherent, life-saving guidance.
⸻
What’s next for Rescura
- Mobile app deployment with offline capabilities for use in remote wilderness or post-disaster environments
- Integration with emergency dispatch services to notify professionals if a high-severity case is detected
- Multilingual support for broader accessibility
- Expanded vision models trained specifically on medical injuries and environmental hazards
- Crowdsourced incident reporting to improve real-time situational awareness in disaster zones
- Partnering with non-profits and search-and-rescue teams to pilot Rescura in real-world scenarios
Built With
- css
- figma
- google-maps
- grok
- html
- javascript
- openai
- python