Inspiration
When emergency calls come in (for example 911-style calls), there’s enormous pressure on dispatchers and officers to quickly interpret, prioritise, and route information to the right responders. We noticed that many dispatch systems still rely heavily on manual interpretation of free-text call summaries, legacy workflows, and human rules. We asked: What if an AI assistant could understand the call content, vectorise it, search relevant context, and then provide smart, actionable recommendations for dispatchers or officers to act on? That question led us to build Vigilis.
What it does
Vigilis consists of two major components:
The core system (repo at github.com/AI-ATL25/Vigilis) that handles:
Ingesting a 911-style call (or summary thereof).
Vectorising the call text (converting to embeddings).
Retrieving relevant context from a knowledge base (previous incidents, geo/contextual data), then using a generative language model (RAG: retrieval-augmented generation) to produce recommendations: e.g., what type of unit to dispatch, an estimated risk level, and a briefing summary of the location/context.
Returning actionable output for dispatchers or officers to act on.
The companion app (repo at github.com/AI-ATL25/Vigilis_App) that provides the user interface:
A mobile UI where a dispatcher or officer can enter or view an incoming call text.
The UI displays the processed insights from the core system: summary of the call, recommended actions/units, risk flags, previous incident context.
It enables interaction: e.g., accepting a recommendation, overriding it, marking it as reviewed, logging the outcome.
This makes the system usable in a realistic scenario rather than just a backend proof-of-concept.
In essence: call → vectorise → retrieve + generate → UI presentation → action.
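The pipeline can be sketched end to end. This is a minimal, illustrative sketch, not the actual Vigilis code: it uses a toy bag-of-words "embedding", a fabricated three-entry knowledge base, and returns the assembled prompt instead of calling an LLM; all names are made up.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding". The real system uses a learned
    # embedding model, but the retrieval step has the same shape.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in knowledge base of prior incident summaries (fabricated).
KNOWLEDGE_BASE = [
    "armed robbery reported at convenience store downtown",
    "noise complaint loud party residential street",
    "vehicle collision two cars intersection minor injuries",
]

def retrieve(call_text: str, k: int = 1) -> list:
    # Rank prior incidents by similarity to the incoming call.
    q = embed(call_text)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def recommend(call_text: str) -> str:
    # In the real pipeline this prompt would go to an LLM; returning
    # the assembled prompt shows the retrieval-augmented structure.
    context = retrieve(call_text)[0]
    return f"CALL: {call_text}\nCONTEXT: {context}\nTASK: recommend units and risk level."
```

Swapping the toy pieces for a real embedding model, a vector store, and an LLM call gives the production shape of the pipeline.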
How we built it
Here’s a breakdown of our tech stack and workflow:
Backend (Vigilis repo):
Developed primarily in Python, implementing ingestion of call content, embedding/vectorisation logic, retrieval of relevant context (via a vector store or a simple indexed store), and generation of recommendations with an LLM.
The repository structure separates modules for embeddings, retrieval, generation, and a service API layer.
We prepared deployment scripts and hosting instructions (e.g., RENDER_DEPLOY.md, Procfile, requirements.txt) to enable deployment on a cloud platform for the hackathon deliverable.
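As an illustration of the deployment setup (the actual file contents may differ), a Procfile for a Python web service on Render typically declares a single web process; the module and app names here are hypothetical:

```
web: uvicorn app.main:app --host 0.0.0.0 --port $PORT
```

Render injects the `PORT` environment variable, and the pinned dependencies in requirements.txt make the build reproducible.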
Frontend/App (Vigilis_App repo):
Built using a modern frontend framework (TypeScript + React/Next.js); the repository language statistics show ~53% TypeScript in the app repo and ~46% Python in the core repo, so we kept a clean separation between UI and backend.
The UI connects via REST/GraphQL endpoints to the backend (or serverless endpoints) of the core Vigilis system.
It handles user flows: entering call text, viewing suggestions, reviewing past logs, overriding or accepting AI recommendations, and presenting contextual information (e.g., location map, prior incidents).
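To make the UI/backend boundary concrete, here is a hedged sketch of the kind of request/response contract the app and the core system could exchange over HTTP. The field names are illustrative assumptions, not the real Vigilis API:

```python
import json

# Hypothetical request from the Vigilis_App UI to the core backend.
request_body = {
    "call_text": "someone outside being aggressive, refuses to leave",
    "location": {"lat": 33.749, "lon": -84.388},
}

# Hypothetical backend response rendered by the UI.
response_body = {
    "summary": "Possible disturbance; subject aggressive, refusing to leave.",
    "recommended_units": ["patrol"],
    "risk_level": "medium",
    "prior_incident_count": 2,
    "status": "ok",  # or "insufficient_info" when the call is too vague
}

# Both sides exchange these shapes as JSON over HTTP.
payload = json.dumps(request_body)
parsed = json.loads(payload)
```

Keeping the contract this small is what lets the UI flows (accept, override, review) stay decoupled from the retrieval/generation internals.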
Integration & Deployment:
In the hackathon timeframe we integrated the backend with the frontend, hosted it in the cloud (Render), seeded a simple knowledge base, and demonstrated a working end-to-end demo.
We built a modular architecture so that additional features (e.g., a mobile version, offline mode, more jurisdictions) can be added post-hackathon.
Challenges we ran into
Data & domain specificity: Getting realistic incident/dispatch data is sensitive and specialised. Our knowledge base had to be synthetic or anonymised, which limited its richness.
Embedding & retrieval quality: Call texts vary widely in quality (incomplete sentences, slang, noise). Tuning embeddings and ensuring the retrieval finds relevant prior incidents/context was tricky.
Latency & real-time constraints: In a dispatch context, delays matter. Ensuring that vector search + LLM generation completes quickly enough (for a demo, and ideally for a real scenario) required optimisation and fallback logic.
UX / Frontend integration: Syncing backend results with the UI, handling edge cases (no recommendation possible, ambiguous call, user override) took more time than expected.
Ambiguity of calls: 911 calls often lack detail (“someone outside being aggressive”). Deciding when the system should flag “insufficient info” vs give a recommendation was a tough design trade-off.
Evaluation & validation: Measuring correctness of a recommendation in this domain is hard — we used scenario-based testing rather than large labelled datasets, which limits claims but was acceptable for the hackathon.
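One concrete way to handle the "insufficient info" trade-off described above is a simple gate before generation. This is an illustrative sketch, not what Vigilis actually ships: the word-count heuristic and the threshold value are made-up assumptions, not tuned on real data.

```python
def triage(call_text: str, retrieval_score: float, threshold: float = 0.35) -> dict:
    # Gate before generation: if the call is too short or retrieval
    # confidence is too low, flag "insufficient info" rather than
    # forcing a recommendation. Both the word-count heuristic and the
    # threshold are illustrative placeholders.
    if len(call_text.split()) < 4 or retrieval_score < threshold:
        return {"status": "insufficient_info",
                "action": "prompt dispatcher for clarifying details"}
    return {"status": "ok", "action": "proceed to LLM recommendation"}
```

For example, `triage("someone outside", 0.8)` falls through to the clarification path, while a more detailed call with a decent retrieval score proceeds to generation.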
Accomplishments that we’re proud of
We built a full-stack prototype: ingesting call content in the app, backend processing, UI presentation of recommendations, and a simple deployment for demo.
We integrated state-of-the-art techniques: vector embeddings + retrieval + LLM generation (the RAG paradigm) in a domain (emergency dispatch) where this approach is less common.
We created a modular architecture: the separate backend and app repos make future expansion (mobile app, jurisdiction-specific modules, offline mode) easier.
We demonstrated, via a few scenario walkthroughs, that the system could plausibly reduce decision time, or at least provide structured support to dispatchers/officers.
In the short hackathon window we achieved a cloud deployment (documented in RENDER_DEPLOY.md, etc.), which is a strong proof of concept for real-world potential.
What we learned
Working in a safety-critical domain (police dispatch) raises the stakes above those of many typical hackathon projects: reliability, clarity, fallback logic, and error handling matter more than pure novelty.
The pipeline “embedding → retrieval → generation” is powerful, but the quality of the knowledge base (incident context, geo data, past outcomes) matters as much (if not more) than the model itself.
UX/frontend and backend must be planned in tandem: if the UI is missing features or is awkward, users won't adopt the system even if the backend is strong.
Time constraints highlight the need for robust defaults, clear error states, and graceful degradation: if the system cannot confidently recommend something, it should say so clearly.
Collaboration across disciplines (ML, backend, frontend, domain experts) is essential: we learned the value of dividing responsibilities, documenting well, and keeping clean interfaces even under pressure.
What’s next for Vigilis
Expand the knowledge base: Incorporate more realistic incident data, geospatial context (neighborhood crime patterns), real-time data feeds (traffic, weather, road closures) so recommendations are richer and more context-aware.
Improve recommendation logic: Add more tailored unit suggestions (e.g., K9, traffic, SWAT), timing/arrival estimates, escalation paths, dynamic prioritisation, and user feedback loops (learnt from dispatcher choices).
User testing & refinement: Partner with domain experts (real dispatchers, officers) to test the prototype, collect feedback on usability, trust in the recommendations, and false positives/negatives, and refine accordingly.
Real-time integration & scale: Move toward connecting with a live or simulated dispatch system, ensure performance/latency/reliability at scale, and possibly build a multi-jurisdiction version.
Explainability & audit logging: In this domain the system must explain why a recommendation was made (e.g., “Because previous incidents within 0.5 mi in the last year involved armed suspects at night”) and log decision outcomes for review.
Broaden platform & deployment: Build mobile/vehicle-mounted UI for officers, offline/low-connectivity mode, support multi-agency coordination (police/fire/EMS).
Safety, ethics & bias mitigation: Audit for potential biases (geographic, demographic), ensure fairness, build guardrails so the AI supports rather than replaces human judgement, and plan for data privacy/security in a real deployment.
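The explainability example above ("previous incidents within 0.5 mi") implies geospatial filtering of prior incidents. A minimal sketch of how that radius query could work, assuming incident records carry lat/lon fields (the field names and function are illustrative, not part of the current codebase):

```python
import math

def haversine_miles(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Great-circle distance between two lat/lon points, in miles.
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_incidents(call_loc: tuple, incidents: list, radius_mi: float = 0.5) -> list:
    # Keep only prior incidents within radius_mi of the call location.
    return [i for i in incidents
            if haversine_miles(call_loc[0], call_loc[1], i["lat"], i["lon"]) <= radius_mi]
```

At scale this filter would move into a geospatial index (e.g., a PostGIS query or a vector store with metadata filtering) rather than a Python loop.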

