# 🛡️ Azure SentinelAI Compliance
## What Inspired Me

Cloud security failures make headlines every week — but the uncomfortable truth is that most breaches aren't sophisticated attacks. They're preventable misconfigurations: a storage bucket left public, a database with encryption disabled, a firewall with all ports open to the internet. I've watched organizations spend months preparing for compliance audits, hiring expensive consultants who arrive with spreadsheets, manually check hundreds of resources, and produce a report that's already outdated the moment it's printed. The average cost of a compliance failure is $5.9 million — and the technology to prevent it has existed for years. It just hadn't been connected together intelligently. That gap is what inspired Azure SentinelAI Compliance.
## What I Built

An autonomous multi-agent AI system that replaces the entire manual compliance audit workflow with a single button click. The system runs four specialized AI agents in a sequential pipeline:

$$\text{Scanner} \rightarrow \text{Compliance} \rightarrow \text{Remediation} \rightarrow \text{Executive}$$

| Agent | Responsibility |
| --- | --- |
| Scanner Agent | Queries Azure Resource Graph for real misconfigurations |
| Compliance Agent | Uses GPT-4 to map findings to ISO 27001, SOC 2, GDPR |
| Remediation Agent | Generates executable Azure CLI + Terraform fixes |
| Executive Agent | Calculates weighted risk score, produces PDF report |

The risk score is calculated using a severity-weighted penalty model:

$$\text{Risk Score} = \max\left(0,\ 100 - \min\left(\sum_{i=1}^{n} w_i,\ 100\right)\right)$$

where the weights are: Critical = 25, High = 15, Medium = 5, Low = 1.

The entire pipeline completes in under 60 seconds. It supports two modes — live mode connecting to a real Azure subscription via `DefaultAzureCredential`, and demo mode that activates automatically when no credentials are provided.
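The severity-weighted penalty model above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the project's actual code; the function and variable names are hypothetical, but the weights and the cap come straight from the formula.

```python
# Severity weights from the write-up: Critical=25, High=15, Medium=5, Low=1.
SEVERITY_WEIGHTS = {"critical": 25, "high": 15, "medium": 5, "low": 1}

def risk_score(findings):
    """Return 100 minus the summed severity penalties, capped at 100.

    `findings` is a list of dicts, each with a "severity" key.
    Unknown severities contribute a penalty of 0.
    """
    penalty = sum(
        SEVERITY_WEIGHTS.get(f.get("severity", "").lower(), 0) for f in findings
    )
    return max(0, 100 - min(penalty, 100))
```

For example, one Critical plus one High finding gives a penalty of 40 and a score of 60, while five or more Critical findings saturate the cap and drive the score to 0.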
## How I Built It

**Backend** — FastAPI (Python) orchestrates the four agents. The Scanner Agent uses the Azure Resource Graph SDK to run Kusto queries against real Azure infrastructure. The Compliance Agent sends each finding to Azure OpenAI GPT-4 with a structured prompt requesting ISO 27001, SOC 2, and GDPR mappings. The Remediation Agent uses a rule engine to generate safe, resource-specific Azure CLI and Terraform commands. The Executive Agent aggregates everything into a risk score and triggers PDF generation via ReportLab.

**Frontend** — React 18 + TypeScript + Vite with Tailwind CSS and shadcn/ui. The dashboard shows a live risk score, a donut chart, per-finding compliance mappings with remediation commands, and a one-click PDF download. A Demo/Live Mode badge tells users exactly which data source is active.

**Infrastructure** — Terraform provisions an Azure App Service (Linux, Python 3.11) for deployment. GitHub Actions CI validates every push with backend dependency installation and frontend build checks.

```
Azure Resource Graph ──► Scanner Agent
                             │ raw findings
Azure OpenAI GPT-4   ──► Compliance Agent
                             │ mapped findings
                         Remediation Agent ──► CLI + Terraform
                             │
                         Executive Agent   ──► Risk Score + PDF
```
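The sequential hand-off between agents can be sketched as a tiny pipeline runner. This is a hypothetical simplification: the class names mirror the write-up, but the `run` interface and the stubbed agent bodies are illustrative stand-ins for the real Azure Resource Graph and GPT-4 calls.

```python
# Minimal sketch of the four-agent sequential pipeline (names illustrative).
class ScannerAgent:
    def run(self, findings):
        # In the real system this runs Kusto queries via Azure Resource Graph;
        # here we return a hard-coded demo finding.
        return [{"resource": "stdemo001", "issue": "public blob access",
                 "severity": "critical"}]

class ComplianceAgent:
    def run(self, findings):
        # In the real system GPT-4 supplies the control mappings.
        for f in findings:
            f["controls"] = ["ISO 27001 A.13.1.1"]
        return findings

def run_pipeline(agents):
    """Each agent transforms the findings list and passes it downstream."""
    findings = []
    for agent in agents:
        findings = agent.run(findings)
    return findings
```

The Remediation and Executive agents would slot into the same list, each enriching the findings rather than replacing them.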
## Challenges I Faced
- **Graceful degradation without crashing.** The biggest engineering challenge was making the system work reliably even when Azure credentials or OpenAI keys aren't configured. Every agent needed try/except boundaries and intelligent fallback logic so the server never crashes — it just switches modes silently.
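The fallback pattern described above looks roughly like this. All names here are hypothetical; the point is the shape: missing credentials or any SDK failure degrades to demo data instead of surfacing a 500.

```python
# Illustrative graceful-degradation boundary (function names are made up).
def demo_findings():
    return [{"resource": "stdemo001", "severity": "high"}]

def live_findings(credential):
    # Stand-in for the real Azure Resource Graph call; in this sketch it
    # always fails, to demonstrate the fallback path.
    raise RuntimeError("no live Azure connection in this sketch")

def scan(credential=None):
    """Return (findings, mode). Never raises: any failure falls back to demo."""
    if credential is None:
        return demo_findings(), "demo"
    try:
        return live_findings(credential), "live"
    except Exception:
        # Auth errors, network errors, SDK errors: switch modes silently.
        return demo_findings(), "demo"
```

The returned mode string is what would drive the Demo/Live badge in the dashboard.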
- **Compliance mapping accuracy.** Getting GPT-4 to return structured, consistent compliance mappings required careful prompt engineering. The model needed to output specific control IDs (like ISO 27001 A.13.1.1) rather than vague descriptions. I solved this by providing the exact JSON schema in the prompt and adding mock enrichment as a reliable fallback.
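A schema-in-the-prompt approach can be sketched as below. The schema fields and prompt wording are hypothetical examples of the technique, not the project's actual prompt.

```python
import json

# Hypothetical response schema embedded verbatim in the prompt, so the model
# is steered toward parseable, control-ID-level output.
SCHEMA = {
    "framework": "ISO 27001 | SOC 2 | GDPR",
    "control_id": "e.g. A.13.1.1",
    "rationale": "one sentence linking the finding to the control",
}

def build_prompt(finding):
    """Build a compliance-mapping prompt that demands schema-conformant JSON."""
    return (
        "Map this Azure misconfiguration to compliance controls. "
        "Respond ONLY with a JSON array of objects matching this schema:\n"
        + json.dumps(SCHEMA, indent=2)
        + "\n\nFinding: "
        + json.dumps(finding)
    )
```

Parsing the response with `json.loads` inside a try/except then gives a natural hook for the mock-enrichment fallback when the model strays from the schema.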
- **Connecting four agents without data loss.** Each agent transforms the findings object and passes it downstream. Ensuring that remediation data, compliance context, and resource metadata all survived the full pipeline without being overwritten required careful dictionary merging and Pydantic model design with `extra="allow"`.
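The non-destructive merge each agent applies can be sketched with plain dicts. The helper name is hypothetical; in the real code this sits alongside Pydantic models configured with `extra="allow"`, so fields an agent doesn't know about pass through validation untouched.

```python
# Illustrative merge rule: downstream agents may add keys, but never
# overwrite metadata set by an upstream agent.
def enrich(finding, updates):
    merged = dict(finding)
    for key, value in updates.items():
        merged.setdefault(key, value)  # existing (upstream) value wins
    return merged
```

With this rule, a Remediation Agent that accidentally emits a `severity` field cannot clobber the Scanner Agent's original severity.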
- **Making demo mode indistinguishable from real.** The demo data needed to be realistic enough that judges couldn't tell it apart from a live scan. I used real Azure resource ID formats, real Kusto query structures, and real compliance control IDs so the demo mode tells a completely authentic story.
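Authentic-looking demo data mostly comes down to honoring Azure's real resource ID shape, `/subscriptions/<guid>/resourceGroups/<rg>/providers/<namespace>/<type>/<name>`. A sketch of such a generator (function name and all values are fake) might look like:

```python
import uuid

def demo_resource_id(rg, provider, rtype, name):
    """Build a fake but correctly shaped Azure resource ID for demo findings."""
    sub = str(uuid.uuid4())  # fake subscription GUID
    return (
        f"/subscriptions/{sub}/resourceGroups/{rg}"
        f"/providers/{provider}/{rtype}/{name}"
    )
```

For example, `demo_resource_id("rg-demo", "Microsoft.Storage", "storageAccounts", "stdemo001")` yields an ID that parses the same way a live scan result would.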
## What I Learned
- Multi-agent architecture isn't just an AI buzzword — it's a genuinely powerful pattern for separating concerns in complex workflows.
- Azure Resource Graph is extraordinarily powerful for cross-subscription security analysis at scale.
- Prompt engineering for structured output is a skill in itself — the difference between a useful compliance mapping and a vague one comes down to schema specification in the prompt.
- Building for graceful degradation from day one makes the difference between a demo that works and one that crashes in front of judges.