Defender for AI Agents! A Feature I Completely Missed at Ignite
Defender for AI Agents! Some initial thoughts on Microsoft Agent 365 security, Azure AI Foundry security, and the secure AI agent lifecycle.
I’ll be honest: I thought I’d done a decent job keeping up with the Ignite 2025 firehose. I skimmed the Book of News, watched one or two big keynotes, caught up on the usual Sentinel and XDR updates… and still completely sailed past one of the most important Defender announcements we’ve had in a while: Defender capabilities for AI agents. And yet, at 10 pm, while working on something else entirely, I found the preview tab in Defender! RABBIT HOLE OPENED.
So yes, I’m late to the party. But this is one of those “better late than never, and definitely before 1.3 billion agents land in your estate” moments.
Let’s unpack what this actually is, and more importantly, what it means if you enable it in a real, messy, sprawling enterprise with multiple teams building agents all over the place. (Any large company now!)
Wait, what did Microsoft actually announce?
In the session “From risk to resilience: Secure your AI agents with Microsoft Defender” (BRK264), Microsoft essentially introduced AI agents as a first-class security asset in Defender, with capabilities that cover:
Visibility & posture – inventory of AI agents, misconfigurations, attack paths.
Runtime protection – jailbreak detection, blocking risky tool invocations in real time.
Hunting & investigations – new AI agent–specific tables in Advanced Hunting, with pre-built community queries.
This is all part of a broader story with Microsoft Agent 365 as the control plane for agents, plus Entra for identity and Purview for data security – but Defender is the bit that really matters to your SOC and security engineering teams.
The headline stat they anchored this on: IDC estimates 1.3 billion AI agents will be created by 2028. That’s effectively like dropping the population of India or China into your workforce, but made of bots.
And those agents aren’t just “chatbots.” They’re:
Customer service agents triaging tickets
Sales agents updating CRM
Agents reading mailboxes and firing workflows
Code-focused agents using tools and repositories
All of them are sitting on top of sensitive data and powerful tools.
Why agents change the threat model (and why this isn’t just another dashboard)
The session does a good job of making one point very clear: agentic AI introduces new attack surfaces that don’t look like traditional malware or “bad IP” events.
A few of the real-world examples they called out:
ShadowLeak – an email-reading AI agent that exfiltrated data, not because of a phishing click, but because it misinterpreted hidden instructions in the content it was reading.
Salesforce AgentForce data exposure – CRM agents leaking confidential customer data via prompt abuse.
Anthropic / Claude “code manipulation” case – where attackers effectively hacked the agent’s “brain” via crafted instructions rather than the infrastructure.
The big takeaway:
Attackers don’t need to compromise servers if they can just convince your agent to misuse its tools.
That’s why Defender’s approach here is split into two big buckets:
Start secure (posture) – make sure agents are built and configured safely in the first place.
Stay secure (runtime) – watch what agents are actually doing and block them when they go off the rails.
So what happens if you enable this in a sprawling environment?
Let’s get practical. Say you’ve got:
Copilot Studio agents built by business teams
“Code-first” agents built on Foundry by devs
A mix of clouds (Azure, maybe some AWS/GCP for data or tools)
A bunch of shadow agents you probably don’t know about yet
What actually changes when you turn on these new Defender capabilities?
1. Admins finally get “one pane of glass” for agents
Defender introduces a unified AI agent inventory alongside your existing assets (identities, endpoints, etc.). You can:
See all agents across Copilot Studio, Foundry and other integrated platforms in one place.
Sort and prioritize agents by risk, not just by name or owner.
Drill into an agent and see:
Which tools it can invoke (e.g., data access MCP servers, email senders)
What data it’s grounded on
Which other agents it calls
Whether Exposure Management has flagged it as a critical asset
In a large estate, that’s huge. It turns “random AI projects” into discoverable, classifiable, governable assets.
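As a taste of what that inventory unlocks, here’s a minimal Advanced Hunting sketch for ranking agents by tool reach. To be clear: AIAgentInfo and its columns (ToolCount, GroundingDataSources, etc.) are placeholder names I’ve invented for illustration – check the schema reference in your tenant for whatever the preview tables are actually called.

```kql
// Hypothetical: rank agents by how many tools they can invoke.
// AIAgentInfo and its columns are assumed names, not confirmed schema.
AIAgentInfo
| summarize arg_max(Timestamp, *) by AgentId   // latest snapshot per agent
| project AgentId, AgentName, Platform, Owner, ToolCount, GroundingDataSources
| sort by ToolCount desc
```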
2. Posture becomes a shared responsibility between builders and security
Defender surfaces agent-specific security recommendations – things like:
“This agent invokes another agent but has no explicit instruction on how to use it”
“This agent is grounded on sensitive data but is missing guardrails”
“This MCP server has broader permissions (read/write/modify) than the agent realistically needs”
The clever bit: those same recommendations turn up in the builder platforms (e.g., Foundry), not just inside Defender. So instead of security sending a giant PDF, the dev/creator sees issues where they work, and the SOC sees them in their world.
This pushes you towards an agent DevSecOps model by default.
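If you want to poke at this yourself, the Exposure Management graph tables (ExposureGraphNodes / ExposureGraphEdges) already exist in Advanced Hunting today. Here’s a rough sketch for pulling out agent-shaped nodes – the label filter is my guess, so list the distinct NodeLabel values in your tenant first:

```kql
// ExposureGraphNodes is an existing Advanced Hunting table; the label
// filter below is an assumption -- inspect NodeLabel values first.
ExposureGraphNodes
| where NodeLabel contains "agent"
| project NodeName, NodeLabel, Categories, NodeProperties
```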
3. Attack paths now include agents – not just VM → DB
Defender’s attack path analysis pulls agents into the graph:
Agents
Their identities
The containers/pods they run in
The repos and images that define them
The cloud resources and data they touch
You end up with attack paths like:
Internet-exposed container → managed identity → AI agent → sensitive data tool → exfiltration
That’s very different from a classic “compromised user → mailbox” storyline, and it’s exactly the kind of thing you’re going to need to reason about as agents get closer to business-critical workflows.
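You can already explore this graph shape via ExposureGraphEdges (also a real Advanced Hunting table today). A sketch for surfacing one hop of the graph around agent-like nodes – again, the label match is an assumption to validate in your own tenant:

```kql
// Walk one hop of the exposure graph around agent-like nodes.
// ExposureGraphEdges exists today; the label filter is a guess.
ExposureGraphEdges
| where SourceNodeLabel contains "agent" or TargetNodeLabel contains "agent"
| project SourceNodeName, SourceNodeLabel, EdgeLabel, TargetNodeName, TargetNodeLabel
```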
What changes for the SOC?
From a SOC perspective, this isn’t a “nice to have”: it’s a new class of signals you’ll need to own.
1. New alerts and incidents tied to agents
Defender can now:
Detect jailbreak attempts against agents (e.g., hidden instructions in emails or CRM fields)
Block malicious tool invocations in real time (e.g., stopping an agent from sending external emails or writing to a datastore)
Correlate multiple agent-related alerts into a single incident graph that:
Shows the attacker’s sequence of attempts
Highlights the affected agents
Surfaces the tool invocations that were blocked
So your incident queue stops being “just endpoints and identities” and starts including agent-centric incidents with rich context.
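To get a feel for the volume before you wire up any automation, here’s a quick triage sketch over the existing AlertInfo / AlertEvidence tables. The title keywords are my guess at how agent detections will be named – adjust once the first real alerts land in your tenant:

```kql
// Triage sketch over existing alert tables; the keyword filter is an
// assumption about how agent detections will be titled.
AlertInfo
| where Timestamp > ago(30d)
| where Title has_any ("AI agent", "jailbreak", "prompt injection")
| join kind=inner AlertEvidence on AlertId
| summarize EvidenceTypes = make_set(EntityType), Severity = take_any(Severity)
          by AlertId, Title, Timestamp
| sort by Timestamp desc
```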
2. New hunting tables and AI agent queries
They’ve added agent-focused tables into Advanced Hunting – for example, agent info and activity – plus a set of community queries to help you:
Find agents with excessive permissions
Identify agents grounded on sensitive data without adequate controls
Spot misconfigured or “risky” agents by pattern
This is where your hunters and SIEM engineers will start to build patterns and baselines for AI behaviour, not just user behaviour.
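Here’s a sketch of the kind of baseline query I mean, joining the new info and activity tables. I’m using AIAgentInfo / AIAgentActivity as stand-in names – substitute whatever the preview tables are actually called in your schema reference:

```kql
// Flag agents that touched an unusually wide set of tools this week.
// Table and column names are placeholders for the preview schema.
AIAgentActivity
| where Timestamp > ago(7d)
| summarize ToolCalls = count(), Tools = make_set(ToolName) by AgentId
| join kind=inner (
    AIAgentInfo
    | summarize arg_max(Timestamp, AgentName, Owner) by AgentId
  ) on AgentId
| where array_length(Tools) > 5   // arbitrary starting threshold -- tune it
| project AgentId, AgentName, Owner, ToolCalls, Tools
```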
3. Runbooks need to include “what if the agent is the problem?”
Traditionally, the playbook ends with:
Block user
Isolate device
Kill process
With agents, you now have additional actions like:
Unpublish or disable the agent
Revoke or reduce MCP tool permissions
Quarantine the agent in Defender (this is something they’ve hinted at in future capabilities, aligned to how they do automated attack disruption for non-human identities).
Your SOC capability is going to need to catch up – fast.
Do we need to think about Sentinel telemetry and custom rules now?
Short answer: yes – if you’re a Sentinel shop, this should already be on your roadmap.
Defender is adding AI agent events into its own Advanced Hunting and incident pipeline. From there, you’ve got a few patterns to think about:
Incidents into Sentinel
Many orgs already use the Defender → Sentinel incident connector.
Agent incidents (jailbreak attempts, blocked tool invocations, misconfigurations leading to risk) become just another incident type you can correlate with everything else – identity, endpoint, OT, etc.
Raw telemetry into Sentinel
As agent-centric tables become available through Defender’s data export or native connectors, you should treat them like any other critical log source:
Map them into your data schema strategy
Plan retention based on risk/value, not “log everything forever”
Decide which events are high-signal enough to justify analytics rules
Custom analytics rules for AI agents
Here are a few I’d be looking at designing early (there’s a sketch of the first one after this list):
Repeated jailbreak attempts against a single agent
If one agent is constantly being probed, it’s a canary – or a high-value target.
Agents invoking sensitive tools outside normal behaviour
E.g., customer support agent suddenly bulk-reading “VIP customers” or exporting unusual datasets.
Agents with powerful MCP tools but no clear business owner
Shadow IT, but for agents.
Cross-signal correlation
Agent alert + unusual user sign-in + data exfil signal = much higher priority than any of those alone.
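To make the first of those concrete, here’s the “repeated jailbreak attempts” idea as a scheduled-query sketch. Everything schema-wise here (AIAgentActivity, the ActionType value, the threshold) is an assumption, not documented behaviour – treat it as a shape to adapt, not a rule to copy:

```kql
// Hypothetical scheduled rule: >=3 jailbreak-style detections against the
// same agent within an hour. All names and thresholds are assumptions.
AIAgentActivity
| where Timestamp > ago(1h)
| where ActionType == "JailbreakAttemptDetected"   // assumed event name
| summarize Attempts = count(), Sources = make_set(SourceIdentity)
          by AgentId, bin(Timestamp, 1h)
| where Attempts >= 3
```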
This isn’t about flooding Sentinel with yet more tables. It’s about deciding which AI events actually matter for detection engineering – and treating them with the same discipline you (hopefully) apply to your other high-value logs. That’s a tough call, and one worth making deliberately rather than by default.
How I’d roll this out without creating chaos
If you’re thinking “this sounds great but my environment is already wild,” here’s a pragmatic way to approach it.
1. Start with a discovery sprint
Turn on the AI agent inventory in Defender.
Pull a list of:
Agents
Owners
Tool permissions
Data sources
Sit down with platform / app owners and ask the simple question:
“Which of these are business-critical, and which are ‘experiments that got out of hand’?”
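One query worth running on day one of that sprint: agents with no recorded owner. Placeholder schema again – AIAgentInfo, Owner and CreatedTime are invented names for whatever the preview actually exposes:

```kql
// Shadow-agent finder: anything in the inventory with no accountable owner.
// AIAgentInfo and its columns are placeholder names for the preview schema.
AIAgentInfo
| summarize arg_max(Timestamp, *) by AgentId
| where isempty(Owner)
| project AgentId, AgentName, Platform, CreatedTime
```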
2. Define your “tier 0” agents
Not all agents are equal. Decide which ones are:
Exposed to external input (email, web forms, support tickets)
Connected to sensitive data or powerful tools
In production workflows (not just POCs)
Those become your Tier 0 / crown jewel agents, and they get:
Stricter posture requirements
Tighter MCP permissions (least privilege, genuinely enforced)
Dedicated monitoring and response playbooks
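One crude but effective way to bootstrap that tiering is to score each agent against the three criteria above. A sketch, with every column name (AcceptsExternalInput, SensitiveDataAccess, IsProduction) invented purely for illustration:

```kql
// Rough tier-0 triage: score agents on external exposure, sensitive data
// access, and production status. All column names are invented.
AIAgentInfo
| summarize arg_max(Timestamp, *) by AgentId
| extend Tier0Score = toint(AcceptsExternalInput)
                    + toint(SensitiveDataAccess)
                    + toint(IsProduction)
| where Tier0Score >= 2
| project AgentName, Owner, Tier0Score, AcceptsExternalInput,
          SensitiveDataAccess, IsProduction
```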
3. Wire agents into your SOC workflows
Enable the Defender agent protections and incident integration.
Update your triage process to recognise:
“AI jailbreak attempt”
“Malicious tool invocation blocked”
Build or import the hunting queries that surface risky agents.
Design 1–2 Sentinel analytics rules specifically around high-risk agent scenarios (start small – don’t try to boil the ocean).
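For that second analytics rule, a “new behaviour” pattern is a good starting shape: flag an agent invoking a tool it hasn’t touched in its own 30-day baseline. Same caveat as before – AIAgentActivity and its columns are stand-ins for the real preview schema:

```kql
// Sketch: agent calls a tool absent from its own 30-day baseline.
// Placeholder table/column names; tune the windows to your environment.
let baseline =
    AIAgentActivity
    | where Timestamp between (ago(31d) .. ago(1d))
    | distinct AgentId, ToolName;
AIAgentActivity
| where Timestamp > ago(1d)
| join kind=leftanti baseline on AgentId, ToolName
| summarize NewToolCalls = count(), FirstSeen = min(Timestamp) by AgentId, ToolName
```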
4. Add agents to your security reviews and change processes
Make sure new agent projects can’t go live without:
An Entra identity / registry entry (via Agent 365 as it matures)
A risk assessment and owner
A Defender posture review (just like you’d do for a new critical app)
Include agents in your regular exposure management reviews and threat modelling.
Wrapping up
I definitely missed this announcement in the noise of Ignite week, but having gone back through the session, I’d put this firmly in the “quietly massive” category.
We’re about to live in a world where agents are everywhere – reading mailboxes, updating CRMs, calling APIs, dropping tickets into queues. If we keep treating them as “just another Copilot experiment,” we’re going to get caught out.
Defender’s new AI agent capabilities don’t magically solve everything, but they give us three things we’ve badly needed:
A proper inventory of agents
Posture and attack path analysis that actually understands agent risk
Runtime detections and hunting that treat agents as first-class citizens in our SOC
If you’re running Sentinel and Defender today, this is the moment to start threading AI agent telemetry into your logging strategy, detection engineering, and incident handling, not bolt it on later as an afterthought.
If you’ve already turned this on, or you’re piloting it:
What have you found in your environment?
Any surprises, false positives, or “wow, that agent really shouldn’t have had that permission” moments?
Hit reply or share your experiences – this is an area where we’re all going to be learning fast, and I suspect the community war stories will be as valuable as the product docs.
Catch you soon!
Tags
#MicrosoftSecurity
#MicrosoftLearn
#CyberSecurity
#MicrosoftSecurityCopilot
#Microsoft
#MSPartnerUK
#msftadvocate
WordCloud
Secure AI Foundry
Secure AI Agent Foundry
AI Security Foundry
Secure Agentic AI
Enterprise AI Agent Security
Secure AI Operations
Secure AI at Scale
AI Safety by Design
AI-SecOps
AI-native SecOps
Microsoft Defender for AI agents
Microsoft Defender for AI security
Microsoft Agent 365 security
Secure Microsoft AI agents
Microsoft AI agent posture management
Defender AI agents Ignite
Microsoft secure AI development
Azure AI Foundry security
Secure Azure AI Foundry patterns
Azure AI agent security best practices
Brilliant writeup on something I totally missed too. The shift from treating agents as "apps" to treating them as identities with attack paths is exactly the kind of paradigm shift security teams need to internalize quickly. I had a similar experience pulling apart a customer support agent last month and realizing it could basically read anything in SharePoint with no real audit trail. The idea of agents as tier-0 assets alongside domain controllers is gonna take some convincing, but it's the right frame.