Building an Agentic Workforce with Microsoft Security Copilot
The 8am Alert Flood, Reimagined
It’s 8:00am. You sit down with that first cup of coffee, tap in your absurdly long password, and the overnight alerts start stacking up before you’ve even opened your calendar.
If you run a SOC (Security Operations Centre) or you’re the person everyone rings when identity goes sideways, you’ll know the feeling. Too much to do, too little time — and the work that matters most is usually buried under the work that’s simply loudest.
What I’ve been watching closely is the shift from “AI as a chat box” to “AI as a set of agents that actually do the drudge work”. Microsoft’s framing for this is Microsoft Security Copilot agents: an agentic workforce that shows up where teams already work, and takes on the repetitive tasks that quietly drain capacity.
The idea: agents that meet you where you work
The best design decision here is a simple one: don’t make people context-switch.
Security Copilot agents are positioned to live inside the tools teams already open all day:
Microsoft Defender (SOC work)
Microsoft Entra (identity and access)
Microsoft Intune (endpoint management)
Microsoft Purview (data security)
Under the covers, the foundation is Microsoft Sentinel, pulling together signals at huge scale from thousands of sources, so the agent has enough context to reason across what would otherwise be separate consoles and separate teams.
And the overall “agentic workforce” comes in three flavours:
Microsoft-built agents: ready to use in the workflow, covering the first 14 scenarios Microsoft prioritised from customer pain points.
Custom-built agents: you can create your own in low-code, no-code, or pro-code styles, depending on your comfort level and how bespoke your process is.
Partner-built agents: extend across a huge tool stack via the Microsoft Security Store, which is now generally available.
That last point matters in the real world: almost nobody runs a single-vendor security estate.
Start where the pain is: phishing triage in the SOC
If you want a quick test of whether “AI cybersecurity tools” are helping or just adding noise, look at phishing.
Most organisations encourage users to report anything suspicious. Great for coverage… brutal for the queue. It often means hundreds of emails per day, and someone in the SOC wades through them to find the handful that genuinely matter.
The Phishing Triage Agent in Microsoft Defender is aimed squarely at that pain.
What stands out to me architecturally
It’s not just “the agent decides”. The workflow is designed for trust-building:
The agent produces a plain-English incident summary: what it saw, what evidence it used, and what judgment it reached.
It shows an activity map of the logical steps it took — things like message analysis, header inspection, detonation, and link reputation checks.
You can drill into any step to see the action, the evidence, and the verdict.
That transparency is how you turn AI from a novelty into an operational tool. Analysts don’t need magic. They need something they can challenge, audit, and teach.
Teaching the agent your organisation’s reality
One of the most practical moments is the ability to correct classification.
If the agent flags something as malicious but you recognise it as a training email from a vendor, you can change it to “not malicious” and tell the agent to treat that sender as a false positive going forward. The agent then applies that feedback next time.
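That feedback loop is easier to reason about with a concrete sketch. The snippet below is purely illustrative (the function names, message fields, and override structure are my own, not the product's API); it shows the pattern of analyst overrides taking precedence over the agent's own verdict on subsequent runs.

```python
# Hypothetical sketch of the feedback pattern described above: an analyst
# override list that downgrades known-benign senders on future triage runs.
# Names and structures here are illustrative, not the product's real API.

def model_verdict(message):
    # Stand-in for the agent's real analysis (headers, links, detonation...).
    return "malicious" if "urgent-reset" in message["subject"].lower() else "benign"

def triage(message, overrides):
    """Return a verdict, applying analyst feedback before the model's call."""
    sender = message["sender"].lower()
    if sender in overrides:
        return overrides[sender]      # analyst-taught verdict wins
    return model_verdict(message)     # otherwise, the agent's own judgment

overrides = {}
msg = {"sender": "training@phishsim-vendor.example",
       "subject": "URGENT-RESET your password"}

print(triage(msg, overrides))  # conservative by default: "malicious"

# Analyst corrects the classification, as in the workflow above:
overrides["training@phishsim-vendor.example"] = "benign"
print(triage(msg, overrides))  # next time: "benign"
```

The important design property is that the override is explicit and inspectable, which matches the transparency theme above: you can always see why a verdict changed.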
Crucially, the model here isn’t pretending agents are perfect. The design assumes they’ll be conservative — a bit pessimistic — and that they need to learn what “normal” looks like in your environment.
The impact numbers being cited
In Microsoft webinars, the examples shared included:
Organisations were seeing 1,500+ user-submitted phishing alerts per week, with 31% left uninvestigated before agents.
With the Phishing Triage Agent: 78.2% faster triage and 77% more accurate verdicts were reported.
In another measure, the agent helped users identify 6.5x more malicious email, described as a 550% efficiency gain for SOC analysts.
A customer example referenced 200 hours saved per month on phishing triage.
Even if you treat the percentages cautiously (and you should), the operational shape is what matters: fewer clicks, fewer queues, more time for the messy work humans are still best at.
Identity next: Conditional Access optimisation without the fear
Once phishing gets through, attackers typically go after identity. And identity admins live with a particular kind of anxiety: Conditional Access policies are easy to add, hard to remove, and terrifying to change at scale.
The Conditional Access Optimization Agent in Microsoft Entra is pitched as something like a “Zero Trust consultant that works for you” — but the useful part is what it does in practice:
Gives an executive-style summary: number of policies, gaps found, and recommendations.
Helps identify where policies overlap, where gaps exist, and where enforcement (like MFA under risky conditions) isn’t consistent.
The bit I’d underline for governance teams
It’s designed to support phased rollout rather than “click apply and pray”.
It leans on Conditional Access report-only mode, letting you see impact before enforcing, and then encourages rolling out in rings: pilot group first, then gradually expanding.
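Report-only mode is a real Conditional Access concept, and in Microsoft Graph it corresponds to a policy state of `enabledForReportingButNotEnforced`. As a sketch of what a ring-0 pilot policy payload might look like (the group ID is a placeholder, and you should check the current Graph `conditionalAccessPolicy` documentation before relying on exact field names):

```python
import json

# Sketch of a Conditional Access policy payload, shaped for Microsoft Graph's
# POST /identity/conditionalAccess/policies endpoint. The key rollout detail
# is state = "enabledForReportingButNotEnforced" (report-only), scoped to a
# pilot group first. The group ID below is a placeholder.
PILOT_GROUP_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

policy = {
    "displayName": "Ring 0 (pilot): require MFA for all cloud apps",
    "state": "enabledForReportingButNotEnforced",  # observe impact, enforce later
    "conditions": {
        "users": {"includeGroups": [PILOT_GROUP_ID]},
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

print(json.dumps(policy, indent=2))
```

Promoting the policy from report-only to enforced is then a deliberate state change per ring, not a single estate-wide flip.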
And it doesn’t force admins to live in yet another portal: the agent can reach out in Teams, and you can ask it why it made a recommendation and what data it relied on.
The impact
The before/after examples Microsoft has shared widely include:
Orgs with the agent: 43% faster completion of Conditional Access tasks.
Detection of missing Zero Trust baseline policies improved by 77% on average in one example, and by 204% in a referenced randomised controlled trial (October 2025).
The exact uplift is less interesting than the operating model shift: you’re reducing “approval by fatigue” and making policy work more deliberate.
Devices: the offboarding everyone delays
Ask a room of security leaders if they’re 100% sure every device in the estate should be there, and you’ll get awkward laughter.
Device offboarding is high ROI and chronically under-done, partly because it’s fiddly: queries, data retention assumptions, multiple tools, manual steps.
The Device Offboarding Agent in Microsoft Intune goes after that by letting you ask in plain language for something like:
“Show me personal iOS devices that haven’t checked in for 30 days.”
It can generate the rule, recommend next steps, provide summaries, and even allow you to download a device list for deeper investigation. And once you trust it, it offers one-click offboarding across the relevant apps.
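The query logic behind that plain-language request is straightforward to picture. The sketch below simulates it locally over a small fleet; the field names loosely mirror Intune's `managedDevice` resource (`ownerType`, `operatingSystem`, `lastSyncDateTime`), but this is an illustrative stand-in, not a Graph API call.

```python
from datetime import datetime, timedelta, timezone

# Local simulation of "personal iOS devices that haven't checked in for 30
# days". Field names loosely mirror Intune's managedDevice resource, but
# this is an illustrative sketch, not a Graph API call.
def stale_personal_ios(devices, days=30, now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    return [
        d for d in devices
        if d["ownerType"] == "personal"
        and d["operatingSystem"] == "iOS"
        and d["lastSyncDateTime"] < cutoff
    ]

now = datetime(2026, 1, 15, tzinfo=timezone.utc)
fleet = [
    {"id": "A", "ownerType": "personal", "operatingSystem": "iOS",
     "lastSyncDateTime": datetime(2025, 11, 1, tzinfo=timezone.utc)},
    {"id": "B", "ownerType": "company", "operatingSystem": "iOS",
     "lastSyncDateTime": datetime(2025, 11, 1, tzinfo=timezone.utc)},
    {"id": "C", "ownerType": "personal", "operatingSystem": "iOS",
     "lastSyncDateTime": datetime(2026, 1, 10, tzinfo=timezone.utc)},
]
print([d["id"] for d in stale_personal_ios(fleet, days=30, now=now)])  # ['A']
```

The value of the agent is that nobody has to hand-write or maintain this plumbing; the risk model stays the same, but the query-to-action gap shrinks.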
That “one-click offboarding” idea is more than convenience. It’s how you reduce the time between knowing something is risky and actually removing it.
When Microsoft-built isn’t enough: building your own agents
Most mature security teams have slightly odd processes — ticketing rules, reporting formats, escalation paths, or internal approval steps that don’t match anyone else’s.
The custom-built approach described is intentionally approachable:
Describe what you want in natural language (no-code is a first-class option).
Choose data sources and connect tools via Model Context Protocol (MCP).
Use an “auto tune” feedback loop to improve prompt quality and performance characteristics like intent resolution and relevance.
End up with a YAML manifest you can open, customise, and manage more like code.
Once published, your agent sits alongside the others in your environment, rather than becoming yet another “special project” that nobody remembers to use.
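To make the "manage it like code" idea tangible: I haven't seen the real manifest schema published in detail, so treat the following as a purely hypothetical sketch of the shape such a definition might take. Every field name here is my own invention.

```yaml
# Purely illustrative sketch: field names are hypothetical, not the
# product's actual manifest schema.
name: ticket-escalation-triage
description: >
  Reviews new SOC tickets, applies our internal escalation matrix,
  and drafts a summary for the on-call lead.
instructions: |
  1. Read the ticket and classify severity against the escalation matrix.
  2. Recommend an owner; never assign automatically.
data_sources:
  - sentinel-workspace
tools:
  - type: mcp
    server: internal-ticketing   # connected via Model Context Protocol
permissions:
  mode: recommend-only           # least privilege by default
```

The practical win of a file like this is that agents become reviewable artefacts: they can live in source control, go through pull requests, and be diffed when behaviour changes.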
Partners and the Security Store
Because most organisations run mixed tooling, the partner ecosystem is positioned as a marketplace inside the flow of work.
An example is the Watchtower Agent by BlueVoyant. It can be deployed via the Microsoft Security Store, then configured and run with the same patterns: consistent setup, clear results, prioritised issues, and recommended mitigation steps.
Also worth noting: agents can be scheduled, run manually, or triggered by an event — which is exactly what you want if you’re trying to build repeatable controls rather than heroic response.
Where this goes next: from task agents to job orchestration
The most forward-looking idea is the split between:
Task agents: experts in one thing (triage, policy gap detection, device cleanup).
Job agents: orchestrators that pull multiple task agents together to complete an end-to-end objective.
A Zero Trust posture “job agent”, for example, could:
Identify and prioritise policy gaps (task agent)
Drive approvals through change management (task agent)
Produce an executive summary and close tickets (task agent)
The point isn’t replacing people. It’s reducing the operational friction between silos so that “security is a team sport” becomes something you can actually run day to day.
The practical implication for leaders
There was one big commercial/operational note in 2025: Security Copilot is now included for all Microsoft 365 E5 customers. Rollout began for some tenants in December 2025 and continues over the coming months, with 30 days' notice before it reaches a given tenant.
If you’re responsible for governance, that’s your cue to get ahead of it. Not with fear — with design:
Decide where agents are allowed to act automatically vs where they must recommend only.
Treat agent identities and permissions like human roles: least privilege, clear accountability.
Build feedback loops so the system learns what “normal” means in your organisation.
Use phased rollout patterns (pilot groups, rings, report-only) for anything that touches access or enforcement.
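The first two principles above can be made concrete with a default-deny action policy. This is a hypothetical sketch of the governance pattern, not any product's configuration: an explicit map of which agent actions may run automatically, with everything not listed falling back to recommend-only.

```python
# Hypothetical sketch of an agent-action governance policy: explicit allows
# for automatic execution, default-deny (recommend-only) for everything else.
ACTION_POLICY = {
    "phishing.quarantine_message": "automatic",
    "device.offboard": "recommend_only",
    "conditional_access.enable_policy": "recommend_only",
}

def decide(action):
    """Least privilege: unknown actions are never executed automatically."""
    return ACTION_POLICY.get(action, "recommend_only")

print(decide("phishing.quarantine_message"))  # automatic
print(decide("identity.reset_password"))      # recommend_only (not listed)
```

The point of writing it down this explicitly, in whatever tooling you actually use, is accountability: the list of automatic actions is something you can review, audit, and defend.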
A quiet takeaway to sit with
The line that stuck with me in 2025 was the simplest: agents take away the boring work first.
If you can reclaim even a slice of time from queues and repetitive checks, you give your people room to do the work that actually improves resilience — threat chasing, hardening, and the messy judgment calls that don’t fit into neat automation.
Over the next few months, I’d be asking one question over that morning coffee: what’s the most soul-destroying, repeatable security task in our week — and what would it look like if an agent owned it end to end?
Got an idea? Post it in the comments and we'll try to build a few!
That’s all for now!
2026, the year of Security Copilot and agents, has started!
#MicrosoftSecurity
#MicrosoftLearn
#CyberSecurity
#MicrosoftSecurityCopilot
#Microsoft
#MSPartnerUK
#msftadvocate



