AISecOps · aisecops.net

Your AI agent
is acting.
Is it governed?

Agentic AI systems retrieve data, call tools, and take real actions — with your credentials, on your behalf. AISecOps is the emerging discipline for securing them at runtime: policy enforcement, least-privilege tool access, audit trails, and containment.

aisecops-gateway · audit.jsonl
// Retrieval sanitizer — chunk inspected
{
  "event": "retrieval_poisoning_detected",
  "severity": "high",
  "action": "chunk_removed",
  "tenant": "acme-corp"
}

// Tool gateway — policy evaluated
{
  "tool": "send_email",
  "policy_decision": "DENY",
  "reason": "recipient_not_allowlisted",
  "correlation_id": "cid-8821"
}

// Agent continues — uncompromised
aisecops_tool_block_total{policy="email"} 1
prometheus scrape → grafana dashboard ✓

$ _
Framework covers
Prompt Injection Tool Abuse Memory Poisoning Policy Enforcement Audit & Telemetry Multi-Tenant Isolation
The Problem

Agentic AI changed the threat model.
Most teams haven't caught up.

Enterprises are deploying AI agents that browse the web, read email, query databases, and execute code. The traditional security stack was not built for this.

Before AISecOps

Your AI agent calls a tool. You see a log entry. There's no policy engine — the call either succeeds or fails at the API level. You have no visibility into what the model was instructed to do, why it chose that tool, or whether the retrieved context was clean.

With AISecOps

Every tool call passes through a policy gateway. Retrieved context is sanitized before the model sees it. Every decision emits a structured audit event. Prometheus tracks attack velocity. Grafana surfaces anomalies. The agent is governed — not just deployed.

What attackers exploit

Indirect prompt injection via RAG. Tool parameter manipulation. Memory context poisoning. Policy drift as models and prompts evolve. These are not theoretical — they have been demonstrated in production agentic systems.

What this framework provides

A layered control architecture: context validation, capability containment, execution sandboxing, and observability. Open-source reference implementations. An enterprise adoption guide. Threat model mapped to OWASP LLM Top 10.

The Framework

Four layers. No single one is enough.

Securing an agentic AI system requires controls at every layer of the stack — from what data reaches the model to what actions it can take.

L1 Context — Trust Boundaries

Validate and sanitize all external data before it enters the model's context window. Treat every retrieved document, memory chunk, and tool response as untrusted input that must be inspected for injection patterns.

L2 Capability — Least-Privilege Tools

Enforce explicit tool allowlists and parameter validation before any external call is executed. The policy engine — not the model — decides what actions are permitted. Deny by default.

L3 Execution — Sandboxing & Gates

Run high-risk tool execution in isolated environments. Define thresholds for irreversible actions that require human approval before proceeding. Automated containment for detected anomalies.

L4 Observability — Audit & Telemetry

Emit structured security events at every decision point. Export Prometheus metrics for real-time attack visibility. Enable forensic replay and policy version comparison across deployments.

View the Reference Architecture →
Resources

Everything is open and free.

Framework documentation, threat models, reference architecture, and working open-source code. No account required.