# Proposal: Agent Governance Skill
## Summary
A skill that teaches Claude governance patterns for AI agent systems: policy enforcement, threat detection, trust scoring, and audit trails. The skills collection currently covers creative, technical, and enterprise workflows, but includes nothing focused on agent safety and governance.
## Use Case
When users are building AI agents (using Claude Code, ADK, PydanticAI, CrewAI, OpenAI Agents, etc.), this skill would activate to guide them toward safe patterns:
- Defining governance policies (tool allowlists/blocklists, content filters, rate limits)
- Adding threat detection (data exfiltration, prompt injection, privilege escalation)
- Implementing trust scoring for multi-agent delegation
- Building append-only audit trails for compliance
- Policy composition (org → team → agent layering)
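The tool allowlist/blocklist and policy-composition bullets above could be sketched roughly as follows. This is a minimal illustration, not the skill's actual implementation; the `GovernancePolicy` class, its merge rules (blocklists union, non-empty allowlists intersect), and the `compose` helper are all assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class GovernancePolicy:
    """Hypothetical policy: which tools an agent may call."""
    allowed_tools: set[str] = field(default_factory=set)
    blocked_tools: set[str] = field(default_factory=set)

    def permits(self, tool: str) -> bool:
        # Blocklist wins; an empty allowlist means "allow anything not blocked".
        if tool in self.blocked_tools:
            return False
        return not self.allowed_tools or tool in self.allowed_tools

def compose(org: GovernancePolicy, team: GovernancePolicy) -> GovernancePolicy:
    """Layer policies (org -> team): a tool must pass every layer."""
    if org.allowed_tools and team.allowed_tools:
        allowed = org.allowed_tools & team.allowed_tools   # both restrict: intersect
    else:
        allowed = org.allowed_tools | team.allowed_tools   # only one restricts
    return GovernancePolicy(
        allowed_tools=allowed,
        blocked_tools=org.blocked_tools | team.blocked_tools,
    )

org = GovernancePolicy(allowed_tools={"search", "read_file", "send_email"})
team = GovernancePolicy(blocked_tools={"send_email"})
effective = compose(org, team)
print(effective.permits("search"))      # True
print(effective.permits("send_email"))  # False
```

An agent-level policy would be layered on the same way, giving the org → team → agent chain.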
## Proposed Skill
```markdown
---
name: agent-governance
description: |
  Patterns and techniques for adding governance, safety, and trust controls
  to AI agent systems. Use this skill when building agents that call external
  tools, implementing policy-based access controls, adding threat detection,
  creating trust scoring for multi-agent workflows, or building audit trails.
---

# Agent Governance Patterns

[6 patterns: Governance Policy, Policy Composition, Semantic Intent Classification,
Tool-Level Governance Decorator, Trust Scoring, Audit Trail]

[Framework integration: PydanticAI, CrewAI, OpenAI Agents, Google ADK, LangChain]
```
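To make the Tool-Level Governance Decorator pattern concrete, here is one possible shape. Everything named here (`ALLOWED_TOOLS`, `governed`, `PolicyViolation`) is illustrative and not part of the skill itself; a real integration would read the policy from the layered configuration rather than a module-level set.

```python
import functools

class PolicyViolation(Exception):
    """Raised when a governed tool call is denied by policy."""

# Assumed policy source for the sketch; in practice this would come
# from the composed org/team/agent policy.
ALLOWED_TOOLS = {"search", "read_file"}

def governed(tool_name: str):
    """Wrap a tool function so every call is checked against policy."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if tool_name not in ALLOWED_TOOLS:
                # Fail loudly rather than silently skipping the call,
                # so the violation is visible to the audit trail.
                raise PolicyViolation(f"tool {tool_name!r} is not permitted")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@governed("search")
def search(query: str) -> str:
    return f"results for {query}"

@governed("delete_repo")
def delete_repo(name: str) -> None:
    ...  # never reached: "delete_repo" is not in the allowlist
```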
We've already built this as a Copilot skill (merged as PR #755 in github/awesome-copilot) and can adapt it to the Agent Skills format.
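For the Audit Trail pattern, one common way to get the "append-only" property is hash chaining, where each record commits to the previous one. The sketch below assumes that design; the `AuditTrail` class and its record layout are hypothetical, not taken from the existing Copilot skill.

```python
import hashlib
import json

class AuditTrail:
    """Hypothetical append-only log: each record hashes its predecessor,
    so editing or deleting history breaks the chain on verification."""

    GENESIS = "0" * 64  # placeholder hash before the first record

    def __init__(self):
        self._records = []

    def append(self, event: dict) -> str:
        prev = self._records[-1]["hash"] if self._records else self.GENESIS
        body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        self._records.append({"event": event, "prev": prev, "hash": digest})
        return digest

    def verify(self) -> bool:
        prev = self.GENESIS
        for rec in self._records:
            body = json.dumps({"event": rec["event"], "prev": prev}, sort_keys=True)
            if rec["prev"] != prev:
                return False
            if hashlib.sha256(body.encode()).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

trail = AuditTrail()
trail.append({"agent": "planner", "tool": "search", "decision": "allow"})
trail.append({"agent": "planner", "tool": "send_email", "decision": "deny"})
print(trail.verify())  # True
```

For compliance use the chain head would additionally be anchored somewhere external (e.g. periodically signed or published), since a local list can still be rewritten wholesale.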
## Context
We maintain Agent-OS and AgentMesh Integrations, governance frameworks with integrations for PydanticAI (57 tests), CrewAI, OpenAI Agents, and Google ADK. We have also proposed governance patterns for Google ADK, Genkit, and the A2A protocol.