Proposal: Governance Middleware for PydanticAI
Problem
PydanticAI brings type safety and structured outputs to agent development, but currently lacks a built-in governance layer for enforcing safety policies on agent tool use and actions. Given that PydanticAI is Pydantic-native, governance policies expressed as Pydantic models would be a natural, type-safe extension.
What we've built (Apache-2.0)
Agent-OS governance is already Pydantic-based:
- GovernancePolicy (Pydantic dataclass) - max_tokens, max_tool_calls, blocked_patterns with PatternType enum
- PatternType - Substring/regex/glob pattern matching with pre-compilation
- Semantic intent classifier - 9 threat categories, deterministic, no LLM dependency
- Event hooks - POLICY_CHECK, POLICY_VIOLATION, TOOL_CALL_BLOCKED
- YAML import/export - Policies as YAML files alongside code
- Policy diff/comparison - is_stricter_than(), format_diff()
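To make the shape of this concrete, here is a minimal stdlib sketch of the policy surface listed above. The real GovernancePolicy is Pydantic-based; the field and method names below mirror the list, but the bodies are illustrative assumptions, not the Agent-OS implementation:

```python
from dataclasses import dataclass
from enum import Enum

class PatternType(Enum):
    SUBSTRING = "substring"
    REGEX = "regex"
    GLOB = "glob"

@dataclass(frozen=True)
class GovernancePolicy:
    max_tokens: int
    max_tool_calls: int
    blocked_patterns: tuple = ()  # (pattern, PatternType) pairs

    def is_stricter_than(self, other: "GovernancePolicy") -> bool:
        # Stricter means: lower limits, and at least the same blocked patterns.
        return (
            self.max_tokens <= other.max_tokens
            and self.max_tool_calls <= other.max_tool_calls
            and set(other.blocked_patterns) <= set(self.blocked_patterns)
        )

dev = GovernancePolicy(max_tokens=8192, max_tool_calls=20)
prod = GovernancePolicy(
    max_tokens=4096,
    max_tool_calls=10,
    blocked_patterns=(("rm -rf", PatternType.SUBSTRING),),
)
print(prod.is_stricter_than(dev))  # True
```

Because the policy is a plain model, comparisons like is_stricter_than() fall out naturally, which is what makes diffing and YAML round-tripping cheap.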
Proposed integration
A PydanticAI middleware that wraps tool execution with governance checks:
```python
from pydantic_ai import Agent, RunContext
from pydantic_ai_governance import GovernancePolicy, PatternType, govern

policy = GovernancePolicy(
    max_tokens_per_request=4096,
    max_tool_calls_per_request=10,
    blocked_patterns=[
        ("rm -rf", PatternType.SUBSTRING),
        (r".*password.*=.*", PatternType.REGEX),
    ],
)

agent = Agent("openai:gpt-4o", deps_type=MyDeps)

@agent.tool
@govern(policy)  # Decorator-based governance
async def dangerous_tool(ctx: RunContext[MyDeps], query: str) -> str:
    ...
```
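Mechanically, the decorator only needs to screen tool arguments against the policy before delegating to the wrapped tool. A self-contained sketch of that wrapping (govern, GovernancePolicy, and GovernanceViolation here are illustrative stand-ins, not the final API):

```python
import asyncio
import functools
import re
from dataclasses import dataclass
from enum import Enum

class PatternType(Enum):
    SUBSTRING = "substring"
    REGEX = "regex"

@dataclass(frozen=True)
class GovernancePolicy:
    blocked_patterns: tuple  # (pattern, PatternType) pairs

class GovernanceViolation(Exception):
    """Raised when a tool call matches a blocked pattern."""

def govern(policy: GovernancePolicy):
    """Screen every call's arguments against the policy before delegating."""
    def decorator(tool):
        @functools.wraps(tool)
        async def wrapper(ctx, *args, **kwargs):
            text = " ".join(str(v) for v in (*args, *kwargs.values()))
            for pattern, kind in policy.blocked_patterns:
                hit = pattern in text if kind is PatternType.SUBSTRING \
                    else re.search(pattern, text)
                if hit:
                    raise GovernanceViolation(f"blocked {kind.value}: {pattern!r}")
            return await tool(ctx, *args, **kwargs)
        return wrapper
    return decorator

policy = GovernancePolicy(blocked_patterns=(("rm -rf", PatternType.SUBSTRING),))

@govern(policy)
async def shell(ctx, command: str) -> str:
    return f"ran: {command}"

print(asyncio.run(shell(None, "ls -la")))  # ran: ls -la
```

A call like `asyncio.run(shell(None, "rm -rf /"))` raises GovernanceViolation before the tool body runs, which is the behavior the middleware would surface through PydanticAI's tool-error handling.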
Why this is a natural fit
- Pydantic-native - Our GovernancePolicy is already a Pydantic model; zero impedance mismatch
- Type-safe - PatternType enum, GovernanceEventType enum, structured validation
- Composable - Works as a decorator, middleware, or dependency injection
- Logfire integration - Our OTEL conventions align with Pydantic's Logfire observability
- 700+ tests backing the engine
Ask
Is there interest in governance middleware for PydanticAI? Options:
- Standalone pydantic-ai-governance package
- PR to core adding optional governance hooks in tool execution
- Cookbook/example showing the pattern
Happy to discuss the best approach.