AI agents are about to handle your most sensitive data. There's no standard way to know what they do with it.
Contents: The Problem · The Solution · Four Presets · How It Works · See It in Action · For Developers · Regulatory Landscape · Trust Roadmap · Status · Join the Conversation
A company asks an AI recruiting agent to find a senior developer. That agent calls a background check agent, which calls a credit score agent, which calls an identity verification agent. The candidate's CV, employment history, social security number, and biometric data just crossed four services — in seconds, with no visibility into what each service does with that data.
```mermaid
flowchart LR
    A[Recruiting Agent] -->|CV, Name| B[Background Check]
    B -->|SSN, History| C[Credit Score]
    C -->|Gov. ID, Biometrics| D[Identity Check]
    D ~~~ S[ ]
    style S fill:none,stroke:none
```
At each step: does the agent store your data? Use it for training? Share it with third parties? Process it in which country? Today, there is no standard way to know.
This isn't hypothetical. MCP (Anthropic) has 97M+ monthly SDK downloads. A2A (Google) connects agents to agents. The infrastructure for autonomous agent chains is here — the privacy layer is not.
ADHP is an open specification — a machine-readable privacy language for AI agents.
Two sides, one vocabulary:
- Data handlers declare what they do with data (policies).
- Data senders declare what they require (requirements).
A deterministic matching algorithm checks compatibility before any data is exchanged.
```mermaid
flowchart LR
    Sender -->|Requirements| Match{match}
    Handler -->|Policies| Match
    Match --> Pass[✅ Compatible]
    Match --> Fail[❌ Incompatible]
```
Presets are named baselines — like Creative Commons licenses for data handling.
| Preset | Retention | Sharing | Key Guarantees |
|---|---|---|---|
| `open` | Legal maximum | Allowed | No restrictions beyond law. |
| `standard` | Explicit (required) | Allowed | No marketing, no profiling. `max_retention` mandatory. |
| `strict` | Session only | Prohibited | + No training, no research, no content logging. No delegation. |
| `zero_trace` | None | Prohibited | Nothing persists. No logs beyond legal floor. No delegation. |
Each preset level satisfies all lower requirements: a strict handler always matches a standard requirement.
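This ordering can be expressed as a total order over preset names. A minimal sketch (preset names come from the table above; the helper itself is illustrative, not part of the spec):

```python
# Preset names ordered weakest -> strongest, per the table above.
PRESET_ORDER = ["open", "standard", "strict", "zero_trace"]

def preset_satisfies(handler_preset: str, min_preset: str) -> bool:
    """A handler preset satisfies a requirement if it is at least as strict."""
    return PRESET_ORDER.index(handler_preset) >= PRESET_ORDER.index(min_preset)
```

So `preset_satisfies("strict", "standard")` holds, while `preset_satisfies("open", "standard")` does not.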
Extras add constraints on top of any preset: `no_training`, `no_log`, `no_third_party`, `tee_execution`, `right_to_erasure`, and more. Full list in the spec →
Data handler declares:

```json
{
  "adhp": "0.3",
  "policies": [{
    "frameworks": ["gdpr"],
    "preset": "standard",
    "extras": ["no_training"],
    "max_retention": "P6M",
    "jurisdiction": { "processing": ["DE"], "storage": ["DE"] }
  }]
}
```

Data sender requires:

```json
{
  "adhp": "0.3",
  "require": [{
    "frameworks": ["gdpr"],
    "min_preset": "standard",
    "extras": ["no_training"],
    "accepted_jurisdictions": ["EU"],
    "max_retention": "P1Y"
  }]
}
```

The matching algorithm runs six checks: frameworks, preset level, extras, jurisdiction, data categories, and retention. If all six pass, the pair is compatible; if any check fails, it is incompatible.
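As an illustration, a simplified version of those checks might look like the sketch below. This is not the reference implementation: jurisdiction matching is reduced to exact region codes (no expansion of `EU` into member states), and the data-category and retention checks are stubbed out.

```python
PRESET_ORDER = ["open", "standard", "strict", "zero_trace"]

def check_match(policy: dict, req: dict) -> list:
    """Return the names of failed checks; an empty list means compatible."""
    failures = []
    # 1. Frameworks: the handler must support every required framework.
    if not set(req["frameworks"]) <= set(policy["frameworks"]):
        failures.append("frameworks")
    # 2. Preset level: the handler's preset must be at least min_preset.
    if PRESET_ORDER.index(policy["preset"]) < PRESET_ORDER.index(req["min_preset"]):
        failures.append("preset")
    # 3. Extras: the handler must declare every required extra.
    if not set(req.get("extras", [])) <= set(policy.get("extras", [])):
        failures.append("extras")
    # 4. Jurisdiction (simplified to exact codes; real matching would
    #    expand regions such as "EU" into their member states).
    accepted = set(req.get("accepted_jurisdictions", []))
    processing = set(policy.get("jurisdiction", {}).get("processing", []))
    if accepted and not processing <= accepted:
        failures.append("jurisdiction")
    # 5/6. Data categories and retention (ISO 8601 duration comparison)
    #      are omitted from this sketch.
    return failures
```

With the simplification above, a `standard` handler processing in `DE` matches a requirement that accepts `DE` directly, and fails only the `preset` check when the sender raises `min_preset` to `strict`.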
ADHP plugs into the agent ecosystem's connection layer:
| Protocol | Integration |
|---|---|
| MCP (Anthropic) | Policy in the capabilities handshake. Client evaluates locally before sending data. |
| A2A (Google) | Policy in Agent Card extensions. Registries pre-filter by requirements. |
When agents delegate to other agents, requirements travel through the chain. Each downstream handler must pass match() — requirements can only tighten, never loosen.
```mermaid
flowchart LR
    U[Sender: min standard, EU only] --> A[Handler A — standard, DE ✅]
    A -->|passes requirements| B[Handler B — standard, FR ✅]
    B -.->|blocked| C[Handler C — open, US ❌]
    C ~~~ S[ ]
    style S fill:none,stroke:none
```
The `strict` and `zero_trace` presets prohibit delegation entirely — data stays with the handler.
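One way to picture the cascade is as a merge of requirement sets at each hop. The sketch below assumes simple merge rules (the stricter preset wins, extras accumulate, acceptable jurisdictions intersect); the spec's actual cascade semantics may differ:

```python
PRESET_ORDER = ["open", "standard", "strict", "zero_trace"]

def tighten(upstream: dict, local: dict) -> dict:
    """Merge two requirement sets so the result is at least as strict as both."""
    return {
        # Required frameworks accumulate.
        "frameworks": sorted(set(upstream["frameworks"]) | set(local["frameworks"])),
        # The stricter of the two minimum presets wins.
        "min_preset": max(upstream["min_preset"], local["min_preset"],
                          key=PRESET_ORDER.index),
        # Extras accumulate: constraints can only be added, never dropped.
        "extras": sorted(set(upstream.get("extras", [])) |
                         set(local.get("extras", []))),
        # Acceptable jurisdictions can only shrink.
        "accepted_jurisdictions": sorted(
            set(upstream["accepted_jurisdictions"]) &
            set(local["accepted_jurisdictions"])),
    }
```

Under these rules a downstream handler can never see a looser requirement set than the original sender issued.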
Configure sender requirements and handler policies, then watch ADHP matching happen in real time.
```shell
pip install jsonschema
```

Then validate any ADHP document against the schema:

```shell
jsonschema -i my-policy.json schemas/adhp-v0.3.schema.json
```

Schema: `schemas/adhp-v0.3.schema.json` (JSON Schema draft 2020-12)
The simplest valid policy:
```json
{ "adhp": "0.3", "policies": [{ "frameworks": ["gdpr"], "preset": "open" }] }
```

A responsible baseline (most common):
```json
{
  "adhp": "0.3",
  "policies": [{
    "frameworks": ["gdpr"],
    "preset": "standard",
    "extras": ["no_training"],
    "max_retention": "P1Y",
    "jurisdiction": { "processing": ["EU"], "storage": ["EU"] }
  }]
}
```

Matching with the Python SDK:

```python
from adhp import match

result = match(handler_policy, sender_requirements)
if result.compatible:
    # Route to the matched policy flow
    print(f"Matched: {result.matched_policies}")
else:
    # Inspect which checks failed
    for f in result.failures:
        print(f"  ✗ {f.check}: {f.message}")
```

ADHP is framework-aware — each policy declares which regulatory framework it supports. The matching algorithm ensures framework-specific requirements are met.
| Framework | What It Requires | How ADHP Helps |
|---|---|---|
| GDPR (EU) | Controller accountability for every sub-processor (Art. 28) | Machine-readable declarations across full delegation chains |
| UK GDPR | Same obligations, UK-specific context | Separate framework ID enables distinct preset semantics |
| EU AI Act | Transparency obligations for AI systems (Art. 50) | Standardized, inspectable data handling format |
| CCPA (US) | Consumer right to know about data sharing | Sharing practices declared and verifiable at match time |
| HIPAA (US) | Business Associate Agreements for health data | Health data handling declarations with sector-specific preset semantics |
ADHP does not replace legal compliance. It provides a common vocabulary and grammar for systems to communicate about regulations — not a substitute for DPAs, DPIAs, or legal agreements.
"But what if an agent lies about its policy?"
ADHP is a language, not an enforcement mechanism. Verification is performed by external parties — ADHP only provides fields to declare verification status. Each phase adds metadata that raises the cost of lying:
| Phase | What | ADHP's role |
|---|---|---|
| 0 | Protocol Definition — Define the language, schema, matching algorithm | The spec itself |
| 1 | Self-Declaration — Agents declare their practices | Policy fields (current) |
| 2 | Third-Party Audit — External auditors verify practices | Fields: audited_by, audit_date, audit_url |
| 3 | Automated Testing — Auditor agents test with canary data | Fields: last_tested, test_result, tester_id |
| 4 | Cryptographic Attestation — TEE, signed code, ZK proofs | Fields: attestation, signature, tee_report_url |
We are here: Phase 0. ADHP is designed as a foundation layer that enables verification and enforcement systems to be built on top. Each phase adds metadata fields to record that verification happened, who performed it, and how to check it.
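For example, a Phase 2 policy might carry the audit fields alongside its declarations. The field names come from the table above; their placement inside the policy object and all example values are illustrative:

```json
{
  "adhp": "0.3",
  "policies": [{
    "frameworks": ["gdpr"],
    "preset": "strict",
    "audited_by": "example-privacy-auditor.eu",
    "audit_date": "2025-06-01",
    "audit_url": "https://example-privacy-auditor.eu/reports/adhp-1234"
  }]
}
```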
Version: 0.3.0 (Draft) · License: Apache 2.0
| Status | Milestone |
|---|---|
| ✅ | Spec v0.2 — 5 levels, delegation cascading |
| ✅ | Interactive playground & live MCP demo |
| ✅ | Spec v0.3 — Framework-based presets, bidirectional matching, extras, JSON Schema |
| 🔜 | Python SDK update for v0.3 |
| 🔜 | v0.4 — Autonomous vs DPA delegation, sub-processor declarations, case retention |
| 🎯 | Propose as MCP extension |
We're building this in the open. Feedback welcome from developers, DPOs, privacy engineers, legal practitioners, and anyone who cares about data privacy in an AI-powered world.
- Architecture — where ADHP sits in the stack
- Enforcement patterns — from self-declaration to cryptographic proof
- Complex jurisdiction modeling
- EU AI Act & the autonomous agent compliance problem
Open a Discussion for ideas, an Issue for bugs, or submit a PR.