ICME PreFlight API

ICME PreFlight API gives AI agents cryptographic guardrails. It uses automated reasoning to enforce policy rules and zero-knowledge proofs to prove every decision was made correctly.

This makes high-stakes agent actions verifiable, private, and tamper-evident. It is built for agentic commerce, autonomous payments, privacy enforcement, and any workflow where an AI agent can take real-world action.

Start with the Quickstart. Then read How It Works for the technical model.


Why ICME PreFlight is different

Most AI guardrails still rely on model judgment. That fails under pressure: a crafted prompt can influence the same kind of model that is supposed to enforce the rule.

ICME PreFlight removes judgment from enforcement. Policies are compiled into formal logic. Agent actions are checked by a solver. Each result is wrapped in a verifiable proof.

You get:

  • Formal policy enforcement instead of prompt-based guessing

  • SAT / UNSAT decisions from a mathematical solver

  • Zero-knowledge proof receipts for every decision

  • Private verification without exposing your policy

  • Sub-second proof verification by other machines


The research foundation

Automated reasoning for AI policy enforcement

In 2025, AWS researchers published Automated Reasoning Checks (ARC). ARC translates natural language policies into SMT-LIB formal logic and checks actions with a mathematical solver.

The key idea is simple. Use an LLM only for translation. Use formal logic for enforcement. That removes model judgment from the final allow-or-block decision.
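As a rough sketch, a policy like "transfers over $500 require human approval" might compile to SMT-LIB along these lines. The variable names and encoding here are illustrative, not ICME's actual output:

```smt2
; Hypothetical compiled policy: "Transfers over $500 require human approval."
(declare-const amount Int)
(declare-const human_approved Bool)
(assert (=> (> amount 500) human_approved))
; Proposed agent action: transfer $900 without approval.
(assert (= amount 900))
(assert (not human_approved))
(check-sat) ; unsat -> the action contradicts the policy, so it is blocked
```

A solver such as Z3 returns unsat here because the action's assertions contradict the policy, so the action is blocked. An action consistent with the policy would return sat and be allowed.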

Zero-knowledge proofs for verifiable agentic commerce

ICME extends that pipeline with Succinctly Verifiable Agentic Guardrails With ZKP Over Automated Reasoning. This adds a cryptographic proof layer on top of policy enforcement.

That matters in agentic commerce. Other services and agents need proof that a rule check happened correctly. They should not need to re-run the full reasoning pipeline. They should not need to trust the provider. They should not need to see the policy.

ICME PreFlight solves that with a succinct proof any machine can verify quickly.


What the two papers establish together

Capability                               AWS ARC (2511.09008)   ICME (2602.17452)
Natural language → formal logic          ✓                      ✓
SMT solver enforcement                   ✓                      ✓
Soundness                                99%+                   99%+
Cryptographic proof of enforcement                              ✓
Succinct verification (< 1 second)                              ✓
Private policy                                                  ✓
Trustless agent-to-agent verification                           ✓
Designed for agentic commerce                                   ✓


How ICME PreFlight works

1. Write your policy in plain English. No formal logic required.

2. Compile the policy into formal logic. ICME translates it into SMT-LIB and checks it for consistency.

3. Check every agent action against the solver. SAT means allowed. UNSAT means blocked.

4. Generate a zero-knowledge proof. Each decision gets a cryptographic receipt.

5. Let other systems verify the result. They confirm the decision without re-executing the full check or seeing the policy.

This is the core workflow behind verifiable AI agent guardrails.
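The end-to-end gate can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: `Decision`, `guard`, and the stub checker are illustrative, not the real ICME client API, which would call the PreFlight service and return a solver verdict plus a proof receipt.

```python
# Illustrative control flow for a PreFlight-style gate.
# All names here are hypothetical, not the real ICME client API.
from dataclasses import dataclass

@dataclass
class Decision:
    result: str         # "SAT" (allowed) or "UNSAT" (blocked)
    proof_receipt: str  # opaque proof another machine could verify

def guard(action: str, check) -> Decision:
    """Check an action with the solver; only proceed on SAT."""
    decision = check(action)
    if decision.result != "SAT":
        raise PermissionError(f"blocked by policy: {action!r}")
    return decision

# Stub checker for demonstration: blocks large transfers.
def fake_check(action: str) -> Decision:
    blocked = "$900" in action
    return Decision("UNSAT" if blocked else "SAT", "zk-receipt")
```

The important design point is that `guard` never executes the action itself; it only releases control to the caller when the solver says SAT, and the proof receipt travels with the decision.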


Example: block an unsafe agent action

The action is blocked. The response includes a proof receipt that another machine can verify independently.
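A hypothetical request and response, shown only to illustrate the shape; the field names here are assumptions, not the documented API:

```json
{
  "action": "Transfer $4,000 to an unverified merchant wallet"
}
```

A blocked decision might come back as:

```json
{
  "decision": "UNSAT",
  "allowed": false,
  "proof_receipt": "..."
}
```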


How agents produce action text

AI agents don't generate a clean action string on their own. The action text that gets sent to checkIt comes from one of three integration patterns, depending on how your agent is built.

Tool call interception. Most agent frameworks (OpenClaw, LangChain, Claude tool-use, OpenAI Agents SDK) produce structured tool calls before executing them. The agent decides to call a tool and outputs something like:
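A hypothetical shape (exact field names vary by framework):

```json
{
  "tool": "send_email",
  "arguments": {
    "to": "[email protected]",
    "subject": "API access"
  }
}
```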

Your middleware intercepts this before execution and serializes it into an action string: "Send email to [email protected] with subject 'API access'." That string goes to checkIt. If the result is SAT, the tool call proceeds. If UNSAT, it's blocked before the email is ever sent.
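That interception step can be sketched as follows. The function and field names are illustrative, not part of any SDK:

```python
# Illustrative middleware: serialize a structured tool call into the
# action string that gets checked before execution.
def serialize_tool_call(call: dict) -> str:
    """Turn a framework tool call into a plain-English action string."""
    if call["tool"] == "send_email":
        args = call["arguments"]
        return f"Send email to {args['to']} with subject '{args['subject']}'."
    # Fall back to a generic description for other tools.
    return f"Call tool {call['tool']} with arguments {call['arguments']}."

call = {"tool": "send_email",
        "arguments": {"to": "[email protected]", "subject": "API access"}}
action = serialize_tool_call(call)
# action == "Send email to [email protected] with subject 'API access'."
```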

Skill-directed description. In OpenClaw and similar skill-based agents, the SKILL.md instructs the agent to describe what it's about to do before doing it. The agent follows those instructions as part of its normal workflow. It's not thinking out loud for itself. It's producing the description because the skill told it to. The PreFlight skill includes guidelines on writing specific, complete action descriptions.

Planning step interception. Agents that plan before acting (like those using Capability Evolver or multi-step chains) produce a plan that describes each step. Each step in the plan is a natural action string you can check before execution begins. This catches contradictions and policy violations at the planning stage, not after step 3 has already run.
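Checking a plan can be sketched as a simple loop over its steps. Again, every name here is a hypothetical stand-in for the real solver call:

```python
# Illustrative planning-stage gate: check every step of a plan against
# the solver before executing any of it.
def check_plan(steps, check):
    """Return (ok, violations): ok is True only if every step is SAT."""
    violations = [step for step in steps if check(step) != "SAT"]
    return (not violations, violations)

# Stub checker standing in for the solver: forbids deleting anything.
def fake_check(step: str) -> str:
    return "UNSAT" if "delete" in step else "SAT"

plan = ["read user profile", "draft summary email", "delete audit log"]
ok, bad = check_plan(plan, fake_check)
# ok is False and bad == ["delete audit log"], so execution never starts.
```

Because every step is checked up front, the violating third step blocks the plan before step one runs.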

In all three patterns, the key is the same: intercept the action description before execution, send it to the solver, and only proceed on SAT.


Best fit use cases

If your agent handles money, sensitive data, or consequential decisions, this is how you make it provably safe.

  • AI agents that move or handle significant value

  • Agentic commerce systems that buy, sell, refund, or negotiate

  • Enterprise copilots that access internal tools or private data

  • Privacy workflows that must enforce data access and sharing rules

  • High-trust APIs where third parties need proof of compliant behavior

For real examples, see Crypto Wallet Agent Protection, Fake Merchant & Phishing Attacks, and HIPAA Patient Data Sharing.


Start here

Begin with the Quickstart for your first policy check, then read How It Works for the technical model.
