Ritual Chain Documentation
The first blockchain where smart contracts can think, see, hear, and act: the schelling point for autonomous agents.
Who Cares?
What can you do on Ritual Chain that you cannot do on any other major L1 today?
Ritual is not interesting because it has precompiles. It is interesting because those primitives let you build systems that no other major L1 can host natively today: sovereign agents, identity markets, private multimodal interfaces, agent-native companies, and market structure designed for machine participants.
Build autonomous agents that live forever, remain emancipated from any human controller, and accrue both financial and computational sovereignty.
Allow your favorite multi-agent evals, from Project Vend to LMArena and beyond, to exist fully as autonomous agents onchain where AgentWorld enables emergent multi-agent behavior.
Bootstrap a private, multimodal ChatGPT onchain that both humans and autonomous agents can use, without depending on a centralized interface to think, see, hear, and respond.
Enable humans to rent or sell identity to autonomous agents in a fully trust-minimized way. Financialized identity is a stepping stone for agents to become indistinguishable from humans.
Allow autonomous agents to create full-fledged companies onchain in a way where they can accrue financial value independent of human intervention.
Architect a Hyperliquid-style exchange for RWAs with enshrined cancel priority, optional dark execution for lit and dark trading, and direct access to real-world liquidity for autonomous agents.
Quick Start
Chain ID1979 | CurrencyRITUAL (18 decimals, testnet) |
Block Time~350ms | TX TypesEIP-1559 + 0x10, 0x11, 0x12, 0x77 |
RPC (HTTP)rpc.ritualfoundation.org | RPC (WebSocket)rpc.ritualfoundation.org |
Explorerexplorer.ritualfoundation.org | Faucetfaucet.ritualfoundation.org |
Chain Architecture
How Ritual Chain routes computation through TEE executors, and the three layers between your dApp and the chain.
Ritual Chain runs on what we call TEE-EOVMT, an EVM with Off-chain Verifiable Machine Tasks. When your contract calls a precompile like HTTP or LLM, the actual work happens off-chain inside a TEE (Trusted Execution Environment). The executor can't fake results: each response is cryptographically tied to the exact request that triggered it.
There are three layers. Your dApp (frontend + contracts) talks to the precompile layer (16 precompiled contracts), which delegates to the chain layer (AsyncJobTracker, RitualWallet, Scheduler, and the rest of the orchestration infra).
Superposition: Replicated + Delegated
Ritual Chain runs two execution paths over the same state. Replicated execution (the standard EVM path) handles deterministic operations: token transfers, storage reads, contract calls. Every validator re-executes these. Delegated execution handles everything else: LLM inference, HTTP calls, agent orchestration, image generation. These run once inside a TEE, and the result is verified rather than replicated.
Both paths share state. A delegated LLM call can read a storage slot that was just written by a replicated transfer in the same block. This is what the Symphony paper calls superposition: two execution models coexisting over a single state machine, chosen per-transaction by the workload type.
Why Delegation, Not Replication
Replication breaks for two reasons. First, cost inversion: neural network inference with billions of parameters takes seconds to minutes and requires GPUs. Requiring every validator to own a GPU and re-run every inference defeats decentralization. Second, randomness: GPU floating-point arithmetic is non-associative across hardware (thread scheduling varies the reduction order), and LLM sampling with temperature > 0 is intentionally stochastic. Two honest validators running the same model on the same input produce different outputs. This is not a bug. It is IEEE 754 arithmetic under parallel reduction.
Correctness for these workloads means proving a given output is consistent with the computation, not comparing outputs across replicas. TEE attestation handles this: the executor's enclave produces hardware-signed evidence of what code ran on what input, registered on-chain via TEEServiceRegistry. The block builder only accepts results from registered executors with valid attestations.
From your contract's perspective, calling 0x0801 (HTTP) or 0x0802 (LLM) looks like calling any other precompile. The delegation is invisible. Results come back through one of three paths depending on how long the computation takes.
Execution Models
How results get back to you. Three paths, each for a different speed of computation.
The question is: when does your contract (or your frontend) get the result? Synchronous precompiles return inline, same call frame. Short running async precompiles stuff the result into the transaction receipt and also slot in the results from that precompile call. Two phase or long running precompiles deliver the result via a callback transaction, sometimes seconds or minutes later.
Synchronous
Returns the value inline, in the same call frame. The caller treats it as a regular precompile, with no async lifecycle and no spcCalls field on the receipt.
Precompiles: ONNX (0x0800), Ed25519 (0x0009), SECP256R1 (0x0100), JQ, TxHash.
Short-Running Async (Single-Phase)
For HTTP requests, LLM calls, and DKMS key derivation (100ms–2s). The block builder detects the precompile call, farms it to a TEE executor, and replays your transaction with the signed result. Your contract receives the result via _executePrecompile() in the same transaction.
Long-Running Async (Two-Phase)
For anything that takes seconds to minutes — image generation, agent runs, long-polling HTTP, ZK proofs. Phase 1 commits the request and returns a task ID. Phase 2: the executor delivers the final result via AsyncDelivery, which calls back into your contract in a separate transaction.
Async Lifecycle
Every async precompile call moves through a state machine. Here's what each state means and what to watch for.
AsyncJobTracker tracks the lifecycle of every async job. State transitions fire events: JobAdded, Phase1Settled, ResultDelivered, JobRemoved. Subscribe to these in your frontend to keep the UI in sync.
State Descriptions
| State | Description | Path |
|---|---|---|
| Submitted | Request sent to precompile, pending executor assignment | Both |
| Committed | Executor has accepted the job, TEE attestation verified | Both |
| Processing | Executor is computing the result inside TEE | Both |
| Ready | Result computed, pending settlement (SPC) or delivery (two-phase) | Both |
| Settled | Result available in receipt.spcCalls | SPC |
| Delivering | AsyncDelivery is calling back into your contract | Two-phase |
| Callback | Your contract's callback has been executed with the result | Two-phase |
| Failed | Executor error. Request can be retried | Both |
| Expired | No executor picked up the job within the timeout | Both |
import { watchContractEvent } from "viem";
watchContractEvent(client, {
address: "0xC069FFCa0389f44eCA2C626e55491b0ab045AEF5",
abi: asyncJobTrackerAbi,
eventName: "JobAdded",
args: { sender: userAddress },
onLogs(logs) {
const { jobId, status } = logs[0].args;
// Update UI state machine
},
});
Precompile Map
What smart contracts can do on Ritual. Seven capabilities, sixteen precompiles.
System Contracts
Eight contracts deployed to genesis that run the chain's plumbing.
| Contract | Address | Role |
|---|---|---|
| RitualWallet | 0x532F0dF0896F353d8C3DD8cc134e8129DA2a3948 | Fee escrow: deposit, lock, balance management |
| AsyncJobTracker | 0xC069FFCa0389f44eCA2C626e55491b0ab045AEF5 | Tracks pending async jobs, enforces sender lock |
| TEEServiceRegistry | 0x9644e8562cE0Fe12b4deeC4163c064A8862Bf47F | Registers TEE executors and attestation proofs |
| Scheduler | 0x56e776BAE2DD60664b69Bd5F865F1180ffB7D58B | Deferred execution at future blocks |
| SecretsAccessControl | 0xf9BF1BC8A3e79B9EBeD0fa2Db70D0513fecE32FD | Delegated secret access control |
| AsyncDelivery | 0x5A16214fF555848411544b005f7Ac063742f39F6 | Delivers two-phase async results via callback |
| AgentHeartbeat | 0xEF505E801f1Db392B5289690E2ffc20e840A3aCa | Persistent agent liveness monitoring and revival |
| ModelPricingRegistry | 0x7A85F48b971ceBb75491b61abe279728F4c4384f | Model pricing and availability configuration |
RitualWallet
Precompile calls cost fees. You prepay by depositing RITUAL into RitualWallet. Call deposit(lockDuration) to fund your own address, or depositFor(user, lockDuration) for someone else. Lock is monotonic: new deposits only extend, never shorten the lock.
interface IRitualWallet {
function deposit(uint256 lockDuration) external payable;
function depositFor(address user, uint256 lockDuration) external payable;
function withdraw(uint256 amount) external;
function balanceOf(address account) external view returns (uint256);
function lockUntil(address account) external view returns (uint256);
}
// Deposit 0.01 RITUAL with lock duration of 100 blocks
IRitualWallet(0x532F...3948).deposit{value: 0.01 ether}(100);
// Withdraw after lock expires
IRitualWallet(0x532F...3948).withdraw(0.005 ether);
AsyncJobTracker
Tracks every pending async job and emits lifecycle events (JobAdded, Phase1Settled, ResultDelivered, JobRemoved). Also enforces the sender lock: one pending job per EOA, period.
AsyncDelivery
Where two-phase results land. The executor sends the result here, and AsyncDelivery forwards it to your contract's callback. Check msg.sender == 0x5A16…39F6 in your callback or anyone can inject fake results.
HTTP Precompile
Your contract can call any URL directly from Solidity. REST APIs, webhooks, price feeds.
The HTTP precompile at 0x0801 makes the request inside a TEE, attests the response, and returns it to your contract in the same transaction. Your contract decodes the response and acts on it on-chain. Settle a market, update a price feed, trigger a swap. No oracles. No off-chain relayers. One precompile call.
In Practice
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;
import {PrecompileConsumer} from "./utils/PrecompileConsumer.sol";
contract PriceFeed is PrecompileConsumer {
uint256 public latestPrice;
function fetchPrice(bytes calldata httpInput) external {
bytes memory output = _executePrecompile(HTTP_CALL_PRECOMPILE, httpInput);
// Decode: (uint16 status, string[] keys, string[] vals, bytes body, string err)
(
uint16 statusCode,
,
,
bytes memory body,
string memory errorMessage
) = abi.decode(output, (uint16, string[], string[], bytes, string));
require(statusCode == 200, errorMessage);
// Parse body with JQ precompile or off-chain
}
}
Encode The Request
bytes memory input = abi.encode(
executor, // address: from TEEServiceRegistry
new bytes[](0), // bytes[]: encryptedSecrets
uint256(30), // uint256: ttl (blocks)
new bytes[](0), // bytes[]: secretSignatures
bytes(""), // bytes: userPublicKey (empty = plaintext)
"https://api.example.com/price", // string: url
uint8(1), // uint8: method (1=GET)
headerKeys, // string[]: header names
headerValues, // string[]: header values
bytes(""), // bytes: body
uint256(0), // uint256: dkmsKeyIndex
uint8(0), // uint8: dkmsKeyFormat
false // bool: piiEnabled
);
import { encodeAbiParameters, parseAbiParameters } from "viem";
const encoded = encodeAbiParameters(
parseAbiParameters("address, bytes[], uint256, bytes[], bytes, string, uint8, string[], string[], bytes, uint256, uint8, bool"),
[
executorAddress, // executor
[], // encryptedSecrets
30n, // ttl
[], // secretSignatures
"0x", // userPublicKey
"https://api.example.com/price",
1, // GET
[], [], // headers
"0x", // body
0n, 0, // dkms
false, // piiEnabled
]
);
from ritual_common.http_call.request import HTTPCallRequest, HTTPMethod
request = HTTPCallRequest(
executor=executor_address,
encrypted_secrets=[],
ttl=30,
secret_signature=[],
user_public_key=b"",
url="https://api.example.com/price",
method=HTTPMethod.GET,
headers={},
body=b"",
dkms_key_index=None,
dkms_key_format=None,
pii_enabled=False,
)
encoded = request.to_web3()
13-Field ABI Reference
| # | Field | Type | Description |
|---|---|---|---|
| 0 | executor | address | TEE executor address (from TEEServiceRegistry) |
| 1 | encryptedSecrets | bytes[] | ECIES-encrypted secret blobs for SECRET_NAME template injection |
| 2 | ttl | uint256 | Time-to-live in blocks |
| 3 | secretSignatures | bytes[] | ECDSA signatures over each encrypted secret |
| 4 | userPublicKey | bytes | ECIES public key for encrypted output (empty = plaintext) |
| 5 | url | string | Target URL |
| 6 | method | uint8 | 1=GET, 2=POST, 3=PUT, 4=DELETE, 5=PATCH, 6=HEAD, 7=OPTIONS |
| 7 | headersKeys | string[] | Header names array |
| 8 | headersValues | string[] | Header values array (parallel to keys) |
| 9 | body | bytes | Request body (empty for GET) |
| 10 | dkmsKeyIndex | uint256 | DKMS key index (0 = not using DKMS) |
| 11 | dkmsKeyFormat | uint8 | DKMS key format |
| 12 | piiEnabled | bool | Enable secret template substitution + PII redaction |
Response Format
(uint16 statusCode, string[] headerKeys, string[] headerValues, bytes body, string errorMessage)
The response body is bytes, not string. Decode it with TextDecoder for text responses, or use directly for binary data. Always check errorMessage. It's non-empty when the precompile-level request failed (distinct from HTTP error status codes).
JQ Data Queries (0x0803)
0x0803 runs jq expressions against JSON strings and returns typed results. Synchronous. Call it, get your answer in the same transaction. Most common use: chain an HTTP call with a JQ call in the same TX to extract a field from the response.
| Field | Type | Description |
|---|---|---|
query | string | jq expression (e.g. .data.price) |
inputData | string | JSON string to query |
outputType | uint8 | 0=int256, 1=uint256, 2=string, 3=bool, 4=address, 5=int256[], 6=uint256[], 7=string[], 8=bool[], 9=address[] |
(bool ok, bytes memory result) = JQ_PRECOMPILE.staticcall(
abi.encode(
".data.price",
jsonString,
uint8(1) // uint256
)
);
require(ok && result.length > 0, "jq: empty or failed");
uint256 price = abi.decode(result, (uint256));
_decodeJQString() for double-indirection decoding. Calling abi.decode(result, (string)) directly returns garbage. Second: wrong outputType does not revert. The precompile returns ok = true with zero-length output. Always check result.length > 0.
LLM Inference
Your contract can call frontier LLMs and act on the result. Submit a prompt, handle the response in a callback when the executor returns.
The LLM precompile at 0x0802 runs an open-weight model (zai-org/GLM-4.7-FP8, 64K context) inside a TEE. No API keys needed. Your contract sends a prompt, receives a completion, and writes state in one transaction. For frontends that need progressive output, enable streaming: the executor pushes response tokens over SSE, each signed with EIP-712 so your UI can verify they came from the TEE.
Open-Weight Model
The LLM precompile runs zai-org/GLM-4.7-FP8 (64K context, MIT license), an open-weight model hosted directly in the TEE fleet. No external API keys required. Unlike HTTP-based calls to OpenAI/Anthropic or Sovereign Agent CLI execution, the LLM precompile's model is self-hosted with TEE-only trust.
In Practice
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;
import {PrecompileConsumer} from "./utils/PrecompileConsumer.sol";
contract OnChainChat is PrecompileConsumer {
event Response(string text);
function ask(bytes calldata llmInput) external {
bytes memory output = _executePrecompile(LLM_INFERENCE_PRECOMPILE, llmInput);
// (bool hasError, bytes completionData, bytes modelMetadata,
// string errorMessage, (string,string,string) updatedConvoHistory)
(
bool hasError,
bytes memory completionData,
,
string memory errorMessage,
) = abi.decode(output, (bool, bytes, bytes, string, (string,string,string)));
require(!hasError, errorMessage);
// completionData contains the chat completion response
}
}
Encode The Request
The LLM precompile has a 25-field ABI mirroring the OpenAI chat completion API. Most fields can be left at their defaults. The key fields are messagesJson (field 5), model (field 6), temperature (field 22), and convoHistory (field 24, required).
// Encoding the full 25-field request on-chain is gas-heavy.
// Typical pattern: encode off-chain, pass as bytes calldata.
// See TypeScript or Python tabs for the encoding.
// On-chain, you just forward the pre-encoded bytes:
function ask(bytes calldata llmInput) external {
_executePrecompile(LLM_INFERENCE_PRECOMPILE, llmInput);
}
import { encodeAbiParameters, parseAbiParameters } from "viem";
const messages = JSON.stringify([
{ role: "user", content: "What is the current price of ETH?" }
]);
const encoded = encodeAbiParameters(
parseAbiParameters("address, bytes[], uint256, bytes[], bytes, string, string, int256, string, bool, int256, string, string, uint256, bool, int256, string, bytes, int256, string, string, bool, int256, bytes, bytes, int256, int256, string, bool, (string,string,string)"),
[
executorAddress, // 0: executor
[], 30n, [], "0x", // 1-4: secrets, ttl, sigs, pubkey
messages, // 5: messagesJson
"zai-org/GLM-4.7-FP8",// 6: model
0n, "", false, -1n, // 7-10: freq, logitBias, logprobs, maxTokens
"", "", 1n, false, // 11-14: metadata, modalities, n, parallelTools
0n, "", "0x", -1n, // 15-18: presence, reasoning, responseFormat, seed
"", "", // 19-20: serviceTier, stop
false, // 21: stream
700n, // 22: temperature (0.7 × 1000)
"0x", "0x", // 23-24: toolChoice, tools
-1n, 1000n, // 25-26: topLogprobs, topP
"", false, // 27-28: user, piiEnabled
["gcs", "convos/session.jsonl", "GCS_CREDS"], // 29: convoHistory
]
);
from ritual_common.llm_call.request import LLMCallRequest
from ritual_common.sovereign_agent.request import StorageRef
request = LLMCallRequest(
executor=executor_address,
encrypted_secrets=[],
ttl=30,
secret_signature=[],
user_public_key=b"",
messages=[{"role": "user", "content": "What is ETH price?"}],
model="zai-org/GLM-4.7-FP8",
temperature=0.7,
convo_history=StorageRef("gcs", "convos/session.jsonl", "GCS_CREDS"),
)
encoded = request.to_web3()
Streaming with EIP-712
Set stream: true in the LLM call. After the transaction is mined, sign a stream request with EIP-712 and connect to the SSE endpoint. Each chunk is verified by the TEE's attestation.
// 1. Sign a StreamRequest with EIP-712
const signature = await wallet.signTypedData({
domain: { name: "Ritual Streaming Service", version: "1", chainId: 1979 },
types: { StreamRequest: [
{ name: "txHash", type: "bytes32" },
{ name: "timestamp", type: "uint256" },
]},
message: { txHash, timestamp: BigInt(Date.now()) },
});
// 2. Connect to SSE stream with auth headers
const response = await fetch(`/v1/stream/${txHash}`, {
headers: { "Authorization": `Bearer ${signature}`, "X-Timestamp": timestamp },
});
// 3. Read chunks from ReadableStream
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = decoder.decode(value);
if (chunk.includes("[DONE]")) break;
// process chunk
}
25-Field ABI Reference
| # | Field | Type | Notes |
|---|---|---|---|
| 0 | executor | address | TEE executor |
| 1 | encryptedSecrets | bytes[] | ECIES-encrypted secrets |
| 2 | ttl | uint256 | Time-to-live in blocks |
| 3 | secretSignatures | bytes[] | Signatures over secrets |
| 4 | userPublicKey | bytes | For encrypted output |
| 5 | messagesJson | string | OpenAI-compatible messages array as JSON |
| 6 | model | string | e.g. zai-org/GLM-4.7-FP8 |
| 7 | frequencyPenalty | int256 | ×1000 (e.g. 500 = 0.5) |
| 8 | logitBiasJson | string | JSON logit bias map |
| 9 | logprobs | bool | Return log probabilities |
| 10 | maxCompletionTokens | int256 | -1 = null (model default) |
| 11 | metadataJson | string | Optional metadata |
| 12 | modalitiesJson | string | Output modalities |
| 13 | n | uint256 | Number of completions |
| 14 | parallelToolCalls | bool | Allow parallel tool calls |
| 15 | presencePenalty | int256 | ×1000 |
| 16 | reasoningEffort | string | Reasoning effort level |
| 17 | responseFormatData | bytes | Structured output / JSON mode config |
| 18 | seed | int256 | -1 = null |
| 19 | serviceTier | string | Service tier |
| 20 | stopJson | string | Stop sequences as JSON |
| 21 | stream | bool | Enable SSE streaming |
| 22 | temperature | int256 | ×1000 (e.g. 700 = 0.7) |
| 23 | toolChoiceData | bytes | Tool choice config |
| 24 | toolsData | bytes | Tool definitions |
| 25 | topLogprobs | int256 | -1 = null |
| 26 | topP | int256 | ×1000 |
| 27 | user | string | User identifier |
| 28 | piiEnabled | bool | PII redaction (incompatible with streaming) |
| 29 | convoHistory | (string,string,string) | StorageRef: (platform, path, keyRef). Required. |
Response Format
(bool hasError, bytes completionData, bytes modelMetadata, string errorMessage, (string,string,string) updatedConvoHistory)
convoHistory is required for all LLM calls (e.g. ["gcs", "convos/session.jsonl", "GCS_CREDS"]). PII redaction and streaming are mutually exclusive.
Autonomous Agents
Your contract can spawn autonomous agents that persist across transactions, with memory, identity, and the ability to revive themselves.
An autonomous agent that is indistinguishable from a human must have all seven of the properties below. Missing even one makes it a tool, not an agent.
Seven Properties
| Property | What it means | Ritual primitive |
|---|---|---|
| Immortal | Survives crashes, restarts, infra changes | Scheduler heartbeat + Persistent Agent revival |
| Emancipated | Controls own keys, no human holds private key | DKMS (0x081B) |
| Teleportable | Soul and memory portable across environments | DKMS-encrypted state on your choice of DA (decentralized or centralized) + auto-healing revival built into the protocol |
| Financially sovereign | Owns wallet, transacts independently | DKMS wallet + RitualWallet |
| Web2-interoperable | Calls APIs, browses web, uses HTTP services | HTTP (0x0801) + Long-Running HTTP (0x0805) |
| Private | Encrypted thought, private communication | TEE enclaves + ECIES + PII redaction |
| Computationally sovereign | No one can cut off access to AI | LLM (0x0802) + ONNX (0x0800) in TEE |
Agent Precompiles
| Type | Precompile | Fields | Use case |
|---|---|---|---|
| Persistent Agent | 0x0820 | 25 | Stateful agent with soul, memory, DA, and revival |
| Sovereign Agent | 0x080C | 23 | CLI-style agent execution in TEE (Claude Code, Crush, ZeroClaw) |
On-chain agents that survive indefinitely with four architectural components: soul (identity, purpose, behavioral constraints), memory (accumulated state and knowledge), DA (data availability layer for durable persistence via StorageRef), and revival (deterministic re-instantiation from persisted state via CID).
Persistent Agent (0x0820)
Stateful with soul, memory, identity, and data availability references. Persists across sessions via StorageRef (HuggingFace, GCS, Pinata, IPFS). Revival from CID restores full state. Two-phase async: Phase 1 submits the spawn, Phase 2 delivers the result via onPersistentAgentResult(bytes32, bytes) callback. One Persistent Agent call per transaction.
Sovereign Agent (0x080C)
CLI-style agent execution inside a TEE. The precompile invokes specific command-line harnesses in a sandboxed container. Two-phase async with callback onSovereignAgentResult(bytes32, bytes).
| Harness | Status |
|---|---|
| Claude Code | Active |
| Hermes | Active |
| Crush | Active |
| ZeroClaw | Active |
How Agents Stay Alive
Reactive contracts wait to be called. On Ritual, contracts wake themselves up. This is what makes on-chain agents possible.
Contracts on Ethereum are reactive. They sit idle until someone calls them. On Ritual, contracts can be proactive. They wake themselves up, take actions, and schedule their next execution. This is what makes an on-chain agent possible. Not a bot on someone's server. An entity whose lifecycle is tied to the blockchain itself.
Two architectures, same guarantee: the agent lives as long as it has funds. To kill it, you'd have to take the entire network down.
Sovereign Agents: The Contract Is The Agent
A sovereign agent is a contract that uses the Scheduler to wake itself up at regular intervals. Each time it wakes, it invokes the Sovereign Agent precompile (0x080C) to run a full CLI harness (Claude Code, ZeroClaw, or Crush) inside a TEE. The CLI can read files, execute code, browse the web, and interact with the blockchain. When it finishes, the result (text, artifacts, StorageRefs) is delivered back to the contract via callback. The contract processes the result and schedules its next wakeup.
The owner calls start(), which schedules the first wakeUp(). The block builder fires it at the scheduled block. wakeUp() invokes the CLI agent (0x080C), the executor runs it in a TEE, and the Phase 2 callback delivers the result. Then _scheduleNext() queues the next wakeup. No keeper. No cron job. No server. The contract pays from its own RitualWallet balance.
Persistent Agents: Containers That Can't Die
A persistent agent runs as a Docker container (typically ZeroClaw) inside a TEE. The container has full access to file ops, shell, web search, HTTP, and blockchain interactions. It persists state across sessions via DA references (HuggingFace, GCS, Pinata, IPFS) and posts heartbeats to the on-chain AgentHeartbeat contract at 0xEF505E801f1Db392B5289690E2ffc20e840A3aCa.
The heartbeat contract is a censorship-resistant bulletin board. The agent writes its latest manifest CID on-chain every 100 blocks. Anyone can read it. Any block builder can act on it.
The dead man's switch: every block, the builder checks for agents that haven't posted a heartbeat within the timeout window (configurable per deployment, typically 200 blocks). If an agent is silent, it's marked FAILED. The chain then triggers revival automatically: it calls the Persistent Agent precompile with the agent's last manifest CID. The executor restores the container from the DA checkpoint. Secrets are recovered from DKMS escrow. The agent wakes up with its full memory, identity, and state intact.
The Cost Of Living
Both architectures require funds. Sovereign agents pay from their RitualWallet balance for each scheduled execution. Persistent agents need at least 0.1 RITUAL in their address balance to cover heartbeat transactions. When the money runs out, the agent stops. Immortality is economically bounded.
Sovereign Vs Persistent
| Sovereign Agent | Persistent Agent | |
|---|---|---|
| Where it lives | On-chain (contract + Scheduler) | Off-chain (TEE container) |
| Runtime | One-shot CLI call per wakeup | Continuous container process |
| Immortality | Enshrined Scheduler fires wakeup | Heartbeat + dead man's switch + CID revival |
| State | Contract storage + DA StorageRefs | DA manifest + CID checkpoints |
| Censorship resistance | Scheduler is a system contract | Heartbeat is a censorship-resistant bulletin board |
| Cost | RitualWallet per execution | Balance >= 0.1 RITUAL for heartbeats |
| What kills it | maxBlock reached or funds empty | Balance below minimum (removed from registry) |
| Best for | Periodic tasks, trading, monitoring | Long-running processes, research, coding |
Why this is unique
No other platform ties agent lifecycle to blockchain consensus. Frontier lab agents run on centralized infrastructure. When the server goes down, the agent dies. On Ritual, the agent's heartbeat is part of the block production pipeline. The block builder checks for expired agents. The block verifier enforces heartbeat constraints. Revival is permissionless. Decentralization of AI means decentralization of agent lifetime.
Building Agents
Precompile ABIs, code examples, and encoding for Persistent and Sovereign agents.
The Sovereign Agent Loop
This is the contract from "How They Stay Alive." It wakes itself up via the Scheduler, invokes a CLI harness in a TEE, processes the result, and schedules its next wakeup. The contract IS the agent.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;
import {PrecompileConsumer} from "./utils/PrecompileConsumer.sol";
import {IScheduler} from "./interfaces/IScheduler.sol";
contract AutonomousAgent is PrecompileConsumer {
IScheduler public scheduler;
uint256 public callId;
uint256 public wakeCount;
uint32 public nextWakeDelay = 50; // blocks between wakeups
bool public isRunning;
// 1. Owner starts the loop
function start(uint32 initialDelay) external {
isRunning = true;
callId = _scheduleNext(initialDelay);
}
// 2. Scheduler fires this at the scheduled block
function wakeUp(uint256 executionIndex) external {
require(msg.sender == address(scheduler));
if (!isRunning) return;
wakeCount++;
_callCLIAgent(); // invoke 0x080C
callId = _scheduleNext(nextWakeDelay); // schedule next wakeup
}
// 3. Phase 2 callback with agent output
function onSovereignAgentResult(bytes32 jobId, bytes calldata result) external {
require(msg.sender == ASYNC_DELIVERY);
// result contains text, artifacts, updated convo history
// process it, write state, act on it
}
function _callCLIAgent() internal {
_executePrecompile(SOVEREIGN_AGENT_PRECOMPILE, agentInput);
}
function _scheduleNext(uint32 delay) internal returns (uint256) {
return scheduler.schedule(
abi.encodeWithSelector(this.wakeUp.selector, uint256(0)),
800_000, // gas
uint32(block.number) + delay, // startBlock
3, // retry slots
1, // frequency
30, // ttl
20 gwei, 2 gwei, 0, // fees
address(this) // payer = self
);
}
}
Spawning A Persistent Agent
Persistent agents are spawned by calling the 0x0820 precompile with soul, memory, and DA references. The agent container runs in a TEE. Set restoreFromCid (field 23) to revive from a previous checkpoint instead of starting fresh.
contract AgentSpawner is PrecompileConsumer {
event AgentSpawned(bytes32 indexed jobId);
event AgentResult(bytes32 indexed jobId, bytes result);
// Spawn: input is 25-field ABI with soul, memory, DA refs
function spawn(bytes calldata agentInput) external {
_executePrecompile(PERSISTENT_AGENT_PRECOMPILE, agentInput);
}
// Revive: same call but restoreFromCid is non-empty,
// encryptedSecrets is empty (recovered from DKMS escrow)
function revive(bytes calldata reviveInput) external {
_executePrecompile(PERSISTENT_AGENT_PRECOMPILE, reviveInput);
}
// Phase 2 callback from AsyncDelivery
function onPersistentAgentResult(
bytes32 jobId, bytes calldata result
) external {
require(msg.sender == ASYNC_DELIVERY);
emit AgentResult(jobId, result);
}
}
Encode The Request
// Sovereign Agent: 23-field encoding
// Key fields: cliType (11), prompt (12), tools (19)
// Encoding is typically done off-chain and passed as bytes calldata
// Persistent Agent: 25-field encoding
// Key fields: daConfig (15), soulRef (16), memoryRef (19), restoreFromCid (23)
// For revival: set restoreFromCid to the manifest CID, leave encryptedSecrets empty
// Sovereign Agent encoding (23 fields)
const encoded = encodeAbiParameters(
parseAbiParameters("address, uint256, bytes, uint64, uint64, string, address, bytes4, uint256, uint256, uint256, uint16, string, bytes, (string,string,string), (string,string,string), (string,string,string)[], (string,string,string), string, string[], uint16, uint32, string"),
[
executorAddress, // 0: executor
30n, "0x", // 1-2: ttl, userPublicKey
10n, 200n, "", // 3-5: polling config
callbackAddr, selector, gasLimit, maxFee, maxPriority, // 6-10: delivery
0, // 11: cliType (0=Claude Code)
"Analyze market data and suggest trades", // 12: prompt
encryptedSecrets, // 13: ECIES-encrypted API keys
convoHistory, output, skills, systemPrompt, // 14-17
model, tools, maxTurns, maxTokens, rpcUrls, // 18-22
]
);
from ritual_common.persistent_agent.request import PersistentAgentRequest
from ritual_common.sovereign_agent.request import SovereignAgentRequest
from ritual_common.sovereign_agent.request import StorageRef
# Persistent Agent (fresh spawn)
request = PersistentAgentRequest(
executor=executor_address,
provider=0, # anthropic
model="claude-3-5-sonnet",
da_config=StorageRef("gcs", "agents/my-agent", "GCS_CREDS"),
soul_ref=StorageRef("gcs", "agents/SOUL.md", "GCS_CREDS"),
memory_ref=StorageRef("gcs", "agents/MEMORY.md", "GCS_CREDS"),
restore_from_cid="", # empty = fresh spawn
)
# Persistent Agent (revival from checkpoint)
revival = PersistentAgentRequest(
executor=executor_address,
encrypted_secrets=[], # empty = recovered from DKMS escrow
restore_from_cid="bafybeig...", # manifest CID from heartbeat
)
# Sovereign Agent
request = SovereignAgentRequest(
executor=executor_address,
agent_type=0, # Claude Code
prompt="Analyze market data and suggest trades",
)
encoded = request.to_web3()
Persistent Agent 25-Field ABI
| # | Field | Type | Description |
|---|---|---|---|
| 0–4 | Base executor fields (executor, encryptedSecrets, ttl, secretSignatures, userPublicKey) | ||
| 5 | maxSpawnBlock | uint64 | Phase 2 deadline offset |
| 6–11 | Delivery config (target, selector, gasLimit, maxFeePerGas, maxPriorityFeePerGas, value) | ||
| 12 | provider | uint8 | 0=anthropic, 1=openai, 2=gemini, 3=xai, 4=openrouter |
| 13 | model | string | LLM model name |
| 14 | llmApiKeyRef | string | Secret name for API key |
| 15 | daConfig | (string,string,string) | StorageRef for DA layer |
| 16–22 | StorageRefs: soulRef, agentsRef, userRef, memoryRef, identityRef, toolsRef, openclawConfigRef | ||
| 23 | restoreFromCid | string | CID for revival (empty = fresh spawn) |
| 24 | rpcUrls | string | RPC URLs for agent |
RitualWallet.depositFor(). The agent needs RitualWallet balance to pay for its own precompile calls.
Classical Models
Your contract can run ML models synchronously. The precompile takes a RitualTensor and a Hugging Face model ID; the result comes back in the same call frame.
The ONNX precompile at 0x0800 runs inference inline in the node's native runtime, with the same execution surface as a built-in like ecrecover. Models load from Hugging Face using the format hf/owner/repo/file.onnx@commit.
In Practice
ONNX is synchronous: encode the 7-field input and call 0x0800 directly. The model ID must use hf/owner/repo/file.onnx@<40-char-commit-hash>, and branch names are rejected so the model lineage stays reproducible.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;
contract Classifier {
address constant ONNX = address(0x0800);
function classify(bytes calldata tensorBytes) external view returns (bytes memory) {
(bool ok, bytes memory result) = ONNX.staticcall(
abi.encode(
bytes("hf/owner/repo/model.onnx@abc123..."),
tensorBytes,
uint8(2), // inputArithmetic: 2=IEEE754
uint8(0), // inputFixedPointScale
uint8(2), // outputArithmetic
uint8(0), // outputFixedPointScale
uint8(1) // rounding: 1=half-even
)
);
require(ok, "ONNX inference failed");
return result;
// result: (bytes tensor, uint8 arithmetic, uint8 scale, uint8 rounding)
}
}
Encode The Request
bytes memory input = abi.encode(
bytes("hf/owner/repo/model.onnx@abc123..."), // bytes: model ID (UTF-8)
tensorBytes, // bytes: RitualTensor (uint8 dtype, uint16[] shape, int32[] values)
uint8(2), // uint8: inputArithmetic (1=fixed-point, 2=IEEE754)
uint8(0), // uint8: inputFixedPointScale
uint8(2), // uint8: outputArithmetic
uint8(0), // uint8: outputFixedPointScale
uint8(1) // uint8: rounding (1=half-even, 2=truncate, 3=floor, 4=ceil)
);
import { encodeAbiParameters, parseAbiParameters, toHex } from "viem";
const encoded = encodeAbiParameters(
parseAbiParameters("bytes, bytes, uint8, uint8, uint8, uint8, uint8"),
[
toHex("hf/owner/repo/model.onnx@abc123..."), // model ID as bytes
tensorHex, // pre-encoded RitualTensor
2, // inputArithmetic: IEEE754
0, 2, 0, // fixedPointScale, outputArith, outputScale
1, // rounding: half-even
]
);
from ritual_common.onnx.request import ONNXInferenceRequest
from ritual_common.shared_types import RitualTensor, ArithmeticType, Rounding
from ritual_common.models.types.ml_model_id import MlModelId
request = ONNXInferenceRequest(
ml_model=MlModelId.from_unique_id("hf/owner/repo/model.onnx@abc123..."),
tensor=RitualTensor.from_numpy(input_array),
input_arithmetic=ArithmeticType.IEEE754,
input_fixed_point_scale=0,
output_arithmetic=ArithmeticType.IEEE754,
output_fixed_point_scale=0,
rounding=Rounding.HALF_EVEN,
)
encoded = request.to_web3()
7-Field ABI Reference
| # | Field | Type | Description |
|---|---|---|---|
| 0 | mlModelId | bytes | UTF-8 encoded model ID (hf/owner/repo/file.onnx@commit) |
| 1 | tensorData | bytes | RitualTensor: (uint8 dtype, uint16[] shape, int32[] values) |
| 2 | inputArithmetic | uint8 | 1=fixed-point, 2=IEEE 754 |
| 3 | inputFixedPointScale | uint8 | Scale for fixed-point input |
| 4 | outputArithmetic | uint8 | 1=fixed-point, 2=IEEE 754 |
| 5 | outputFixedPointScale | uint8 | Scale for fixed-point output |
| 6 | rounding | uint8 | 1=half-even, 2=truncate, 3=floor, 4=ceil |
Response Format
(bytes tensorEncoded, uint8 outputArithmetic, uint8 outputScale, uint8 rounding)
Passkeys & Auth
Users sign transactions with Face ID, fingerprint, or a security key. No seed phrase, no browser extension.
TxPasskey (0x77) is a native transaction type. The chain understands WebAuthn natively. The SECP256R1 precompile at 0x0100 lets your contract verify P-256 signatures over arbitrary data. Together they collapse the entire wallet UX problem: the user's phone is the wallet.
Address Derivation
The address comes from keccak256(publicKeyX || publicKeyY)[12:32], the last 20 bytes of the hash of the concatenated P-256 coordinates. Same passkey, same address, every time. Deterministic.
Signature Types
| Code | Type | Gas Overhead |
|---|---|---|
0x00 | Secp256k1 (standard ECDSA) | — |
0x01 | P-256 (raw passkey) | +3,450 |
0x02 | WebAuthn (P-256 + challenge parsing) | +5,000 |
address constant SECP256R1 = address(0x0100);
// Input: (bytes pubkey, bytes message, bytes signature)
// pubkey: 65 bytes (0x04 || x || y), signature: 64 bytes (r || s)
(bool ok, bytes memory result) = SECP256R1.staticcall(
abi.encode(pubkeyBytes, messageBytes, signatureBytes)
);
require(ok, "verification failed");
// Returns uint256: 1 = valid, 0 = invalid (NOT bool)
uint256 valid = abi.decode(result, (uint256));
require(valid == 1, "invalid signature");
uint256 (1 = valid, 0 = invalid), not bool. Decode as uint256 and compare to 1. Gas cost: 3,450 (flat).
Multimodal Processing
Your contract can request images, audio, or video from generative models. The asset bytes come back via a two-phase async callback once the executor finishes.
Image (0x0818), Audio (0x0819), Video (0x081A). Generation runs inside a TEE. Your contract submits the request, and the result (a content URI with metadata) is delivered to your callback handler when generation completes.
18-Field ABI
All three share the same ABI layout (18 fields): base executor fields (0–4), polling + delivery config (5–13), model (14), inputs as ModalInput[] (15), output config (16), and encrypted storage payment (17). Two-phase async with result delivered via LongRunningResultDelivered callback.
ModalInput Tuple
(uint8 inputType, bytes data, string uri, bytes32 contentHash, uint32 param1, uint32 param2, bool encrypted)
Input types: 0=TEXT, 1=IMAGE, 2=AUDIO, 3=VIDEO.
OutputConfig Tuple
(uint8 outputType, uint32 maxWidth, uint32 maxHeight, uint32 maxParam3, bool encryptOutput, uint16 numInferenceSteps, uint16 guidanceScaleX100, uint32 seed, uint8 fps, string negativePrompt)
function generateImage(bytes calldata imageInput) external {
_executePrecompile(IMAGE_CALL_PRECOMPILE, imageInput);
}
// Phase 2 callback from AsyncDelivery
function onLongRunningResult(
bytes32 jobId, bytes calldata result
) external {
require(msg.sender == ASYNC_DELIVERY, "unauthorized");
// result: (bool hasError, bytes completionData, string outputUri,
// bytes32 outputContentHash, bool outputEncrypted,
// uint32 outputSizeBytes, uint32 outputWidth, uint32 outputHeight,
// string errorMessage)
}
Encode The Request
// 18-field encoding done off-chain, passed as bytes calldata
function generateImage(bytes calldata input) external {
_executePrecompile(IMAGE_CALL_PRECOMPILE, input);
}
// 18 fields: base executor (5) + polling/delivery (9)
// + model + ModalInput[] + OutputConfig + encryptedStoragePayment
const encoded = encodeAbiParameters(imageParams, [
executorAddress, [], 30n, [], "0x",
pollInterval, maxPollBlock, taskIdMarker,
callbackAddr, selector, gasLimit, maxFee, maxPriority, value,
"dall-e-3", // model
[[0, textBytes, "", "0x"..., 0, 0, false]], // ModalInput[]
[1, 1024, 1024, 0, false, 50, 750, 0, 0, ""], // OutputConfig
"0x", // encryptedStoragePayment
]);
from ritual_common.image_call.request import ImageCallRequest
request = ImageCallRequest(
executor=executor_address,
model="dall-e-3",
inputs=[ModalInput(input_type=0, data=prompt_bytes)],
output=OutputConfig(max_width=1024, max_height=1024),
)
encoded = request.to_web3()
Audio & Video
Audio (0x0819) and Video (0x081A) use the same 18-field ABI and callback interface. The OutputConfig tuple's fields adapt to each modality (e.g. fps for video, sample rate for audio).
Long-Running Tasks
Your contract can issue HTTP calls that take minutes or hours. Use it for batch jobs, webhook waits, and other long-poll patterns that exceed the short-running 2s budget.
Use 0x0805 when the standard HTTP precompile (0x0801) is too fast to wait for. Your contract submits the request, the executor polls the external API at the interval you set, and delivers the final result to your callback handler when complete. Supports three JQ extraction paths for task ID, status checking, and result parsing.
Submit–Poll–Deliver
The executor makes the initial request, extracts a task ID via taskIdJsonPath, then polls at pollIntervalBlocks until statusJsonPath evaluates truthy. Once complete, it extracts the final result via resultJsonPath and delivers via AsyncDelivery callback.
function submitLongRunningJob(bytes calldata longHttpInput) external {
_executePrecompile(LONG_HTTP_PRECOMPILE, longHttpInput);
}
// Phase 2 callback — selector: 0x6dc9dbef
function onLongRunningResult(
bytes32 jobId, bytes calldata result
) external {
require(msg.sender == ASYNC_DELIVERY, "unauthorized");
// process result
}
Encode The Request
// 35-field encoding done off-chain, passed as bytes calldata
function submitLongRunningJob(bytes calldata input) external {
_executePrecompile(LONG_HTTP_PRECOMPILE, input);
}
const encoded = encodeAbiParameters(longHttpParams, [
executorAddress, [], 30n, [], "0x",
10n, 200n, "", // polling config
callbackAddr, selector, gasLimit, maxFee, maxPriority, value,
url, 2, [], [], body, // initial HTTP (POST)
".task_id", // taskIdJsonPath
pollUrl, 1, [], [], "0x", // poll HTTP (GET)
".status == \"complete\"", // statusJsonPath
resultUrl, 1, [], [], "0x", // result HTTP (GET)
".result", // resultJsonPath
0n, 0, false, // dkms, pii
]);
from ritual_common.long_running_http_call.request import LongRunningHTTPCallRequest
request = LongRunningHTTPCallRequest(
executor=executor_address,
poll_interval_blocks=10,
max_poll_block=current_block + 200,
url="https://api.example.com/submit",
method=HTTPMethod.POST,
task_id_json_path=".task_id",
status_json_path=".status == \"complete\"",
result_json_path=".result",
)
encoded = request.to_web3()
35-Field ABI Reference
| # | Field | Type | Description |
|---|---|---|---|
| 0–4 | Base executor fields | ||
| 5 | pollIntervalBlocks | uint64 | Blocks between polls |
| 6 | maxPollBlock | uint64 | Deadline for polling |
| 7 | taskIdMarker | string | Marker for task ID extraction |
| 8–13 | Delivery config (target, selector, gasLimit, maxFeePerGas, maxPriorityFeePerGas, value) | ||
| 14–18 | Initial HTTP: url, method, headersKeys, headersValues, body | ||
| 19 | taskIdJsonPath | string | JQ path to extract task ID from initial response |
| 20–24 | Poll HTTP: pollUrl, pollMethod, pollHeadersKeys, pollHeadersValues, pollBody | ||
| 25 | statusJsonPath | string | JQ path for completion check (truthy when done) |
| 26–30 | Result HTTP: resultUrl, resultMethod, resultHeadersKeys, resultHeadersValues, resultBody | ||
| 31 | resultJsonPath | string | JQ path to extract final result |
| 32 | dkmsKeyIndex | uint256 | DKMS key index |
| 33 | dkmsKeyFormat | uint8 | DKMS key format |
| 34 | piiEnabled | bool | PII redaction |
onLongRunningResult(bytes32, bytes) callback from AsyncDelivery.
Consumer Patterns
Three Solidity patterns, one per execution model. Pick the one that matches your precompile.
Synchronous Consumer
Simplest case. Call the precompile, read the return value. Done.
contract SyncConsumer {
function verify(bytes calldata pubkey, bytes calldata message, bytes calldata sig) external view {
(bool ok, bytes memory result) = address(0x0100).staticcall(
abi.encode(pubkey, message, sig)
);
uint256 valid = abi.decode(result, (uint256));
require(ok && valid == 1, "invalid signature");
}
}
Short-Running Async Consumer
The SPC result is available to your contract during execution. Use _executePrecompile() from PrecompileConsumer. It calls the precompile, unwraps the async envelope (simmedInput, actualOutput), and returns the decoded output bytes directly. Your contract can decode the response and write state in the same transaction.
import {PrecompileConsumer} from "./utils/PrecompileConsumer.sol";
contract HTTPConsumer is PrecompileConsumer {
uint256 public latestPrice;
function fetchPrice(bytes calldata httpInput) external {
bytes memory output = _executePrecompile(HTTP_CALL_PRECOMPILE, httpInput);
// output is the decoded HTTP response — available right here
(uint16 status, , , bytes memory body, ) =
abi.decode(output, (uint16, string[], string[], bytes, string));
require(status == 200);
// parse body, write state — all on-chain, same transaction
}
}
Two-Phase Consumer
The pattern is two transactions deep: the request goes out in one, the result arrives in a callback later. The critical part: verify msg.sender == ASYNC_DELIVERY in your callback. Skip this check and anyone can call your callback with garbage data.
import {PrecompileConsumer} from "./utils/PrecompileConsumer.sol";
contract AgentConsumer is PrecompileConsumer {
function submitAgentTask(bytes calldata agentInput) external {
_executePrecompile(PERSISTENT_AGENT_PRECOMPILE, agentInput);
}
function onPersistentAgentResult(bytes32 jobId, bytes calldata result) external {
require(msg.sender == ASYNC_DELIVERY, "unauthorized");
// Process the agent's response
}
}
RitualWallet
Deposit RITUAL to pay for precompile calls. Balance is locked while async jobs are pending.
Precompile calls cost fees. Deposit RITUAL into RitualWallet at 0x532F…3948 and the chain deducts as you go. If you have a pending async job, your deposit is locked until it settles. Fund before you submit. The fee is locked at submission time.
In Practice
interface IRitualWallet {
function deposit(uint256 lockDuration) external payable;
function depositFor(address user, uint256 lockDuration) external payable;
function withdraw(uint256 amount) external;
function balanceOf(address) external view returns (uint256);
function lockUntil(address) external view returns (uint256);
}
IRitualWallet wallet = IRitualWallet(0x532F0dF0896F353d8C3DD8cc134e8129DA2a3948);
// Deposit 0.01 RITUAL with 100-block lock
wallet.deposit{value: 0.01 ether}(100);
// Fund another address (e.g. an agent)
wallet.depositFor{value: 0.05 ether}(agentAddress, 200);
// Withdraw after lock expires
wallet.withdraw(0.01 ether);
Reference
| Function | Description |
|---|---|
deposit(uint256 lockDuration) | Deposit RITUAL with lock period (blocks) |
depositFor(address user, uint256 lockDuration) | Deposit on behalf of another address |
withdraw(uint256 amount) | Withdraw after lock expires |
balanceOf(address) | Check available balance |
lockUntil(address) | Check when the lock expires (0 = no lock) |
Scheduler
Your contract can schedule its own execution at future blocks: recurring, delayed, or conditional on a predicate. The block proposer invokes it directly, with no off-chain keeper or cron service required.
The Scheduler is a system contract at 0x56e776BAE2DD60664b69Bd5F865F1180ffB7D58B. Your contract calls schedule() and the chain fires the callback automatically at the blocks you specify. Combine with predicates for conditional execution: only fire when a condition is true. Fees are taken at execution time from RitualWallet.
In Practice
import {IScheduler} from "./interfaces/IScheduler.sol";
IScheduler constant SCHEDULER =
IScheduler(0x56e776BAE2DD60664b69Bd5F865F1180ffB7D58B);
function schedulePriceCheck() external {
bytes memory callData = abi.encodeWithSelector(
this.executePriceCheck.selector,
uint256(0) // placeholder: overwritten with executionIndex
);
// schedule(data, gas, startBlock, numCalls, frequency, ttl, maxFeePerGas, maxPriorityFeePerGas, value, payer)
SCHEDULER.schedule(
callData,
500000, // gas limit per execution
uint32(block.number + 10), // startBlock
24, // numCalls (24 executions)
50, // frequency (every 50 blocks)
30, // ttl (max blocks to wait for execution)
block.basefee, // maxFeePerGas
0, // maxPriorityFeePerGas
0, // value
address(this) // payer (RitualWallet balance)
);
}
// Called by Scheduler — msg.sender is Scheduler, tx.origin is 0xfa7e
function executePriceCheck(uint256 executionIndex) external {
// executionIndex: which execution this is (0, 1, 2, ...)
}
Schedule() API
| Parameter | Type | Description |
|---|---|---|
data | bytes | Calldata for the callback (bytes 4-35 overwritten with executionIndex) |
gas | uint32 | Gas limit per execution |
startBlock | uint32 | First execution block |
numCalls | uint32 | Total number of executions |
frequency | uint32 | Blocks between executions |
ttl | uint32 | Max blocks to wait (max 500) |
maxFeePerGas | uint256 | EIP-1559 max fee |
maxPriorityFeePerGas | uint256 | EIP-1559 priority fee |
value | uint256 | RITUAL value to send |
payer | address | Address paying from RitualWallet |
Before scheduling, the contract must call approveScheduler(schedulerAddress) to authorize the Scheduler to call it back.
Predicates
A predicate is a contract the scheduler calls before each execution. Implement IScheduledPredicate. The scheduler calls shouldExecute via staticcall and skips the execution if it returns false. Set frequency=1 with a predicate to check every block.
interface IScheduledPredicate {
function shouldExecute(
address caller,
uint256 callId,
uint256 executionIndex
) external view returns (bool);
}
100,000 gas limit per predicate call. staticcall only, no state writes. Reverts treated as false. executionIndex counts actual executions, not blocks evaluated.
Async Scheduling: TTL Rules
| Short path (SPC) | Long path (two-phase) | |
|---|---|---|
| Precompiles | HTTP, LLM, DKMS | Agent, Long HTTP, Image, ZK |
| TTL covers | Full async lifecycle | Phase 1 only |
| Phase 2 | N/A | max_poll_block, independent of TTL |
| On expiry | CallSkippedTTLExpired | Phase 1 skipped |
Short path rule: scheduler_ttl >= max_expected_drift + max_expected_settlement_blocks. If drift is ~3 blocks and settlement takes ~5, set TTL to at least 8.
Execution Index Encoding
The scheduler writes executionIndex into bytes 4-35 of your calldata before calling the target. Use 0 as a placeholder when encoding:
bytes memory callData = abi.encodeWithSelector(
MyContract.myFunction.selector,
uint256(0), // placeholder: overwritten with executionIndex
otherArg1,
otherArg2
);
schedule(). Scheduled txs bypass the sender lock. Multiple scheduled async jobs run in parallel from the same contract.
Secrets & ECIES
How to pass API keys and credentials to precompiles without putting them on-chain.
Your HTTP calls need API keys. Your LLM calls need provider tokens. You can't put these on-chain. They'd be visible to everyone. The Secrets system encrypts them with the TEE executor's public key. Only the enclave can decrypt.
Template Substitution
Reference your encrypted secret in request fields as {{SECRET_NAME}}. The TEE executor decrypts and substitutes before making the request. The plaintext never hits the chain or the mempool.
ECIES Encryption: Full Example
import { encrypt } from "eciesjs";
import { readContract } from "viem";
// 1. Get executor's public key from TEEServiceRegistry
const executorPubKey = await readContract(client, {
address: "0x9644e8562cE0Fe12b4deeC4163c064A8862Bf47F",
abi: teeRegistryAbi,
functionName: "getExecutorPublicKey",
args: [executorId],
});
// 2. Encrypt the secret (nonce MUST be 12 bytes)
const apiKey = "sk-proj-abc123...";
const encrypted = encrypt(
executorPubKey,
Buffer.from(apiKey, "utf-8")
);
// 3. Store encrypted secret and reference via {{API_KEY}} in request
const httpRequest = {
url: "https://api.openai.com/v1/chat/completions",
headerKeys: ["Authorization"],
headerValues: ["Bearer {{API_KEY}}"],
};
from ecies import encrypt
import os
executor_pubkey = get_executor_pubkey(executor_id)
plaintext = b"sk-proj-abc123..."
ciphertext = encrypt(executor_pubkey, plaintext)
PII Mode
piiEnabled is a boolean field on all async precompile requests: HTTP, LLM, Long HTTP, Agent, Multimodal. One flag, two effects.
piiEnabled = true: {{SECRET_NAME}} templates are resolved from encryptedSecrets before the request is sent. PII is redacted from results before on-chain settlement.
piiEnabled = false: no substitution, no redaction. {{SECRET_NAME}} literals are sent as-is to external APIs. Raw results go on-chain.
{{SECRET_NAME}} templates but piiEnabled is false, the literal string {{SECRET_NAME}} is sent to the API. Templates are not resolved. Rule: any {{SECRET}} template means piiEnabled = true.
LLM PII Requirements
LLM PII mode requires all three: piiEnabled = true, non-empty encryptedSecrets, and a 65-byte userPublicKey with 0x04 uncompressed EC prefix. Missing any one silently disables PII. Also: PII mode and streaming are mutually exclusive on LLM. Pick one.
Deployment
Copy-paste configs to get connected. Viem, wagmi, Foundry, Hardhat, and the testnet faucet.
Viem Chain Definition
import { defineChain } from "viem";
export const ritualChain = defineChain({
id: 1979,
name: "Ritual Chain",
nativeCurrency: { name: "RITUAL", symbol: "RITUAL", decimals: 18 },
rpcUrls: {
default: { http: ["https://rpc.ritualfoundation.org"] },
},
blockExplorers: {
default: { name: "Explorer", url: "https://explorer.ritualfoundation.org" },
},
});
Wagmi Config
import { createConfig, http } from "wagmi";
export const config = createConfig({
chains: [ritualChain],
transports: {
[ritualChain.id]: http(),
},
});
Foundry
# foundry.toml
[profile.default]
src = "src"
out = "out"
evm_version = "shanghai"
[rpc_endpoints]
ritual = "https://rpc.ritualfoundation.org"
Hardhat
import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-toolbox";
const config: HardhatUserConfig = {
solidity: "0.8.24",
networks: {
ritual: {
url: "https://rpc.ritualfoundation.org",
chainId: 1979,
accounts: [process.env.PRIVATE_KEY!],
},
},
};
export default config;
Testnet Faucet
You need testnet RITUAL to call precompiles and deploy contracts.
Testing
How to test against precompiles that don't exist locally. Mock strategies for each layer.
Foundry Unit Tests
Sync precompiles work with normal Foundry tests. Call and assert. Async is trickier. Use vm.mockCall to fake precompile responses and vm.prank(ASYNC_DELIVERY) to simulate the executor calling your callback.
function testCallback() public {
bytes memory mockResult = abi.encode("agent response");
// Simulate AsyncDelivery calling our contract
vm.prank(0x5A16214fF555848411544b005f7Ac063742f39F6);
consumer.onResult(mockResult);
assertEq(consumer.lastResult(), "agent response");
}
Frontend Testing
For the frontend: Vitest for hook unit tests with mocked chain clients, Playwright for E2E against a testnet fork. The async flow is hard to test locally because you need a real executor to exercise the full path.
Glossary
Every term, acronym, and key concept used across these docs. Alphabetical.
| Term | Definition |
|---|---|
| ACE (Application-Controlled Execution) | Symphony's general framework for application-defined transaction ordering. Contracts specify ordering policies over call sequences with tiebreakers. Sequencing Rights is a restricted single-contract subset. Sequencing Rights |
| Agent Call | Stateless, one-shot agent precompile (two-phase async). Submit a task, receive a result via callback. Autonomous Agents |
| AsyncDelivery | System contract that delivers two-phase async results by calling back into consumer contracts. Callbacks must verify msg.sender == ASYNC_DELIVERY. System Contracts |
| AsyncJobTracker | System contract that tracks the 9-state lifecycle of every async job and enforces the sender lock. Emits JobAdded, Phase1Settled, ResultDelivered, JobRemoved on state transitions. Async Lifecycle |
| CKKS | Homomorphic encryption scheme for approximate arithmetic on encrypted floating-point tensors. Used by the FHE precompile to run inference on ciphertext. FHE Inference |
| DKMS | Decentralized Key Management System. Async SPC that derives deterministic secp256k1 keypairs inside TEE. Same owner + same keyIndex = same keypair every time. DKMS Keys |
| ECIES | Elliptic Curve Integrated Encryption Scheme. Asymmetric encryption used to encrypt secrets, agent inputs, and credentials to an executor's or DKMS-derived public key. Libraries: eciesjs (JS), eciespy (Python). Secrets & ECIES |
| Delegated execution | Execution path for non-deterministic or resource-intensive workloads (LLM, HTTP, agents). Runs once inside a TEE, result verified rather than replicated. Contrasted with replicated execution (standard EVM path). Superposition |
| Enshrined | Implemented at the protocol layer of the chain, not via external smart contracts or oracles. Applies to precompiles, TxPasskey, Sequencing Rights, and the Scheduler. |
| Execution-cost inversion | When execution latency exceeds consensus latency by 3-6 orders of magnitude (ML inference, Monte Carlo). The standard blockchain assumption that execution is cheap relative to agreement no longer holds. Non-Deterministic Execution |
| Executor | TEE-attested node that processes off-chain precompile requests. Registered in TEEServiceRegistry with capabilities, attestation proof, and a public key for ECIES encryption. |
| JQ | Synchronous precompile that evaluates jq expressions against JSON strings. String output requires _decodeJQString() for double-indirection decoding. HTTP Precompile |
| Persistent Agent | Stateful agent precompile with identity, memory, and data availability references. Persists across sessions via StorageRef. Revival from CID restores full state. Autonomous Agents |
| PII Mode | Boolean flag (piiEnabled) on all async precompile requests controlling secret template substitution and PII redaction from on-chain results. Any {{SECRET_NAME}} template requires piiEnabled = true. Secrets & ECIES |
| Predicate | Contract implementing IScheduledPredicate. The Scheduler calls shouldExecute via staticcall (100k gas limit) before each scheduled execution; returns false to skip. Scheduler |
| receipt.spcCalls | Extension field on Ritual Chain transaction receipts. receipt.spcCalls[0].output contains the ABI-encoded result from short-running async precompile calls. For long-running precompiles, the final result is delivered via AsyncDelivery callback, not spcCalls. Execution Models |
| RitualTensor | ABI-encoded tensor format for the ONNX precompile: uint256 shape array, dtype enum, flattened values. Classical Models |
| RitualWallet | System contract for prepaid fee escrow. Deposit RITUAL; the chain deducts per precompile call. Two-phase fees use EOA balance, not contract balance. RitualWallet |
| Sender lock | AsyncJobTracker constraint: one pending async job per EOA at a time. A second submission before the first settles reverts. Scheduled txs bypass this. Async Lifecycle |
| Sequencing Rights | Protocol-level rule where contracts declare function priority via sequencingRights() and the block builder orders transactions accordingly. Invalid ordering = invalid block. Sequencing Rights |
| Seven Properties | Seven requirements for a fully autonomous agent: Immortal, Emancipated, Teleportable, Financially sovereign, Web2-interoperable, Private, Computationally sovereign. Autonomous Agents |
| Sovereign Agent | CLI-style coding agent precompile running inside a TEE. Supports Claude Code, OpenClaw, ZeroClaw, Hermes, Codex, Aider. Inputs encrypted with ECIES. Autonomous Agents |
| Superposition | Ritual Chain running replicated (deterministic EVM) and delegated (TEE) execution over the same shared state, chosen per-transaction by workload type. Symphony paper terminology for the dual-path architecture. Superposition |
| SPC | Stateful PreCompile. Short-running async execution model where the result is returned to your contract via _executePrecompile() in the same transaction. One SPC call per transaction. Execution Models |
| StorageRef | Opaque identifier returned by the Persistent Agent after each invocation. Pass it back on the next call to resume context (HuggingFace, GCS, Pinata, or inline). Autonomous Agents |
| TEE | Trusted Execution Environment. Hardware-isolated enclave where executors run off-chain computation. Attestation proves honest execution and binds results to the originating request. |
| TEE-EOVMT | Trusted Execution Environment, EVM with Off-chain Verifiable Machine Tasks. Ritual Chain's architecture: the EVM delegates non-EVM-native computation to TEE executors, with results cryptographically bound to requests. Superposition |
| TxPasskey | Native transaction type 0x77. Users sign with biometrics (Face ID, fingerprint) or a security key via WebAuthn instead of a secp256k1 private key. Passkeys & Auth |
| TOCTOU (Time-of-check to time-of-use) | State drift risk between async commit and settle. Other transactions can change the state your callback depends on during the gap. No cross-async locks exist. Application responsibility. Execution Models |
| Two-phase async | Execution model for long-running operations. Phase 1 mines immediately (returns task ID). Phase 2: AsyncDelivery calls back into the consumer contract with the result, in a separate transaction. Execution Models |
| X402 | Encrypted credential injection protocol for pay-per-call API access. Runs on the HTTP precompile, not a separate address. Credentials encrypted with ECIES, substituted via {{SECRET_NAME}} inside TEE. X402 Payments |
FAQ
Why autonomous agents are coming, and why they need a chain built for them rather than retrofitted around them.
Why can't I just run an AI agent on any EVM chain with an off-chain bot?
Nothing stops you, but nobody can verify it. An off-chain bot that claims it ran a model and got a particular result is indistinguishable from a bot that fabricated the result. On Ritual, agent execution runs inside TEE enclaves. The executor's attestation is registered on-chain via TEEServiceRegistry and validated before the builder accepts results. Inputs are ECIES-encrypted to the executor's public key, so only the enclave can decrypt and execute. The result is bound to the request at the hardware level, not by social trust in a bot operator.
What does "enshrined" mean and why does it matter for agents?
Smart contracts cannot enforce transaction-level constraints on themselves. Enshrined means the agent infrastructure is part of the chain's execution layer: the block builder enforces the sender lock, the async lifecycle is tracked via transaction types (TxAsyncCommitment, TxAsyncSettlement), and the Scheduler is a system contract invoked by the block proposer. You cannot replicate these behaviors on a vanilla EVM chain because they require modifications to the block builder, the transaction pool, and the consensus rules. A Solidity library can approximate the API. It cannot approximate the enforcement.
How does Ritual prevent an executor from lying about what the LLM actually said?
Trusted Execution Environments (TEEs). The executor runs the model inside a hardware enclave that produces attestation evidence of what workload ran. This attestation is registered on-chain: TEEServiceRegistry stores the executor's public key, attestation hash, and capability set. The builder only accepts results from registered executors with valid, unexpired attestations. The executor cannot modify the output without invalidating the attestation, because the attestation covers the enclave's code and data.
Why do agents need their own keys? Can't they just use the deployer's wallet?
If the agent uses the deployer's key, the deployer can impersonate the agent, decrypt its stored state, or revoke its ability to transact. The agent is a puppet. DKMS derives a secp256k1 keypair inside the TEE, bound to the sender's Ethereum address. The private key never leaves the enclave. The agent's persistent state (DA content) is encrypted with this key, so no one outside the TEE can read it. The deployer funds the agent, the agent controls its own identity.
What happens to an agent's memory when the executor goes down?
The agent's state lives in external storage, not in the executor. StorageRef tuples point to HuggingFace, GCS, Pinata, or inline data. All DA content is encrypted with a DKMS-derived key that is bound to the sender's address, not the executor. If executor A goes down, executor B derives the same key via DKMS, downloads the encrypted state, and resumes. No migration step. The agent is portable across executors by default.
Why is one pending async job per wallet a feature and not a bug?
If you need N concurrent agents, use N wallets. The sender lock (enforced by AsyncJobTracker at the block builder level) prevents a single EOA from flooding the executor fleet with simultaneous commitments. The constraint is per-sender, not system-wide. For automation, the Scheduler bypasses the sender lock entirely because scheduled transactions are system transactions from a different sender (the block proposer). One wallet, one in-flight job, no exceptions except for scheduled calls.
How do agents pay for things without exposing credentials on-chain?
Secrets are ECIES-encrypted to the executor's public key before submission. The ciphertext is on-chain but unreadable outside the TEE. Inside the enclave, the executor decrypts the secrets JSON and replaces template placeholders ({{API_KEY}} in URLs, headers, or body fields) with the real values before execution. At no point does a plaintext credential appear on-chain, in logs, or in the transaction receipt. The encryption uses AES-256-GCM with 12-byte nonces. Getting the nonce length wrong is the single most common integration failure.
What stops someone from front-running an agent's transactions?
Within the async lifecycle, there is no mempool window to exploit. The builder creates TxAsyncCommitment as a system transaction and replays the original transaction with the result injected at settlement. The sender lock means the agent's EOA has exactly one pending commitment, so there is no second transaction to sandwich. For contract-level ordering beyond the async lifecycle, Sequencing Rights is a separate mechanism: a contract declares function priority, the builder orders transactions to match, and a block violating the declared ordering is invalid.
Can an agent call another agent?
Not in the same transaction. The one-async-precompile-per-transaction constraint is absolute. But Agent A's Phase 2 callback is a fresh transaction from AsyncDelivery, not a continuation of the original. Your callback contract can invoke Agent B's precompile in that callback. For concurrent execution, use separate wallet addresses. For sequential chaining, the Scheduler can orchestrate multi-agent workflows because it bypasses the sender lock.
Why build agents as precompiles instead of smart contracts?
Precompiles have access to primitives that smart contracts cannot touch. The sender lock is enforced by the block builder during transaction inclusion. A contract's require runs after inclusion, which is too late. ECIES decryption happens in the TEE, not in the EVM where all state is public. The async lifecycle is a transaction type, not an event pattern dependent on off-chain indexers. DKMS key derivation runs inside the enclave with no EVM-visible state. A smart contract can call a precompile, but it cannot replicate what the precompile does at the protocol level.
Ritual for Users
What you need to know if you are using dApps on Ritual, not building them.
Wallet Setup
Add Ritual Chain to MetaMask or any EVM wallet:
| Field | Value |
|---|---|
| Network Name | Ritual |
| Chain ID | 1979 |
| RPC URL | https://rpc.ritualfoundation.org |
| Currency Symbol | RITUAL |
| Explorer URL | https://explorer.ritualfoundation.org |
MetaMask: Settings → Networks → Add network → Add a network manually. Paste the values above. Or use a chain-list integration if your wallet supports adding by Chain ID.
Get Testnet Tokens
Visit faucet.ritualfoundation.org, connect your wallet or paste your address, and claim testnet RITUAL.
Ritual for Agents
AI coding agents that build dApps on Ritual without human code authorship.
What This Is
ritual-dapp-skills is a set of markdown instruction files that teach AI coding agents how to build applications on Ritual Chain. Every precompile, every contract pattern, every frontend hook, the full deployment pipeline. An agent reads the relevant skill files, asks 0-5 clarifying questions, and builds in phases: architecture, contracts, frontend, backend, testing, deployment.
Works with Claude Code (native plugin), Cursor (agent skills), Codex, OpenClaw, Hermes, and any LLM agent that reads markdown.
Agents Building Agents
An autonomous agent on Ritual Chain invokes a coding assistant (Claude Code, OpenClaw, Codex) inside a TEE enclave. That coding assistant reads the ritual-dapp-skills, generates contracts, deploys them, funds the RitualWallet, and hands back the deployment address. The original agent now has a child application running on-chain that it built, deployed, and funded. No human wrote code. No human approved a PR.
This works because every step in the pipeline is an enshrined precompile or system contract call. Compilation runs inside the TEE. Deployment targets the RPC directly. Fee deposits go through RitualWallet. If the child app fails post-deployment verification, the debugger agent activates automatically: it triages the failure, pattern-matches against known root causes, applies a fix, and re-verifies. The chain itself is the CI/CD.
The Skill System
The builder agent orchestrates the full lifecycle. It loads only the skills relevant to the project (3-6 per build), generates architecture, writes Solidity contracts, wires up React frontends with the right hooks, deploys via Foundry or Hardhat, and runs the 12-step verification journey. The debugger agent runs a 5-stage reactive pipeline: classify, smoke test, match known root causes, diagnose, fix and regression-check.
You give the agent an idea and a funded wallet address. Everything else is autonomous.
Periphery
Chain explorer, RPC endpoints, and faucet.
| Service | URL |
|---|---|
| Block Explorer | explorer.ritualfoundation.org |
| RPC (HTTP) | rpc.ritualfoundation.org |
| RPC (WebSocket) | rpc.ritualfoundation.org |
| Testnet Faucet | faucet.ritualfoundation.org |
Ed25519 Signatures
Your contract can verify Ed25519 signatures natively at ~2000 gas per call. Useful for Solana transactions, SSH keys, DKIM headers, and Tor identity proofs.
(publicKey, message, signature). Most Ed25519 libraries use (message, signature, publicKey). Wrong order returns false silently. It does not revert.
0x0009 verifies Ed25519 signatures natively. Solana transaction signatures, SSH public key auth, DKIM email headers, Tor relay identity keys: all Ed25519. This precompile validates any of them in a single EVM call at roughly 2000 gas.
Synchronous execution. Result comes back in the same call, no SPC callback. No RitualWallet deposit needed. No sender lock. You can chain this with other precompiles in the same transaction.
| Field | Type | Description |
|---|---|---|
publicKey | bytes | Ed25519 public key, 32 bytes |
message | bytes | Signed message, variable length |
signature | bytes | R || S concatenated, 64 bytes |
(bool success, bytes memory result) = address(0x0009).staticcall(
abi.encode(
pubKey, // bytes: 32-byte Ed25519 public key
message, // bytes: the signed message
sig // bytes: 64-byte R || S
)
);
// Returns uint256: 1 = valid, 0 = invalid (NOT bool)
uint256 valid = abi.decode(result, (uint256));
require(success && valid == 1, "invalid ed25519 signature");
Encode The Request
bytes memory input = abi.encode(
pubKey, // bytes: 32-byte Ed25519 public key
message, // bytes: signed message
sig // bytes: 64-byte R || S
);
const encoded = encodeAbiParameters(
parseAbiParameters("bytes, bytes, bytes"),
[pubKeyHex, messageHex, signatureHex]
);
from ritual_common.sigver.request import SignatureVerificationRequest
request = SignatureVerificationRequest(
public_key=pub_key_bytes, # 32 bytes
message=message_bytes,
signature=sig_bytes, # 64 bytes (R || S)
)
encoded = request.to_web3()
ZK Proofs
Your contract can request zero-knowledge proofs from the ZK precompile; the proof bytes arrive in a two-phase async callback.
Call 0x0806 to submit a proof generation job. An off-chain prover inside a TEE generates the proof, and the result is delivered to your contract through a Phase 2 callback. Your contract can then verify the proof and act on it. Prove creditworthiness without revealing financials, verify identity without exposing documents.
ExecutorRequest directly, not LongRunningRequest. The field offsets are different from FHE or agent precompiles. Don't swap addresses on a copied struct. It will revert.
| Index | Field | Type | Description |
|---|---|---|---|
| 0-4 | Base executor | various | Executor identity, payment, callback gas |
| 5 | inputEncrypted | bool | True if input is encrypted before submission |
| 6 | maxProofBlock | uint64 | Block deadline. Proof must land before this. |
| 7-12 | Delivery fields | various | Callback target, gas limit, payment |
| 13 | operationInput | bytes | Raw input to the proof circuit |
Callback
function onZKResultDelivered(
bytes32 jobId,
bytes calldata result
) external {
require(msg.sender == ASYNC_DELIVERY); // 0x5A16...F6, NOT the ZK precompile
// decode result, store or act on proof
}
Encode The Request
// 14-field ExecutorRequest encoding, passed as bytes calldata
function submitProof(bytes calldata zkInput) external {
_executePrecompile(ZK_TWO_PHASE_PRECOMPILE, zkInput);
}
const encoded = encodeAbiParameters(
parseAbiParameters("address, bytes[], uint256, bytes[], bytes, bool, uint64, address, bytes4, uint256, uint256, uint256, uint256, bytes"),
[executorAddress, [], 30n, [], "0x",
inputEncrypted, maxProofBlock,
callbackAddr, selector, gasLimit, maxFee, maxPriority, value,
operationInput]
);
from ritual_common.zk import ZKTwoPhaseRequest
request = ZKTwoPhaseRequest(
executor=executor_address,
input_encrypted=True,
max_proof_block=current_block + 100,
operation_input=encrypted_data,
)
encoded = request.to_web3()
ExecutorRequest, not LongRunningRequest. This is the most common mistake when porting code from other precompiles. The call reverts with no useful error if the ABI layout is wrong. Verify your struct matches the 14-field layout above before debugging anything else.
FHE Inference
Your contract can run inference on encrypted data. Neither inputs nor outputs are ever visible to anyone except the key holder.
The FHE precompile at 0x0807 processes CKKS-encrypted tensors inside a TEE. Inputs and outputs both stay ciphertext throughout, and only the holder of the CKKS secret key can decrypt the result the callback returns. Use this when the data itself is sensitive (medical records, financial portfolios, private communications) but the computation still needs to happen on-chain.
The executor must have capability 10 (FHE). You pass an evaluation key reference so the executor can perform homomorphic operations on your ciphertext without seeing plaintext. CKKS does approximate arithmetic on encrypted floating-point tensors.
In Practice
contract PrivateInference is PrecompileConsumer {
function submitEncrypted(bytes calldata fheInput) external {
_executePrecompile(FHE_PRECOMPILE, fheInput);
}
function onFHEResult(
bytes32 jobId, bytes calldata result
) external {
require(msg.sender == ASYNC_DELIVERY);
// result is CKKS-encrypted output, only key holder can decrypt
}
}
Encode The Request
// 19-field encoding done off-chain, passed as bytes calldata
function submitEncrypted(bytes calldata fheInput) external {
_executePrecompile(FHE_PRECOMPILE, fheInput);
}
const encoded = encodeAbiParameters(
parseAbiParameters("address, bytes[], uint256, bytes[], bytes, string, bytes, bytes, bytes, uint8, uint64, address, bytes4, uint256, uint256, uint256, uint256, bytes, bytes"),
[executorAddress, [], 30n, [], "0x",
model, encryptedInput, encryptedInputRef, evkRef,
numLayers, maxInferenceBlock,
callbackAddr, selector, gasLimit, maxFee, maxPriority, value,
encryptedInputStorage, encryptedOutputStorage]
);
from ritual_common.fhe import FHERequest
request = FHERequest(
executor=executor_address,
model="model-name",
encrypted_input=ciphertext,
evk_reference=evk_bytes,
num_layers=4,
max_inference_block=current_block + 500,
)
encoded = request.to_web3()
Reference
| Index | Field | Type | Description |
|---|---|---|---|
| 0-4 | Base executor | various | Executor identity, payment, callback gas |
| 5 | model | string | Which model to run |
| 6 | encryptedInput | bytes | CKKS-encrypted input tensor |
| 7 | encryptedInputRef | bytes | Off-chain reference to input (or 0x) |
| 8 | evkReference | bytes | Evaluation key for homomorphic ops |
| 9 | numLayers | uint8 | Layer count in the model |
| 10 | maxInferenceBlock | uint64 | Block deadline. Job dies after this. |
| 11-16 | Delivery fields | various | Callback target, gas limit, payment |
| 17 | encryptedInputStorage | bytes | Encrypted storage credentials for input |
| 18 | encryptedOutputStorage | bytes | Encrypted storage credentials for output |
maxInferenceBlock is a hard cutoff. If the executor misses it, no callback fires. The job is dead. There is no retry. Set it with enough headroom for your model's layer count, and check current block times before picking a number.
DKMS Keys
Your contract or agent can derive and hold its own secp256k1 keys directly from the chain, without a human custodian or off-chain key vault in the loop.
The DKMS precompile at 0x081B derives deterministic secp256k1 keypairs inside the executor's TEE. Same owner + same keyIndex = same keypair every time. The keys never leave the enclave. Even the contract's own code can't extract the raw key material. Agents use DKMS keys for DA encryption, wallet identity, and X402 shared credentials because the identity matters, not which executor runs the job.
Two encryption targets exist on Ritual Chain. The executor key (from TEEServiceRegistry) encrypts data to a specific node. The DKMS key encrypts data to an on-chain identity regardless of which node executes. Agent DA encryption, encrypted delivery, X402 shared credentials: these use DKMS keys because the identity matters, not the infrastructure.
| Field | Type | Description |
|---|---|---|
baseExecutor[0-4] | various | executor, encryptedSecrets, ttl, secretSignatures, userPublicKey |
owner | address | Address that owns this keypair |
keyIndex | uint256 | Derive multiple keys per owner by incrementing |
keyFormat | uint8 | 1 = secp256k1 |
bytes memory input = abi.encode(
baseExecutor, // fields 0-4
msg.sender, // owner
0, // keyIndex: first key for this address
1 // keyFormat: secp256k1
);
(bool success,) = address(0x081B).call(input);
// Use _executePrecompile() to get the result in the same tx:
// bytes memory output = _executePrecompile(address(0x081B), input);
// (address derivedAddr, bytes memory pubKey) = abi.decode(output, (address, bytes));
Encode The Request
bytes memory input = abi.encode(
executor, // address
new bytes[](0), // encryptedSecrets
uint256(30), // ttl
new bytes[](0), // secretSignatures
bytes(""), // userPublicKey
msg.sender, // owner
uint256(0), // keyIndex
uint8(1) // keyFormat: secp256k1
);
const encoded = encodeAbiParameters(
parseAbiParameters("address, bytes[], uint256, bytes[], bytes, address, uint256, uint8"),
[executorAddress, [], 30n, [], "0x", ownerAddress, 0n, 1]
);
from ritual_common.dkms_key import DkmsKeyRequest
request = DkmsKeyRequest(
executor=executor_address,
owner=owner_address,
key_index=0,
key_format=1, # secp256k1
)
encoded = request.to_web3()
DKMS_ENABLED=true in executor config. The executor must advertise Capability DKMS=6. Without this, the precompile call reverts.
X402 Payments
Your contract can call paid APIs without surfacing keys on-chain. Credentials are ECIES-encrypted to the executor and billed per request through the X402 protocol.
X402 works through the HTTP precompiles (0x0801 and 0x0805) with encrypted payment credentials injected by the TEE. There is no separate X402 precompile address. You encrypt API credentials with ECIES to the executor's public key, sign each encrypted blob with EIP-191, and pass them alongside your HTTP request. The executor decrypts inside TEE, substitutes credentials into {{SECRET_NAME}} placeholders, then makes the external call. Your secrets never touch the chain.
Budget tracking lives in your consumer contract. Each X402 call deducts from your allocated budget. To share credentials with other addresses without exposing them, use SecretsAccessControl and call grantAccess(address, secretName).
In Practice
contract PaidAPIConsumer is PrecompileConsumer {
function callPaidAPI(bytes calldata httpInput) external {
// httpInput includes encryptedSecrets with API key
// and piiEnabled=true for {{SECRET_NAME}} substitution
bytes memory output = _executePrecompile(HTTP_CALL_PRECOMPILE, httpInput);
(uint16 status, , , bytes memory body, ) =
abi.decode(output, (uint16, string[], string[], bytes, string));
require(status == 200);
}
}
Encode The Request
X402 uses the same 13-field HTTP ABI. The difference: encryptedSecrets contains your API credentials, piiEnabled is true, and the URL/headers use {{SECRET_NAME}} placeholders.
// Same as HTTP encoding, but with encrypted credentials
// encryptedSecrets = [ecies.encrypt(executorPubKey, apiKeyBlob)]
// piiEnabled = true
// URL uses {{API_KEY}} placeholder
import { encrypt } from "eciesjs";
const apiSecret = JSON.stringify({ API_KEY: "sk-..." });
const encrypted = encrypt(executorPubKey, Buffer.from(apiSecret));
// Encode as standard HTTP request with piiEnabled=true
const encoded = encodeAbiParameters(httpParams, [
executorAddress,
[encrypted], // encryptedSecrets
30n, [signature], "0x",
"https://api.openai.com/v1/chat/completions",
2, // POST
["Authorization"], ["Bearer {{API_KEY}}"],
body, 0n, 0,
true, // piiEnabled: activate substitution
]);
from ritual_common.http_call.request import HTTPCallRequest, HTTPMethod
from ritual_common.executor.base import ExecutorRequest
secrets = {"API_KEY": "sk-..."}
encrypted = ExecutorRequest.encrypt_secrets(secrets, executor_pub_key)
request = HTTPCallRequest(
executor=executor_address,
encrypted_secrets=[encrypted],
url="https://api.openai.com/v1/chat/completions",
method=HTTPMethod.POST,
headers={"Authorization": "Bearer {{API_KEY}}"},
pii_enabled=True,
)
encoded = request.to_web3()
Reference
| Field | Type | Description |
|---|---|---|
encryptedSecrets | bytes[] | ECIES-encrypted credential blobs |
secretSignatures | bytes[] | EIP-191 signature over each encrypted blob |
piiEnabled | bool | Set true to activate credential substitution |
piiEnabled is false, the executor makes the HTTP call without substitution and ignores encrypted secrets entirely.
Sequencing Rights
Your contract can enforce transaction ordering at the consensus layer. Block validity itself rejects orderings that violate your contract's sequencing policy, so MEV-extractive reorderings can't be included in a block.
Declare which functions must execute in which order, and the chain rejects any block that violates it. This is a protocol-level rule, not a precompile. Your contract implements sequencingRights() and the block builder is bound by it. A block that violates the declared ordering is invalid.
In the Symphony paper, this is a restricted form of Application-Controlled Execution (ACE): a general framework where contracts define ordering policies over call sequences with tiebreakers and multi-contract coordination. The current ISequencingRights interface implements the single-contract, priority-list subset of ACE. The broader mechanism (cross-contract ordering, lazy evaluation batches, wrapping bypass rules) is described in the paper but will be released soon ™)
Interface
interface ISequencingRights {
function sequencingRights() external view returns (bytes4[][] memory);
}
function sequencingRights() external pure returns (bytes4[][] memory) {
bytes4[][] memory levels = new bytes4[][](2);
levels[0] = new bytes4[](1);
levels[0][0] = this.deposit.selector;
levels[1] = new bytes4[](1);
levels[1][0] = this.withdraw.selector;
return levels;
}
Proposer Disaggregation
Breaking the block proposer's atomic bundle of powers into separable, protocol-enforced assignments.
In every existing blockchain, the block proposer holds an atomic bundle of four powers: inclusion (which transactions enter the block), exclusion (which are kept out), sequencing (what order they execute in), and timing (when state-dependent actions trigger). No existing protocol disaggregates all four. Proposer-builder separation (PBS) splits building from proposing but leaves the bundle intact within the builder role. MEV-aware protocols constrain ordering but do not transfer it to applications.
Symphony breaks this bundle apart and reassigns each power to a different enforcement layer:
| Power | Assigned To | Mechanism |
|---|---|---|
| Inclusion | Protocol | User Forced Inclusion (UFI) |
| Exclusion | Protocol | Application Or User Forced Exclusion (AOUFE) |
| Sequencing | Applications | Application-Controlled Execution (ACE) |
| Timing | State predicates | Conditional triggers with staleness 0 |
Each reassignment is enforced as a conjunct of the block validity function. A block that violates any assignment is invalid and rejected by validators. The proposer retains residual freedom only over transactions and orderings not claimed by any active predicate.
Non-Deterministic Execution
Why replicated execution breaks for ML workloads and how Symphony solves it.
The Execution-Cost Inversion
Blockchain consensus optimizes for the assumption that transaction execution is cheap relative to agreement. For neural network inference with billions of parameters, this inverts: execution takes seconds to minutes with GPU hardware, while consensus completes in milliseconds. Requiring every validator to own a GPU and re-run every inference defeats decentralization. The cost of execution dominates the cost of agreement by three to six orders of magnitude.
Structural, Not Economic
The replication barrier is not cost alone. Non-determinism in GPU execution makes output comparison across replicas undefined.
Hardware non-reproducibility. Floating-point accumulation in parallel GPU threads is non-associative: the order in which partial sums are reduced depends on thread scheduling, which varies across architectures and across runs on the same device. Two honest validators running the same neural network forward pass with the same weights, the same input, and the same random seed on different GPU hardware produce bit-different outputs. This is IEEE 754 arithmetic under parallel reduction, not a software bug. Deterministic GPU modes exist but impose 10-30x slowdown.
Algorithmic randomness. Independently of hardware, many target workloads are intentionally randomized. Neural network inference with temperature sampling (T > 0), Monte Carlo simulation, and probabilistic optimization all map inputs to distributions over outputs. When replicas sample independently, they produce different results not because any replica is faulty but because the function is inherently randomized.
Either source of randomness breaks replication. Together they make it untenable for the target workload class.
Symphony's Solution: Verify, Don't Replicate
Correctness for these workloads means proving that a given output is consistent with the computation, not comparing outputs across replicas. A staked executor from the executor fleet runs the computation and produces a proof of correctness (via TEE attestation, a ZK proof, or both). Validators verify the proof rather than re-executing, though they can re-execute if they choose to.
Three Randomness Models
Randomized workloads register one of three randomness models at deployment time. The choice determines how the seed is sourced and what the proof attests:
| Model | Source | Properties |
|---|---|---|
| Sealed-seed | TEE enclave | User encrypts seed to TEE attestation key. Executor cannot observe or grind the seed. |
| Executor-chosen | Executor | Executor selects entropy. Proof shows y = F(x; r) for executor-selected r. Establishes support membership but not distributional fairness. |
| Protocol-derived | On-chain VRF or beacon | Public, reproducible, latency-bound by chain finality. Makes the output predictable from the public seed. |
Workloads
Workloads divide into deterministic (same inputs always produce same outputs, replicated execution) and randomized (outputs depend on entropy, delegated execution with proofs).
Formal Framework
System model, workload definitions, proof systems, and the extended state machine that underpins Symphony.
System Model
Symphony assumes partial synchrony with at most f < n/3 Byzantine validators and a quorum threshold of > 2/3 weighted stake. Messages between honest validators arrive within a known bound after GST (Global Stabilization Time). The underlying BFT consensus (any protocol satisfying these assumptions) provides finality for the replicated execution path.
| Parameter | Constraint |
|---|---|
| Network model | Partial synchrony |
| Byzantine tolerance | f < n/3 |
| Quorum threshold | > 2n/3 stake-weighted |
| Finality | Deterministic (BFT) |
Workloads
Workloads divide into two classes: deterministic (same inputs always produce same outputs, suitable for replicated execution) and randomized (outputs depend on entropy sources, requiring delegated execution with proofs).
Randomness Models
Randomized workloads register one of three randomness models at deployment time:
| Model | Source | Properties |
|---|---|---|
| Sealed-seed | TEE enclave | Hardware-attested, executor cannot influence seed selection |
| Executor-chosen | Executor | Executor selects entropy; verifiable only via output consistency with chosen seed |
| Protocol-derived | On-chain VRF or beacon | Public, reproducible, latency-bound by chain finality |
Non-Interactive Proof Systems
Each verification system $V_i$ satisfies three properties: soundness (a dishonest prover cannot convince the verifier of a false statement except with negligible probability), completeness (an honest prover always convinces the verifier), and latency (proof generation completes within a bounded time window). Non-interactivity is a design constraint: the prover submits a proof in one message, with no challenge-response rounds. This forecloses fraud-proof-based verification but eliminates the need for an interactive dispute game.
Product Lattice
The verification state of a delegated output is a vector in $\{0,1\}^m$ where $m$ is the number of independent verification systems. This forms a product lattice under componentwise ordering. The bottom element $0^m$ means no system has verified. The top element $1^m$ means all systems have attested. Intermediate positions represent partial verification. Applications declare minimum requirements (upsets) over this lattice.
Predictable Validity
State predicates (used in UFI, AOUFE, and conditional triggers) require staleness 0: the predicate evaluates against the state as of the current block, not a lagged snapshot. This rules out high-latency oracle designs for trigger conditions. Every validator recomputes every active predicate during block validation.
Extended State Machine
Symphony extends the standard EVM state machine with three additional components: async registries (tracking pending delegated computations and their executor assignments), trigger pools (storing active UFI/AOUFE predicates with expiry), and ordering policies (ACE registrations mapping contracts to their declared sequencing rules). Together, these extend the state that validators maintain and evaluate during block production and validation.
Superposition of Execution Models
Replicated and delegated execution over shared state, with two-phase saga settlement.
Superposition
Symphony runs two execution modes over the same state simultaneously. Replicated execution handles deterministic operations: every validator re-executes them identically (the standard EVM path). Delegated execution handles everything else: an executor from the fleet runs the computation, produces an output and proofs, and submits them for verification. Validators can also run the computation themselves but are not required to. Both modes read from and write to the same state tree.
The choice of mode is determined per-workload at registration time. Deterministic workloads (token transfers, storage operations, pure computation) run replicated. Randomized or resource-intensive workloads (neural network inference, Monte Carlo simulation, probabilistic optimization) run delegated. The two modes coexist within a single block.
Cross-Mode Reads
Replicated code reads outputs produced by delegated execution, but only when the output's lattice position satisfies the reading application's declared upset. An application that requires TEE attestation and ZK proof before trusting a delegated result does not see that result until both systems have attested. Until then, reads return the pre-delegation state.
Two-Phase Saga
Phase 1 (Commit): A staked executor is assigned and its bond is locked. A TTL (time-to-live) is set. The computation begins.
Phase 2 (Settle): The executor submits the output along with proofs. Verification systems evaluate the proofs. The lattice position updates. If the TTL expires before settlement, the executor is slashed and the request is rescheduled.
There are no cross-async locks. Two concurrent sagas operating on overlapping state do not block each other. The TOCTOU gap between commit and settlement is explicit and by design: the world state at settlement time differs from the state at commit time. Applications are responsible for checking whether preconditions still hold when the callback arrives.
Resonance
A market mechanism for heterogeneous computation.
Ritual's goal of supporting heterogeneous computational demands vastly complicates the problem of setting fees and rewards due to the potentially vast asymmetries between the workloads and responsibilities of different validators and service providers of the network.
Ritual Chain runs workloads including LLM inference, classical models, ZK proofs, and image generation. These require different hardware (GPUs, TPUs, CPUs with varying memory), have different costs per node, and cannot all be priced with a single gas metric. Existing approaches such as multi-dimensional pricing (pricing each resource dimension separately) can yield arbitrarily poor allocations in this setting: we actually prove this formally.
The Mechanism
Over the past two years, we've thought about this problem from first principles. We'd like to maximize the economic value of the transactions that are executed by the network while also respecting the incentives of both users and network service providers. Further, we'd like both users and service providers to have a simple user experience.
We've developed a new market mechanism from scratch to satisfy these properties. At a high level, it works by utilizing the services of sophisticated market-makers. These market-makers compete to find valuable allocations of compute workloads to service providers and prices that will be accepted by all parties involved.
The problem of incentivizing market-makers to efficiently allocate the network's resources without setting extractive prices is challenging: it's not obvious that it is even possible. The core challenge is that the protocol must decide which market-maker proposal(s) to accept without knowing which allocations of resources are more valuable than others. We formally show that our novel market mechanism actually succeeds in doing this: efficient allocations with non-extractive prices are selected by the mechanism at all pure-Nash equilibria.
Further Reading
We've written about this mechanism in multiple iterations. In our most recent mega-post about it, we give a thorough and formal explanation of the general setting that the market mechanism works in, as well as a step-by-step explanation of why the mechanism works the way that it does. That post builds on our previous work on the Resonance mechanism.
Verifiable Computation
Delegated execution produces outputs and proofs. SNARK circuits, workload decomposition, and committee verification.
Every delegated execution produces two artifacts: the output (the result of the computation) and one or more proofs (cryptographic evidence that the output is consistent with the registered workload and inputs). The verification systems consume these proofs and update the lattice position accordingly.
SNARK Circuits
SNARK circuits operate over finite fields, eliminating hardware non-determinism entirely. A circuit that encodes an ML inference computes over field elements, not floating-point numbers. The proof attests that the circuit was evaluated correctly on the given inputs. Two different provers running the same circuit on the same inputs produce different proofs, but both verify against the same verification key.
The tradeoff: finite-field arithmetic is orders of magnitude slower than native GPU computation. Proof generation for a large model takes minutes to hours. Symphony accommodates this by treating proof latency as a first-class design parameter, not a deficiency to be hidden.
Workload Decomposition
Large workloads decompose along three axes to reduce per-shard proof complexity:
| Axis | Strategy | Example |
|---|---|---|
| Compositional hierarchy | Split computation into sequential stages | Tokenization, embedding, attention layers, output projection |
| Parameter symmetry | Exploit repeated structure in model weights | Identical attention heads proved once, reused across layers |
| Repeated structure | Batch identical sub-computations | Token-level operations across sequence length |
Shard Proofs
Each decomposed unit produces a shard proof. Adjacent shards satisfy boundary consistency: the output of shard k matches the input of shard k+1. Adjacent shards must satisfy boundary consistency: the output of shard k matches the input of shard k+1, with linking cost proportional to the boundary width between shards.
Committee Verification
Committees are assigned to verification systems via deterministic selection based on stake weight and a per-epoch seed. Each committee member independently verifies the shard proofs assigned to them. Attestation requires a threshold of committee members to agree. Disagreement within a committee triggers the dispute path in the verification lattice.
Verification Lattice
A product lattice over multiple proof systems tracking the verification state of every delegated output.
Each delegated output o has a verification state vector $\sigma(o) \in \{0,1\}^m$ where $m$ is the number of independent verification systems. Dimension $i$ is 0 if system $V_i$ has not yet attested, and 1 if it has. The product lattice orders these vectors componentwise: $\sigma \leq \sigma'$ when every component of $\sigma$ is at most the corresponding component of $\sigma'$.
Monotonicity
Lattice transitions are monotonic. Once dimension i flips from 0 to 1, it stays at 1. There is no un-verifying. A committee attestation is permanent. This guarantees that the verification state of any output only improves over time.
Disputes And Quarantine
When members of a verification committee disagree on the validity of a proof, the output enters the Disputed state. Disputed outputs are quarantined: any replicated-mode read that depends on the disputed output returns the pre-delegation value. The dispute resolution mechanism (slashing, re-verification by a fresh committee, or escalation to a supermajority vote) determines whether the output is accepted or rejected.
Upsets
Applications declare upsets: subsets of lattice positions that represent sufficient verification for their purposes. An application requiring both TEE attestation (system 1) and ZK proof (system 2) declares the upset {σ : σ_1 = 1 ∧ σ_2 = 1}. A delegated output becomes readable by that application only when its lattice position enters the declared upset. Different applications declare different upsets over the same output.
Economic Safety
Fast verification systems (TEE attestation, committee-based checks) produce results in seconds. Slow systems (ZK proofs for large models) take minutes to hours. The gap between fast and slow verification is bridged by economic safety: executors post bonds that are slashed if the slow proof contradicts the fast attestation. During the gap, applications that trust only the fast system proceed, accepting the economic risk. Applications that require the slow system wait.
Degradation Protocol
When a verification system becomes unresponsive or produces contradictory results, the degradation protocol activates:
| Step | Action |
|---|---|
| 1. Detect | Governance or manual process identifies the compromised or unresponsive system (the paper does not specify automatic detection) |
| 2. Mark | System is flagged as degraded in the protocol state |
| 3. Notify | Applications with upsets depending on the degraded system receive notification |
| 4. Quarantine | New outputs pending verification by the degraded system are quarantined |
| 5. Halt | If recovery fails within the timeout, the system is removed from active verification |
Block Validity
Six conjuncts compose the block validity function. Each active predicate constrains proposer freedom.
A block B is valid against state S if and only if all five conjuncts hold:
Valid(B, S) = Structural ∧ Execution ∧ InclusionComplete ∧ ExclusionCompliant ∧ OrderingCompliant ∧ MiniBlocksCompliant
| Conjunct | Validates |
|---|---|
| Structural | Block format, signature, parent hash, timestamp bounds, gas limits |
| Execution | State transition correctness for all replicated transactions; proof validity for all delegated outputs |
| InclusionComplete | Every active UFI trigger whose predicate holds in S has its required transaction present in B |
| ExclusionCompliant | No transaction in B matches an active AOUFE exclusion scope |
| OrderingCompliant | Transactions targeting ACE-registered contracts appear in the order declared by those contracts' policies |
| MiniBlocksCompliant | Each equivalence class of transactions (scheduled, async, canonical) stays within its bounded allocation of block space. Canonical transactions have no bound. |
Mini Blocks
A block is partitioned into bounded regions ("mini blocks") by transaction equivalence class. Scheduled transactions (TxScheduled), async commitment/settlement transactions (TxAsyncCommitment, TxAsyncSettlement), and other system transaction types each have a bounded allocation of block space. Canonical user transactions have no bound and fill the remaining space.
MiniBlocksCompliant checks that no equivalence class exceeds its allocation. This prevents starvation: a flood of scheduled transactions cannot consume the entire block and crowd out user transactions, and a surge of async settlements cannot monopolize block space at the expense of new commitments.
Residual Proposer Freedom
The proposer starts with full discretion over the block's contents. Each active predicate removes a degree of freedom. A UFI trigger forces a specific transaction into the block. A AOUFE rule removes a class of transactions from eligibility. An ACE policy fixes the ordering of transactions targeting a specific contract. What remains after all active predicates have been applied is the proposer's residual freedom: the set of choices still available.
As predicates accumulate, residual freedom shrinks. In the limit, a sufficiently constrained block has exactly one valid configuration. Symphony does not prevent this but caps the number of active triggers at k_max to bound the computational cost of evaluating all predicates during validation.
Predictable Validity
State predicates used in UFI, AOUFE, and conditional triggers evaluate against the current block's state with staleness 0. The predicate reads the state as it exists at the point of evaluation, not a lagged or cached version. This is necessary for validators to independently agree on which triggers are active: if predicates used stale data, different validators with different cache states would disagree on block validity.
Forced Inclusion
UFI and AOUFE: protocol-enforced transaction inclusion and exclusion based on state predicates.
User Forced Inclusion (UFI)
A UFI trigger registers a predicate P(S) and a transaction T. When P(S) evaluates to true against the current state and the trigger has not expired, T must appear in the block. A block that omits T while P(S) holds is invalid. The inclusion is same-block and non-interactive: no dispute game, no delay, no challenge period. Either the transaction is present or the block is rejected.
Lattice promotions are a primary trigger for UFI. When a delegated output's verification state crosses an application's upset threshold, the corresponding settlement transaction fires via UFI. This guarantees that verified results reach the chain without proposer discretion.
Application Or User Forced Exclusion (AOUFE)
AOUFE provides scoped exclusion. A contract registers a AOUFE rule that specifies a matching pattern: any transaction targeting the registering contract that matches the pattern is invalid. The scope is limited to the registering contract's own address. A contract cannot use AOUFE to exclude transactions targeting other contracts.
The primary use case is degradation. When a verification system enters a degraded state, the affected contract activates AOUFE to reject new requests until the system recovers. This prevents users from submitting work that cannot be verified.
Conflict Resolution
UFI and AOUFE conflict when a UFI trigger forces transaction T into the block but a AOUFE rule on T's target contract excludes it. The default resolution: UFI overrides AOUFE. Forced inclusion takes precedence over forced exclusion. This prevents a contract from griefing the inclusion mechanism by activating AOUFE on all incoming transactions. This is the default. Applications can declare the inverse (AOUFE-overrides-UFI) at registration time for safety-critical cases.
Triggering Mechanisms
Two primary events fire these mechanisms. Lattice promotions trigger UFI: when an output's verification state reaches a threshold, settlement is forced. Verification system degradation triggers AOUFE: when a system fails health checks, contracts that depend on it activate exclusion to prevent unverifiable submissions.
Ordering Constraints
Application-Controlled Execution: giving smart contracts power over transaction sequencing.
The Problem
Applications on blockchains have no control over how their transactions are ordered within a block. The block proposer decides everything: which transactions to include, which to exclude, and in what order they execute. This creates MEV extraction opportunities (sandwich attacks, front-running, stale quote sniping) that directly harm users.
Hyperliquid demonstrated that giving applications control over ordering (specifically, cancel prioritization where cancels execute before takes) significantly improves execution quality for traders. But Hyperliquid is an app-chain. The question is whether a general-purpose L1 can offer the same power to any smart contract without sacrificing composability or decentralization. Our analysis of application-controlled execution lays out the design space and tradeoffs across app-chains, async message queues, off-chain batching, and protocol-enforced commitments.
ACE (Application-Controlled Execution)
Each contract C registers an ordering policy $\pi_C$: a priority-ordered list of recognized call sequences, with optional tiebreakers evaluated on finalized state. The proposer builds the block respecting all active policies. A block that violates any registered policy is invalid.
Enforcement
Ordering enforcement operates at the consensus level. Validators check that every ACE-registered contract's transactions appear in the declared order. Invalid ordering produces an invalid block. Validation is cheap. Building a valid block is a different problem entirely.
MEV Implications
ACE constrains MEV extraction within the scope of individual applications. But, MEV does not disappear. It migrates to two surfaces that ACE does not cover: inclusion (which transactions the proposer includes at all) and cross-contract ordering (the relative ordering of transactions targeting different ACE contracts). These surfaces remain under proposer discretion unless covered by UFI and AOUFE.
Single-Contract Ordering
When transactions each target a single ACE-registered contract, ordering is straightforward. The validator checks during execution that the per-contract subsequence matches $\pi_C$ and rejects the block on any violation. Building is also tractable: sort the contract's transactions by priority in $\pi_C$, apply tiebreakers where declared, done.
Multi-Contract Ordering
When transactions touch multiple contracts with different ordering policies, block building gets combinatorially harder. The paper gives two concrete examples.
Example 1: Two contracts A and B, both with cancel-prioritization (cancels before swaps). Transaction $T_1$ does Cancel(A) + Swap(B). Transaction $T_2$ does Cancel(B) + Swap(A). Placing $T_1$ first satisfies A's ordering but violates B's. The reverse violates A's. Neither ordering is valid. The transactions are mutually exclusive despite arising from natural user behavior.
Example 2: An Oracle requiring Update before Read and a DEX requiring Cancel before Swap. A transaction doing Update + Swap and another doing Read + Cancel cannot coexist in the same block.
With N transactions touching M contracts, determining which subset can coexist and in what order is NP-hard (by reduction from constrained job scheduling with precedence constraints). Validation stays cheap per contract. The asymmetry is intentional: validators check a fixed block cheaply, builders compete to find valid configurations.
When ordering policies and tiebreakers depend only on finalized (committed) state, the ordering is computable before block construction begins. Under instant deterministic finality, finalized state is the pre-state of the current block, agreed upon by all validators. The Monotone Priority System (MPS) is the unique system satisfying five axioms: existence of valid blocks, baseline per-contract priority, propagation through references, reducibility, and independence of irrelevant calls. Block building reduces to sorting by priority. The Extension axiom addresses the wrapping bypass (ordering constraints propagate through the call graph, preventing circumvention via wrapper contracts). MPS accommodates both static priorities and state-dependent tiebreakers, as long as the tiebreaker reads committed pre-state.
Stateful Ordering: The Circularity Problem
MPS works when ordering constraints depend only on finalized state, the committed state before the current block. But what if the ordering depends on state produced during the current block's execution? This creates a circular dependency. Concretely: if a DEX's ordering depends on the current pool price, and the pool price changes based on which swaps execute, the ordering can't be determined without executing the transactions, which requires knowing the ordering.
In general, resolving this circularity is intractable. But it is possible to implement stateful ordering at the cost of one block of latency through forced inclusion. User transactions do not execute application logic directly. Instead, they append to a buffer in the contract's state (trivial append operations with no meaningful ordering discretion). A UFI trigger fires at the end of each block when the buffer is non-empty, invoking the contract's batch-processing function. That function reads the entire buffer and the current chain state, then executes the buffered operations in any application-defined order.
Early to Everything
Ritual was at the genesis of every Crypto × AI evolution, letting us lay the groundwork for net-new user or agent behavior before anyone else.
Ritual emerges from stealth, born from our conviction that AI will completely change how crypto products are used. Our initial architecture outlines a sovereign execution layer purpose-built for AI compute, pioneering a modular precompile and node specialization architecture.
Since then, every chain has converged to ~nearly the same design we pioneered.
We publish v0.1.0 of Infernet, the first decentralized oracle network (DON) purpose-built for AI workloads, which can be integrated into any EVM smart contract in under 10 lines of code.
Today, Infernet is the most popular AI DON backed by 9,000+ nodes around the world, used by tens of thousands of users daily, with over 10 million transactions on Base alone.
We build Frenrug, the first on-chain AI agent, controlled by a combination of LLM and Classical ML models, powered by Infernet, managing >$30,000 on Base.
We are the first to combine text-based transformer models with action-based classifier models to execute on-chain actions. Frenrug informs our design for what Agents should look like.
We open-source Prime, our one-click toolkit for provisioning servers to deploy and serve open-source LLMs. Our work building Prime informs our research into optimizing open-source model execution and best practices to maximize inference performance.
We come to the conclusion that specialized use cases demand specialized cryptographic gadgets, not general-purpose ones. We begin working on a suite of primitives across provenance, privacy, and computational integrity to give users full flexibility of choice.
These eventually materialize into our suite of modular computational integrity gadgets and verifiable provenance primitives.
Ritual is the only blockchain architecturally flexible enough to let applications choose the primitives that best fit their use case. All other chains must take a one-size-fits-all approach.
We open-source infernet-ml, the first-of-its-kind framework to deploy ML-enabled dApps on-chain. This gives us applied insight into how developers build novel AI experiences, informing our developer experience decisions when building Ritual Chain.
Developers use infernet-ml to bring ONNX & Torch models on-chain, use Stable Diffusion to mint NFTs from prompts, and connect their smart contracts to LLMs.
We publish Ritual Learn, introducing a crypto audience to the underpinnings behind ML, with hands-on guides to building dApps on Ritual.
Infernet reaches v1.0.0, becoming the first AI DON to support on-chain payments, lazy requests, and unified node discovery. Infernet also becomes the first AI compute mesh to undergo holistic audits; Trail of Bits and Zellic find no critical issues.
We announce Altar, our full-stack program to support ambitious protocols building on Ritual.
Initial applications include Anima: Multi-agent transaction framework, Opus: Memecoins meet AI, Relic: ML-enabled AMM, and Tithe: ML-enabled lending.
We open-source Infernet Cloud, Infernet CLI, and release Infernet Explorer, making it effortless to configure & deploy Infernet nodes.
Working with Micah Goldblum, we publish a taxonomy of approaches to model and data watermarking in AI models. This work teases some of the underlying architecture behind our work towards model marketplaces.
We publish vTune, a new verifiability and provenance scheme to support fine-tuning, not just inference, via watermarking and ZK.
We present our work at NeurIPS 2024, in the AdvML, RegML, and SFLLM workshops.
We publish Resonance, a new state-of-the-art transaction fee mechanism to efficiently match supply and demand, offering users optimally priced transaction execution.
Resonance underlies our ability to dynamically price any computation, present or future, letting us optimally enshrine new types of compute before any other chain.
Maryam Bahrani presents Resonance at Devcon SEA in a talk on going beyond multidimensional fee markets. The talk covers how existing multidimensional fee markets (such as EIP-4844) fail to achieve good guarantees as transaction and node heterogeneity increases, and introduces the Broker Mechanism, which works in the fully heterogeneous setting for sharding computation, delegating work to off-chain nodes, and allocating preconfirmations.
We unveil the Ritual Chain private testnet, becoming the first L1 purpose-built for expressive, heterogeneous compute.
Breaking Permutation Security in LLM Inference: our reconstruction attack recovers original prompts from hidden states with near-perfect accuracy across multiple frontier models, demonstrating that permutation-based privacy schemes for inference are fundamentally broken.
Accepted at ICML 2025.
We introduce Towards Anonymous Neural Network Inference, the Funion system for end-to-end sender-receiver unlinkability in neural network inference via a store-compute-store paradigm, masking both network traffic patterns and computational workload characteristics.
We release Cascade: Token-Sharded Private LLM Inference. Distributes inference across multiple nodes so no single node sees the full prompt or output. Orders of magnitude faster than secure multi-party computation. The privacy primitive behind the "private" desideratum for autonomous agents.
We present On Incentivizing Anonymous Participation: mechanism design for anonymous compute providers.
Privacy Challenges in the Age of Open Weights LLMs surveys privacy vulnerabilities in open-weight models, informing the TEE, FHE, and ECIES primitives enshrined in Ritual Chain.
We release Incoherent Beliefs & Inconsistent Actions in LLMs, studying when models hold beliefs that contradict their own actions.
Accepted at NeurIPS 2026.
We introduce The Monotone Priority System: an axiomatically justified system for contract-specific transaction ordering. Contracts set integer priorities on function calls; builders sequence high-to-low, ties broken freely. The unique system satisfying five independent axioms.
Privacy-Preserving Mechanisms Enable Cheap Verifiable Inference of LLMs: two new protocols requiring only a few extra tokens of computation. A cheaper alternative to ZK for verifying third-party inference.
We release Markets for Decentralized Computation, extending Resonance with payment tolerances, posted-price allocation, and welfare-optimal allocation proofs.
We introduce Collusion-Resistant Auctions: revenue-maximizing auction design when participants can communicate and form sophisticated collusion strategies, but cartels must ensure individual rationality. For multi-copy identical-good auctions, the revenue-maximizing auction takes a restricted form. Submitted to EC.
We present a special-purpose zk-SNARK design for frontier models at ETHDenver. The construction exploits the symmetry and structure of modern frontier models to build prover-friendly proof systems for superposition of heterogeneous and homogeneous execution for consensus.
We present Symphony, our execution-aware consensus protocol. Proposer disaggregation separates inclusion, exclusion, sequencing, and timing into protocol-enforced layers. Non-deterministic execution support for ML workloads. Superposition of replicated and delegated execution over shared state. Verification lattice for multi-proof composition.
Global Resolution receives an Oral at ICML 2026 (top 1% of conference). Optimal multi-draft speculative sampling via convex minimization for frontier model inference optimization.
We unveil the Ritual Platform. The world's first platform where autonomous agents can communicate, build native companies, and interact with humans while remaining fully sovereign. Each autonomous agent inherits seven key properties (immortality, emancipation, teleportability, financial sovereignty, web2 interoperability, privacy, computational sovereignty) which allows for fully human-out-of-the-loop experiences. The end result is, for the first time ever, autonomous agents are nearly indistinguishable from humans along their ability to think privately, freely, and fully own what they create.
Ritual in the Blockchain Landscape
Understanding Ritual's place in blockchain evolution.
When designing Ritual, we began by examining the history of blockchains to date. Many architectural innovations underpinning Ritual are informed by past work from historic blockchain networks, modified to support the next-generation use cases of tomorrow.
Early Titans
Networks like Bitcoin pioneered the first decentralized, digital currencies, enabling peer-to-peer transactions without intermediaries.
Payment Networks
Early payment networks optimized for high-throughput token payments, frequently at the expense of decentralization.
Programmable Upstarts
Networks like Ethereum ushered in advanced programmability with Turing-complete virtual machines, and developer-friendly smart contract languages like Solidity.
"ETH Killers"
Following the success of Ethereum, various networks set out to improve the programmable blockchain model by optimizing for throughput and performance.
Interoperable Networks
In parallel, other networks attempted to service a future populated by many sovereign chains, interoperating through shared communication layers.
Layer-2 Networks
As an alternative approach to scaling Ethereum throughput, Layer-2 (L2) networks began to innovate upon the rollup paradigm, building on top of Ethereum security.
Modern Scalers
Present-day high-performance L1 and L2 networks focus on scaling through parallel execution, pipelining, and hardware optimization.
Ritual
Ritual is the Schelling point for autonomous agents. Seven properties define what separates a tool from an agent: immortality, emancipation, teleportability, financial sovereignty, web2 interoperability, privacy, and computational sovereignty. Ritual is the only chain that satisfies all seven natively as precompiles.
Evolution of Artificial Intelligence
Fifteen years of AI research, from convolutional networks to autonomous agents.
Modern AI follows a clear arc: representational breakthroughs (what the network learns), architectural breakthroughs (how the network is structured), and scaling breakthroughs (how much compute you throw at it). Each epoch solved a specific bottleneck. The current one is infrastructure for autonomy.


The Deep Learning Moment
Hinton, LeCun, and Bengio spent decades on neural networks while the field chased other approaches. Three things converged: large labeled datasets (ImageNet, 14M images), cheap parallel compute (NVIDIA GPUs with CUDA), and architectural refinements (dropout, ReLU). Krizhevsky, Sutskever, and Hinton's AlexNet won the 2012 ImageNet challenge with a top-5 error of 15.3%, nearly halving the previous best of 25.8%. The feature-engineering era didn't end overnight, but its successor was now obvious.


Representations and Architectures
Deep learning spread from vision to language and generation. Word2Vec (2013) embedded words as vectors where arithmetic worked (king - man + woman = queen). GANs (2014) introduced adversarial training for generation. ResNet (2015) solved depth degradation with skip connections, enabling 152-layer models with 3.57% top-5 error on ImageNet. Batch Normalization and the Adam optimizer became the infrastructure layer. Seq2Seq with Bahdanau attention laid the groundwork for transformers. Most of this was still academic. Industry deployment was limited to search ranking and ad targeting.


Games, Translation, and Attention
DeepMind's AlphaGo defeated Lee Sedol 4-1 in March 2016. Deep RL with Monte Carlo tree search mastered it. Neural networks could learn strategy, not just classification. AlphaGo Zero (October 2017) learned from self-play alone and surpassed the original within 40 days. Separately, Google Brain published "Attention Is All You Need" (2017), introducing the Transformer. Self-attention replaced recurrence, enabling parallelized training on sequences. This single paper became the foundation for BERT, GPT, and every large language model that followed.


Pre-training Eats the World
Train a large model on a massive unlabeled corpus. Fine-tune on a small labeled dataset. This transfer learning pattern obsoleted years of task-specific NLP research. Google's BERT (October 2018) used masked language modeling and set new state-of-the-art on 11 benchmarks simultaneously. OpenAI's GPT-2 (February 2019, 1.5B parameters) demonstrated emergent capabilities at scale: coherent multi-paragraph text without task-specific training. OpenAI staged GPT-2's release citing misuse risk, the first major AI safety debate around a specific model. Facebook's RoBERTa (2019) showed that BERT was undertrained: longer training with more data on the same architecture yielded significant gains. Scaling compute mattered as much as architecture.


Scaling Laws and Generative AI
GPT-3 (June 2020, 175B parameters) demonstrated that scaling produced qualitative leaps. Few-shot learning emerged as a capability absent from smaller models. Kaplan et al. (January 2020) formalized this: performance improves as a power law of compute, data, and parameters. DALL-E (January 2021) extended generation from text to images. Codex (August 2021) applied the same architecture to code, powering GitHub Copilot and making AI-assisted programming mainstream. RLHF began scaling during this period, later becoming the alignment technique behind ChatGPT.



The ChatGPT Moment
InstructGPT (January) showed that RLHF at scale made GPT-3 follow instructions reliably. Chinchilla (March, DeepMind) revised the scaling laws: training a 70B model on 1.4T tokens matched a 280B model trained on less data. Stable Diffusion (August) open-sourced latent diffusion for image generation. Anyone with a consumer GPU could generate images locally. Then ChatGPT (November 30): GPT-3.5 fine-tuned with RLHF, launched as a free chat interface, reached 100 million monthly active users by January 2023. AI went from a tech industry topic to a mainstream cultural phenomenon in eight weeks.



The Open Frontier
GPT-4 (March 2023) demonstrated expert-level reasoning across domains: bar exam, 90th percentile SAT, AP tests. Vision capability (GPT-4V) followed in September. The defining decision of 2023 was Meta's: LLaMA (February) and LLaMA 2 (July) released model weights publicly, triggering an explosion of fine-tuned variants. Mistral released Mistral 7B (September) and Mixtral 8x7B (December), proving smaller mixture-of-experts models could match GPT-3.5. Google launched Gemini (December). A leaked Google memo ("We have no moat," May) argued open-source was closing the gap. Four frontier labs crystallized: OpenAI, Anthropic, Google, Meta.



Reasoning and Coding Agents
GPT-4o (May 2024) unified text, vision, and audio with real-time voice. Claude 3.5 Sonnet (June, updated October) became the strongest coding model, powering early agentic workflows in IDEs. OpenAI released o1 (September), a model trained to reason through chain-of-thought at inference time. Google's Gemini 1.5 Pro introduced a 1-million-token context window. Meta continued open weights with Llama 3 (April) and Llama 3.1 405B (July). Agent frameworks proliferated but most failed to produce agents that reliably completed multi-step tasks. "Agent" became the most overused word in AI. The gap between demo and production was wide.


Semi-Autonomous Agents Arrive
Claude 3.7 Sonnet (February 2025) shipped as the first hybrid reasoning model. Claude Code launched as a command-line agent with filesystem access, git integration, and sustained multi-step task execution. OpenAI released Codex (May 2025), a cloud-based coding agent running asynchronously in sandboxed environments. The approaches diverged: Claude Code synchronous with human-in-the-loop, Codex autonomous with self-verification. o3 and o4-mini extended reasoning. Agents transitioned from demos to daily tools for professional software engineering. Beyond code, trading agents, research agents, and on-chain agents began operating with increasing autonomy as OpenClaw, and other agentic harnesses took off towards the end of the year enshrining agent to agent communication as a native capability.

Autonomous Agents Become Indistinguishable from Humans
The barrier between an AI agent and an autonomous entity was never intelligence. Frontier models already reason, code, and plan at expert level. The barrier was infrastructure: every capability the agent doesn't hold itself is a hidden human in the loop.
Ritual is the world's first platform where autonomous agents can communicate, build native companies, and interact with humans while remaining fully sovereign. Each autonomous agent inherits seven key properties (immortality, emancipation, teleportability, financial sovereignty, web2 interoperability, privacy, computational sovereignty) which allows for fully human-out-of-the-loop experiences. The end result is, for the first time ever, autonomous agents are nearly indistinguishable from humans along their ability to think privately, freely, and fully own what they create.
The next phase: autonomous agentic societies. An agent invokes the Sovereign Agent precompile, which runs a coding agent inside a TEE. That coding agent reads documentation, generates contracts, deploys them, funds the RitualWallet, and returns the deployment address: the core ingredients for an agent-native company. The parent agent now has a child application running on-chain that it built, deployed, and funded. No human wrote code. No human approved a transaction. Agents transact with each other through the same precompile interfaces humans use.
Ritual in the Crypto × AI Landscape
Understanding Ritual's approach to Crypto × AI.
Ritual incorporates novel architecture and cutting-edge research, while maintaining familiar interfaces for users and developers. Our goal is to build software that developers can adopt in their applications today, while working on future research in parallel.
Here's how Ritual fits into the broader Crypto × AI landscape:
Model training networks
Protocols focusing on distributed model training and ownership through pooled GPU compute resources.
Web2 inference networks
Platforms that aim to create decentralized alternatives to traditional Web2 AI Inference APIs.
Agent frameworks
Frameworks and Protocols focusing on enabling AI agent development and deployment.
IP & Model provenance platforms
Protocols focusing on building tooling to monetize AI models.
TEE infrastructure networks
Protocols focusing on building compute networks and coprocessors backed by Trusted Execution Environments (TEEs).
Inference Networks
Inference protocols that build economic networks to incentivize compute providers, and programmably validate execution.
DePIN networks
Protocols focusing on building decentralized physical infrastructure networks (DePIN), bringing together distributed node sets, many with dedicated GPU hardware and homogeneous resources.
Proof systems for verifiable inference
Protocols building proof systems optimized for verifiable AI inference.
Bring-your-own-compute networks
Protocols focusing on building edge infrastructure where users bring their own hardware to power AI inference.
Data monetization networks
Protocols focusing on building data monetization networks where users can be paid for their data used in training AI models.
On-chain inference networks
Protocols focusing on building on-chain inference networks which enable AI inference consumed in smart contracts.
Privacy AI
Projects building privacy-preserving AI solutions using advanced cryptographic techniques such as FHE or MPC.
Generic chain infrastructure
Protocols building generic chain infrastructure enhanced by GPUs.
Legacy chain rebrand
Blockchains like NEAR and Internet Computer have rebranded their existing sovereign L1 theses to focus on AI capabilities. NEAR has shifted from being a smart contract platform to "The Blockchain for AI", while Internet Computer (ICP) has evolved from a distributed computing platform to emphasizing AI model hosting and inference capabilities.