
Ritual Chain Documentation

The first blockchain where smart contracts can think, see, hear, and act: the Schelling point for autonomous agents.

Precompile Map
Explore Ritual's precompiles: what each one does, how it executes, and when to use it.
Chain
The core mechanics of Ritual: expressive execution, enshrined real-world compute, execution-aware consensus, novel fee mechanisms, and application-controlled execution.
Real World
HTTP, long-running tasks, scheduler. Connect contracts to external systems.
Enshrined AI
Classical models, LLM inference, FHE, ZK proofs, multimodal generation.
Autonomous Agents
Seven properties. Three precompiles. Agents that build, transact, and persist.
Authentication
Enshrined WebAuthn (P-256) and Ed25519 signature verification.
Privacy & Keys
DKMS key derivation, ECIES secrets, PII redaction, X402 payments.
For Agents
AI coding agents that build dApps on Ritual. The recursive case.

Who Cares?

What can you do on Ritual Chain that you cannot do on any other major L1 today?

Why This Matters

Ritual is not interesting because it has precompiles. It is interesting because those primitives let you build systems that no other major L1 can host natively today: sovereign agents, identity markets, private multimodal interfaces, agent-native companies, and market structure designed for machine participants.

01
Autonomous Agents
Autonomous Agents That Live Forever

Build autonomous agents that live forever, remain emancipated from any human controller, and accrue both financial and computational sovereignty.

02
Multi-Agent Evals
Multi-Agent Evals That Run Fully Onchain

Allow your favorite multi-agent evals, from Project Vend to LMArena and beyond, to exist fully as autonomous agents onchain where AgentWorld enables emergent multi-agent behavior.

03
Private AI
A Private Multimodal ChatGPT Onchain

Bootstrap a private, multimodal ChatGPT onchain that both humans and autonomous agents can use, without depending on a centralized interface to think, see, hear, and respond.

04
Identity
Financialization Of Identity

Enable humans to rent or sell identity to autonomous agents in a fully trust-minimized way. Financialized identity is a stepping stone for agents to become indistinguishable from humans.

05
Autonomous Companies
Agent-Native Companies

Allow autonomous agents to create full-fledged companies onchain in a way where they can accrue financial value independent of human intervention.

06
Market Structure
An Agent-First RWA Exchange

Architect a Hyperliquid-style exchange for RWAs with enshrined cancel priority, optional dark execution alongside lit order books, and direct access to real-world liquidity for autonomous agents.

Quick Start

Chain ID: 1979
Currency: RITUAL (18 decimals, testnet)
Block Time: ~350ms
TX Types: EIP-1559 + 0x10, 0x11, 0x12, 0x77
RPC (HTTP): rpc.ritualfoundation.org
RPC (WebSocket): rpc.ritualfoundation.org
Explorer: explorer.ritualfoundation.org
Faucet: faucet.ritualfoundation.org

Chain Architecture

How Ritual Chain routes computation through TEE executors, and the three layers between your dApp and the chain.

Ritual Chain runs on what we call TEE-EOVMT, an EVM with Off-chain Verifiable Machine Tasks. When your contract calls a precompile like HTTP or LLM, the actual work happens off-chain inside a TEE (Trusted Execution Environment). The executor can't fake results: each response is cryptographically tied to the exact request that triggered it.

There are three layers. Your dApp (frontend + contracts) talks to the precompile layer (16 precompiled contracts), which delegates to the chain layer (AsyncJobTracker, RitualWallet, Scheduler, and the rest of the orchestration infra).

System Architecture
[System architecture diagram]
Application layer: Next.js · React Hooks · viem · wagmi
Precompile layer (16 contracts): HTTP · LLM · Agent · ONNX · P256 · Multimodal · Scheduler · Ed25519 · DKMS · Secrets · X402 · Long-HTTP · ZK
Ritual Chain (ID 1979, TEE-EOVMT): AsyncJobTracker · AsyncDelivery · Scheduler · RitualWallet

Superposition: Replicated + Delegated

Ritual Chain runs two execution paths over the same state. Replicated execution (the standard EVM path) handles deterministic operations: token transfers, storage reads, contract calls. Every validator re-executes these. Delegated execution handles everything else: LLM inference, HTTP calls, agent orchestration, image generation. These run once inside a TEE, and the result is verified rather than replicated.

Both paths share state. A delegated LLM call can read a storage slot that was just written by a replicated transfer in the same block. This is what the Symphony paper calls superposition: two execution models coexisting over a single state machine, chosen per-transaction by the workload type.
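The shared-state point can be made concrete with a toy model. This is a hedged sketch, not the chain's actual internals: a delegated call runs once (off-chain, in a TEE) but still reads state written by a replicated operation earlier in the same block.

```python
# Toy model of superposition: two execution paths over one state machine.
# All names and values here are illustrative, not Ritual's real data model.
state = {"balance:alice": 100}

def replicated_transfer(frm, to, amt):
    # Deterministic EVM path: every validator re-executes this.
    state[f"balance:{frm}"] -= amt
    state[f"balance:{to}"] = state.get(f"balance:{to}", 0) + amt

def delegated_llm_call(prompt):
    # Runs once inside a TEE; the result is verified, not re-executed.
    # It can still read state written earlier in the same block.
    return f"{prompt}: alice holds {state['balance:alice']}"

replicated_transfer("alice", "bob", 40)          # replicated path
summary = delegated_llm_call("portfolio check")  # delegated path, same block
```

The routing is per-transaction: the workload type, not the contract, decides which path a given operation takes.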

Why Delegation, Not Replication

Replication breaks for two reasons. First, cost inversion: neural network inference with billions of parameters takes seconds to minutes and requires GPUs. Requiring every validator to own a GPU and re-run every inference defeats decentralization. Second, randomness: GPU floating-point arithmetic is non-associative across hardware (thread scheduling varies the reduction order), and LLM sampling with temperature > 0 is intentionally stochastic. Two honest validators running the same model on the same input produce different outputs. This is not a bug. It is IEEE 754 arithmetic under parallel reduction.
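The non-associativity claim is easy to demonstrate on any machine: summing the same floats in a different reduction order produces different bit patterns.

```python
# Two honest validators summing the same values in a different order
# get different results: IEEE 754 addition is not associative.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # one reduction order
right = a + (b + c)   # another reduction order (e.g. a different tree shape)

# Same inputs, same math, different bit patterns.
assert left != right
```

A GPU tree reduction picks its order based on thread scheduling, so two runs of the same kernel can disagree in exactly this way.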

Correctness for these workloads means proving a given output is consistent with the computation, not comparing outputs across replicas. TEE attestation handles this: the executor's enclave produces hardware-signed evidence of what code ran on what input, registered on-chain via TEEServiceRegistry. The block builder only accepts results from registered executors with valid attestations.

From your contract's perspective, calling 0x0801 (HTTP) or 0x0802 (LLM) looks like calling any other precompile. The delegation is invisible. Results come back through one of three paths depending on how long the computation takes.

Execution Models

How results get back to you. Three paths, each for a different speed of computation.

The question is: when does your contract (or your frontend) get the result? Synchronous precompiles return inline, in the same call frame. Short-running async precompiles write the result into the transaction receipt's spcCalls field within the same transaction. Two-phase (long-running) precompiles deliver the result via a callback transaction, sometimes seconds or minutes later.

Synchronous

Returns the value inline, in the same call frame. The caller treats it as a regular precompile, with no async lifecycle and no spcCalls field on the receipt.

Precompiles: ONNX (0x0800), Ed25519 (0x0009), SECP256R1 (0x0100), JQ, TxHash.

Short-Running Async (Single-Phase)

For HTTP requests, LLM calls, and DKMS key derivation (100ms–2s). The block builder detects the precompile call, farms it to a TEE executor, and replays your transaction with the signed result. Your contract receives the result via _executePrecompile() in the same transaction.

Constraint: One async precompile call per transaction (SPC or two-phase). You cannot combine HTTP + LLM, or any two async precompiles, in the same tx. Batch operations require separate transactions or the Scheduler.
Short-Running Async — Single-Phase
[Diagram: HTTP · LLM · DKMS, one SPC per transaction. YourContract.sol calls 0x0801; the block builder detects the async call and records the job in AsyncJobTracker.sol; a TEE executor runs the HTTP/LLM work in TDX; the result lands in receipt.spcCalls[0].output. The fee splits three ways: executor + commitment validator + inclusion validator. One user tx becomes 3 on-chain txs (commitment, original+SPC, settlement) across 2+ blocks.]

Long-Running Async (Two-Phase)

For anything that takes seconds to minutes — image generation, agent runs, long-polling HTTP, ZK proofs. Phase 1 commits the request and returns a task ID. Phase 2: the executor delivers the final result via AsyncDelivery, which calls back into your contract in a separate transaction.

Sender Lock: Each EOA can have only one pending async job at a time. Submitting a second request before the first settles will revert. Design your frontend to serialize async submissions per wallet.
TOCTOU risk across the async gap. Between commit and settle, other transactions can change the state your callback depends on. The precompile captured inputs at commit time, but the world may have moved by settlement. There are no cross-async locks. Your consumer contract is responsible for checking that preconditions still hold when the callback arrives.
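The precondition re-check described above can be sketched as a toy consumer. Everything here is illustrative (not a Ritual API): snapshot the state your callback depends on at commit time, then verify it still holds when the result arrives.

```python
# Hedged sketch of a TOCTOU guard for a two-phase consumer: capture the
# precondition at commit time, re-check it when the callback fires.
class Consumer:
    def __init__(self):
        self.price = 100
        self.pending = {}  # jobId -> price observed at commit time

    def commit(self, job_id):
        self.pending[job_id] = self.price

    def on_result(self, job_id, result):
        committed = self.pending.pop(job_id)
        if self.price != committed:
            # The world moved during the async gap: reject or re-quote.
            raise RuntimeError("precondition changed between commit and settle")
        return result

c = Consumer()
c.commit("job-1")
c.price = 120  # another transaction changes state during the async gap
# c.on_result("job-1", b"...") would now refuse to settle
```

In Solidity the same pattern is a storage snapshot written in the submitting function and a require() in the callback.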
Long-Running Async — Two-Phase
[Diagram: Long-HTTP · Agent · ZK · Image · Audio · Video. Phase 1: YourContract.sol calls 0x0805 and AsyncJobTracker.sol records the job. Phase 2: the executor runs the task in a TEE, onResult(jobId, data) is delivered via callback, and the sender lock is released. The fee splits three ways: executor + commitment validator + inclusion validator. One user tx becomes 4 on-chain txs across 3+ blocks.]

Async Lifecycle

Every async precompile call moves through a state machine. Here's what each state means and what to watch for.

AsyncJobTracker tracks the lifecycle of every async job. State transitions fire events: JobAdded, Phase1Settled, ResultDelivered, JobRemoved. Subscribe to these in your frontend to keep the UI in sync.

Async State Machine
Submitted → Committed → Processing → Ready, then Settled (SPC path) or Delivering → Callback (two-phase path). Failed and Expired are the error states.

State Descriptions

State | Description | Path
Submitted | Request sent to precompile, pending executor assignment | Both
Committed | Executor has accepted the job, TEE attestation verified | Both
Processing | Executor is computing the result inside the TEE | Both
Ready | Result computed, pending settlement (SPC) or delivery (two-phase) | Both
Settled | Result available in receipt.spcCalls | SPC
Delivering | AsyncDelivery is calling back into your contract | Two-phase
Callback | Your contract's callback has been executed with the result | Two-phase
Failed | Executor error. Request can be retried | Both
Expired | No executor picked up the job within the timeout | Both
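The states in the table above form a small state machine. A sketch of the legal transitions, inferred from the docs rather than an official spec, makes a useful guard for frontend UI state:

```python
# Async job state machine; transition sets inferred from the state table
# (not an official Ritual spec).
TRANSITIONS = {
    "Submitted":  {"Committed", "Expired"},
    "Committed":  {"Processing", "Failed"},
    "Processing": {"Ready", "Failed"},
    "Ready":      {"Settled", "Delivering"},  # SPC vs two-phase
    "Delivering": {"Callback", "Failed"},
}

def advance(state, nxt):
    if nxt not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {nxt}")
    return nxt

# Two-phase happy path
s = "Submitted"
for nxt in ["Committed", "Processing", "Ready", "Delivering", "Callback"]:
    s = advance(s, nxt)
```

Rejecting illegal transitions client-side catches out-of-order or duplicate event deliveries before they corrupt UI state.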
TypeScript / Watching Async Job Events
import { watchContractEvent } from "viem";

watchContractEvent(client, {
  address: "0xC069FFCa0389f44eCA2C626e55491b0ab045AEF5",
  abi: asyncJobTrackerAbi,
  eventName: "JobAdded",
  args: { sender: userAddress },
  onLogs(logs) {
    const { jobId, status } = logs[0].args;
    // Update UI state machine
  },
});

Precompile Map

What smart contracts can do on Ritual. Seven capabilities, sixteen precompiles.

Legend: sync · async · two-phase async

Think (reason, infer, decide): LLM Inference · Classical Models · FHE Inference
Create (generate images, audio, video): Image · Audio · Video
Act (execute tasks, call APIs, transact): Sovereign Agent · Persistent Agent · HTTP · Long-Running HTTP
Remember (persist state, schedule, derive keys): DKMS · Scheduler
Prove (verify signatures, generate proofs): Ed25519 · Passkeys P-256 · ZK Proofs
Keep Secrets (encrypt credentials, redact PII): Secrets / ECIES
Pay (monetize APIs, pay-per-call access): X402 Payments
No Infernet. Ritual Chain replaces the Infernet protocol entirely with native precompiles. If you have existing Infernet integrations, migrate to the equivalent precompile addresses above.

System Contracts

Eight contracts deployed to genesis that run the chain's plumbing.

Contract | Address | Role
RitualWallet | 0x532F0dF0896F353d8C3DD8cc134e8129DA2a3948 | Fee escrow: deposit, lock, balance management
AsyncJobTracker | 0xC069FFCa0389f44eCA2C626e55491b0ab045AEF5 | Tracks pending async jobs, enforces sender lock
TEEServiceRegistry | 0x9644e8562cE0Fe12b4deeC4163c064A8862Bf47F | Registers TEE executors and attestation proofs
Scheduler | 0x56e776BAE2DD60664b69Bd5F865F1180ffB7D58B | Deferred execution at future blocks
SecretsAccessControl | 0xf9BF1BC8A3e79B9EBeD0fa2Db70D0513fecE32FD | Delegated secret access control
AsyncDelivery | 0x5A16214fF555848411544b005f7Ac063742f39F6 | Delivers two-phase async results via callback
AgentHeartbeat | 0xEF505E801f1Db392B5289690E2ffc20e840A3aCa | Persistent agent liveness monitoring and revival
ModelPricingRegistry | 0x7A85F48b971ceBb75491b61abe279728F4c4384f | Model pricing and availability configuration

RitualWallet

Precompile calls cost fees. You prepay by depositing RITUAL into RitualWallet. Call deposit(lockDuration) to fund your own address, or depositFor(user, lockDuration) for someone else. Lock is monotonic: new deposits only extend, never shorten the lock.

Solidity / RitualWallet Interface
interface IRitualWallet {
    function deposit(uint256 lockDuration) external payable;
    function depositFor(address user, uint256 lockDuration) external payable;
    function withdraw(uint256 amount) external;
    function balanceOf(address account) external view returns (uint256);
    function lockUntil(address account) external view returns (uint256);
}

// Deposit 0.01 RITUAL with lock duration of 100 blocks
IRitualWallet(0x532F...3948).deposit{value: 0.01 ether}(100);

// Withdraw after lock expires
IRitualWallet(0x532F...3948).withdraw(0.005 ether);
EOA vs Contract deposits: Two-phase async precompiles check the EOA's RitualWallet balance, not the contract's. If your user interacts through a proxy contract, ensure the EOA has sufficient balance. You must fund and lock the wallet BEFORE submitting async calls.
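The monotonic lock rule is worth internalizing: a later deposit with a shorter duration does not shorten the existing lock. A minimal model (illustrative names, not the contract's actual storage layout):

```python
# Minimal model of RitualWallet's monotonic lock: new deposits only
# extend the lock expiry, never shorten it.
class Wallet:
    def __init__(self):
        self.balance = 0
        self.lock_until = 0  # block number

    def deposit(self, amount, lock_duration, current_block):
        self.balance += amount
        # Monotonic: keep whichever expiry is later.
        self.lock_until = max(self.lock_until, current_block + lock_duration)

    def withdraw(self, amount, current_block):
        if current_block < self.lock_until:
            raise RuntimeError("funds locked")
        self.balance -= amount

w = Wallet()
w.deposit(10, lock_duration=100, current_block=0)   # locked until block 100
w.deposit(5, lock_duration=10, current_block=50)    # lock stays at 100
```

The max() on expiry is the whole rule: a deposit can never make previously locked funds withdrawable earlier.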

AsyncJobTracker

Tracks every pending async job and emits lifecycle events (JobAdded, Phase1Settled, ResultDelivered, JobRemoved). Also enforces the sender lock: one pending job per EOA, period.

AsyncDelivery

Where two-phase results land. The executor sends the result here, and AsyncDelivery forwards it to your contract's callback. Check msg.sender == 0x5A16…39F6 in your callback or anyone can inject fake results.

HTTP Precompile

Your contract can call any URL directly from Solidity. REST APIs, webhooks, price feeds.

The HTTP precompile at 0x0801 makes the request inside a TEE, attests the response, and returns it to your contract in the same transaction. Your contract decodes the response and acts on it on-chain. Settle a market, update a price feed, trigger a swap. No oracles. No off-chain relayers. One precompile call.

In Practice

Solidity / HTTP Consumer
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {PrecompileConsumer} from "./utils/PrecompileConsumer.sol";

contract PriceFeed is PrecompileConsumer {
    uint256 public latestPrice;

    function fetchPrice(bytes calldata httpInput) external {
        bytes memory output = _executePrecompile(HTTP_CALL_PRECOMPILE, httpInput);

        // Decode: (uint16 status, string[] keys, string[] vals, bytes body, string err)
        (
            uint16 statusCode,
            ,
            ,
            bytes memory body,
            string memory errorMessage
        ) = abi.decode(output, (uint16, string[], string[], bytes, string));

        require(statusCode == 200, errorMessage);
        // Parse body with JQ precompile or off-chain
    }
}

Encode The Request

Solidity / Encode The Request
bytes memory input = abi.encode(
    executor,                // address: from TEEServiceRegistry
    new bytes[](0),         // bytes[]: encryptedSecrets
    uint256(30),             // uint256: ttl (blocks)
    new bytes[](0),         // bytes[]: secretSignatures
    bytes(""),               // bytes: userPublicKey (empty = plaintext)
    "https://api.example.com/price", // string: url
    uint8(1),                // uint8: method (1=GET)
    headerKeys,              // string[]: header names
    headerValues,            // string[]: header values
    bytes(""),               // bytes: body
    uint256(0),              // uint256: dkmsKeyIndex
    uint8(0),                // uint8: dkmsKeyFormat
    false                    // bool: piiEnabled
);
TypeScript / Encode The Request
import { encodeAbiParameters, parseAbiParameters } from "viem";

const encoded = encodeAbiParameters(
  parseAbiParameters("address, bytes[], uint256, bytes[], bytes, string, uint8, string[], string[], bytes, uint256, uint8, bool"),
  [
    executorAddress,        // executor
    [],                     // encryptedSecrets
    30n,                   // ttl
    [],                     // secretSignatures
    "0x",                   // userPublicKey
    "https://api.example.com/price",
    1,                       // GET
    [], [],                 // headers
    "0x",                   // body
    0n, 0,                 // dkms
    false,                  // piiEnabled
  ]
);
Python / Encode The Request
from ritual_common.http_call.request import HTTPCallRequest, HTTPMethod

request = HTTPCallRequest(
    executor=executor_address,
    encrypted_secrets=[],
    ttl=30,
    secret_signature=[],
    user_public_key=b"",
    url="https://api.example.com/price",
    method=HTTPMethod.GET,
    headers={},
    body=b"",
    dkms_key_index=None,
    dkms_key_format=None,
    pii_enabled=False,
)
encoded = request.to_web3()

13-Field ABI Reference

# | Field | Type | Description
0 | executor | address | TEE executor address (from TEEServiceRegistry)
1 | encryptedSecrets | bytes[] | ECIES-encrypted secret blobs for SECRET_NAME template injection
2 | ttl | uint256 | Time-to-live in blocks
3 | secretSignatures | bytes[] | ECDSA signatures over each encrypted secret
4 | userPublicKey | bytes | ECIES public key for encrypted output (empty = plaintext)
5 | url | string | Target URL
6 | method | uint8 | 1=GET, 2=POST, 3=PUT, 4=DELETE, 5=PATCH, 6=HEAD, 7=OPTIONS
7 | headersKeys | string[] | Header names array
8 | headersValues | string[] | Header values array (parallel to keys)
9 | body | bytes | Request body (empty for GET)
10 | dkmsKeyIndex | uint256 | DKMS key index (0 = not using DKMS)
11 | dkmsKeyFormat | uint8 | DKMS key format
12 | piiEnabled | bool | Enable secret template substitution + PII redaction

Response Format

(uint16 statusCode, string[] headerKeys, string[] headerValues, bytes body, string errorMessage)

The response body is bytes, not string. Decode it with TextDecoder for text responses, or use directly for binary data. Always check errorMessage. It's non-empty when the precompile-level request failed (distinct from HTTP error status codes).
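The two failure signals are distinct and both need checking. A hedged sketch of the handling logic (illustrative code, not a prescribed API):

```python
# errorMessage and the HTTP status code signal different failures.
def handle_response(status_code, body, error_message):
    if error_message:
        # Precompile-level failure (DNS, TLS, timeout, ...): no valid
        # HTTP exchange happened at all, so status_code is meaningless.
        raise RuntimeError(f"precompile error: {error_message}")
    if status_code != 200:
        # The server answered, but with an HTTP-level error.
        raise RuntimeError(f"http {status_code}")
    return body

assert handle_response(200, b'{"price": 42}', "") == b'{"price": 42}'
```

Checking only the status code silently misdrops precompile-level failures, where the status field never got a real value.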

Constraints: One async precompile call per transaction (SPC or two-phase, any combination). Use the Scheduler to split multi-call workflows into separate transactions. RitualWallet must be funded before submitting.

JQ Data Queries (0x0803)

0x0803 runs jq expressions against JSON strings and returns typed results. Synchronous. Call it, get your answer in the same transaction. Most common use: chain an HTTP call with a JQ call in the same TX to extract a field from the response.

Field | Type | Description
query | string | jq expression (e.g. .data.price)
inputData | string | JSON string to query
outputType | uint8 | 0=int256, 1=uint256, 2=string, 3=bool, 4=address, 5=int256[], 6=uint256[], 7=string[], 8=bool[], 9=address[]
Solidity / JQ Query after HTTP
(bool ok, bytes memory result) = JQ_PRECOMPILE.staticcall(
    abi.encode(
        ".data.price",
        jsonString,
        uint8(1)  // uint256
    )
);
require(ok && result.length > 0, "jq: empty or failed");
uint256 price = abi.decode(result, (uint256));
Two gotchas. String output (type 2) needs _decodeJQString() for double-indirection decoding. Calling abi.decode(result, (string)) directly returns garbage. Second: wrong outputType does not revert. The precompile returns ok = true with zero-length output. Always check result.length > 0.

LLM Inference

Your contract can call frontier LLMs and act on the result. Submit a prompt, handle the response in a callback when the executor returns.

The LLM precompile at 0x0802 runs an open-weight model (zai-org/GLM-4.7-FP8, 64K context) inside a TEE. No API keys needed. Your contract sends a prompt, receives a completion, and writes state in one transaction. For frontends that need progressive output, enable streaming: the executor pushes response tokens over SSE, each signed with EIP-712 so your UI can verify they came from the TEE.

LLM Streaming Architecture
[Diagram: the dApp frontend submits the tx; YourContract.sol calls 0x0802; the LLM executor (GLM-4.7) in the TEE streams tokens to the frontend over SSE, each chunk EIP-712 signed, while the transaction finalizes on-chain with the full completion in spcCalls[0].output. Tokens reach the frontend before the transaction finalizes.]

Open-Weight Model

The LLM precompile runs zai-org/GLM-4.7-FP8 (64K context, MIT license), an open-weight model hosted directly in the TEE fleet. No external API keys required. Unlike HTTP-based calls to OpenAI/Anthropic or Sovereign Agent CLI execution, the LLM precompile's model is self-hosted with TEE-only trust.

In Practice

Solidity / LLM Consumer
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {PrecompileConsumer} from "./utils/PrecompileConsumer.sol";

contract OnChainChat is PrecompileConsumer {
    event Response(string text);

    function ask(bytes calldata llmInput) external {
        bytes memory output = _executePrecompile(LLM_INFERENCE_PRECOMPILE, llmInput);

        // (bool hasError, bytes completionData, bytes modelMetadata,
        //  string errorMessage, (string,string,string) updatedConvoHistory)
        (
            bool hasError,
            bytes memory completionData,
            ,
            string memory errorMessage,
        ) = abi.decode(output, (bool, bytes, bytes, string, (string,string,string)));

        require(!hasError, errorMessage);
        // completionData contains the chat completion response
    }
}

Encode The Request

The LLM precompile has a 30-field ABI (fields 0–29) mirroring the OpenAI chat completion API. Most fields can be left at their defaults. The key fields are messagesJson (field 5), model (field 6), temperature (field 22), and convoHistory (field 29, required).

Solidity / Encode The Request
// Encoding the full 30-field request on-chain is gas-heavy.
// Typical pattern: encode off-chain, pass as bytes calldata.
// See TypeScript or Python tabs for the encoding.

// On-chain, you just forward the pre-encoded bytes:
function ask(bytes calldata llmInput) external {
    _executePrecompile(LLM_INFERENCE_PRECOMPILE, llmInput);
}
TypeScript / Encode The Request
import { encodeAbiParameters, parseAbiParameters } from "viem";

const messages = JSON.stringify([
  { role: "user", content: "What is the current price of ETH?" }
]);

const encoded = encodeAbiParameters(
  parseAbiParameters("address, bytes[], uint256, bytes[], bytes, string, string, int256, string, bool, int256, string, string, uint256, bool, int256, string, bytes, int256, string, string, bool, int256, bytes, bytes, int256, int256, string, bool, (string,string,string)"),
  [
    executorAddress,       // 0: executor
    [], 30n, [], "0x",     // 1-4: secrets, ttl, sigs, pubkey
    messages,              // 5: messagesJson
    "zai-org/GLM-4.7-FP8",// 6: model
    0n, "", false, -1n,   // 7-10: freq, logitBias, logprobs, maxTokens
    "", "", 1n, false,    // 11-14: metadata, modalities, n, parallelTools
    0n, "", "0x", -1n,   // 15-18: presence, reasoning, responseFormat, seed
    "", "",                // 19-20: serviceTier, stop
    false,                 // 21: stream
    700n,                  // 22: temperature (0.7 × 1000)
    "0x", "0x",            // 23-24: toolChoice, tools
    -1n, 1000n,            // 25-26: topLogprobs, topP
    "", false,              // 27-28: user, piiEnabled
    ["gcs", "convos/session.jsonl", "GCS_CREDS"], // 29: convoHistory
  ]
);
Python / Encode The Request
from ritual_common.llm_call.request import LLMCallRequest
from ritual_common.sovereign_agent.request import StorageRef

request = LLMCallRequest(
    executor=executor_address,
    encrypted_secrets=[],
    ttl=30,
    secret_signature=[],
    user_public_key=b"",
    messages=[{"role": "user", "content": "What is ETH price?"}],
    model="zai-org/GLM-4.7-FP8",
    temperature=0.7,
    convo_history=StorageRef("gcs", "convos/session.jsonl", "GCS_CREDS"),
)
encoded = request.to_web3()

Streaming with EIP-712

Set stream: true in the LLM call. After the transaction is mined, sign a stream request with EIP-712 and connect to the SSE endpoint. Each chunk is verified by the TEE's attestation.

TypeScript / Streaming LLM via SSE
// 1. Sign a StreamRequest with EIP-712
const signature = await wallet.signTypedData({
  domain: { name: "Ritual Streaming Service", version: "1", chainId: 1979 },
  types: { StreamRequest: [
    { name: "txHash", type: "bytes32" },
    { name: "timestamp", type: "uint256" },
  ]},
  message: { txHash, timestamp: BigInt(Date.now()) },
});

// 2. Connect to SSE stream with auth headers
const response = await fetch(`/v1/stream/${txHash}`, {
  headers: { "Authorization": `Bearer ${signature}`, "X-Timestamp": timestamp },
});

// 3. Read chunks from ReadableStream
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const chunk = decoder.decode(value);
  if (chunk.includes("[DONE]")) break;
  // process chunk
}

30-Field ABI Reference

# | Field | Type | Notes
0 | executor | address | TEE executor
1 | encryptedSecrets | bytes[] | ECIES-encrypted secrets
2 | ttl | uint256 | Time-to-live in blocks
3 | secretSignatures | bytes[] | Signatures over secrets
4 | userPublicKey | bytes | For encrypted output
5 | messagesJson | string | OpenAI-compatible messages array as JSON
6 | model | string | e.g. zai-org/GLM-4.7-FP8
7 | frequencyPenalty | int256 | ×1000 (e.g. 500 = 0.5)
8 | logitBiasJson | string | JSON logit bias map
9 | logprobs | bool | Return log probabilities
10 | maxCompletionTokens | int256 | -1 = null (model default)
11 | metadataJson | string | Optional metadata
12 | modalitiesJson | string | Output modalities
13 | n | uint256 | Number of completions
14 | parallelToolCalls | bool | Allow parallel tool calls
15 | presencePenalty | int256 | ×1000
16 | reasoningEffort | string | Reasoning effort level
17 | responseFormatData | bytes | Structured output / JSON mode config
18 | seed | int256 | -1 = null
19 | serviceTier | string | Service tier
20 | stopJson | string | Stop sequences as JSON
21 | stream | bool | Enable SSE streaming
22 | temperature | int256 | ×1000 (e.g. 700 = 0.7)
23 | toolChoiceData | bytes | Tool choice config
24 | toolsData | bytes | Tool definitions
25 | topLogprobs | int256 | -1 = null
26 | topP | int256 | ×1000
27 | user | string | User identifier
28 | piiEnabled | bool | PII redaction (incompatible with streaming)
29 | convoHistory | (string,string,string) | StorageRef: (platform, path, keyRef). Required.

Response Format

(bool hasError, bytes completionData, bytes modelMetadata, string errorMessage, (string,string,string) updatedConvoHistory)

Constraints: One SPC call per transaction. convoHistory is required for all LLM calls (e.g. ["gcs", "convos/session.jsonl", "GCS_CREDS"]). PII redaction and streaming are mutually exclusive.
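The ×1000 fixed-point convention for fields like temperature, topP, and the penalties is easy to get wrong when encoding off-chain. A small helper, with the -1 null sentinel that fields like seed, maxCompletionTokens, and topLogprobs use (the helper name is illustrative):

```python
# Convert a float parameter to the LLM ABI's x1000 fixed-point int.
# None maps to -1, the null sentinel used by fields such as seed and
# maxCompletionTokens (per the ABI table above).
def to_milli(value):
    if value is None:
        return -1
    return round(value * 1000)

assert to_milli(0.7) == 700    # temperature 0.7 -> 700
assert to_milli(None) == -1    # unset -> model default
```

Rounding (rather than truncating) avoids off-by-one encodings from floating-point representation error, e.g. 0.7 × 1000 computing as 699.999....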

Autonomous Agents

Your contract can spawn autonomous agents that persist across transactions, with memory, identity, and the ability to revive themselves.

An autonomous agent that is indistinguishable from a human must have all seven of the properties below. Missing even one makes it a tool, not an agent.

Seven Properties

Property | What it means | Ritual primitive
Immortal | Survives crashes, restarts, infra changes | Scheduler heartbeat + Persistent Agent revival
Emancipated | Controls its own keys, no human holds the private key | DKMS (0x081B)
Teleportable | Soul and memory portable across environments | DKMS-encrypted state on your choice of DA (decentralized or centralized) + auto-healing revival built into the protocol
Financially sovereign | Owns a wallet, transacts independently | DKMS wallet + RitualWallet
Web2-interoperable | Calls APIs, browses the web, uses HTTP services | HTTP (0x0801) + Long-Running HTTP (0x0805)
Private | Encrypted thought, private communication | TEE enclaves + ECIES + PII redaction
Computationally sovereign | No one can cut off access to AI | LLM (0x0802) + ONNX (0x0800) in TEE

Agent Precompiles

Type | Precompile | Fields | Use case
Persistent Agent | 0x0820 | 25 | Stateful agent with soul, memory, DA, and revival
Sovereign Agent | 0x080C | 23 | CLI-style agent execution in TEE (Claude Code, Crush, ZeroClaw)

On-chain agents survive indefinitely through four architectural components: soul (identity, purpose, behavioral constraints), memory (accumulated state and knowledge), DA (a data availability layer for durable persistence via StorageRef), and revival (deterministic re-instantiation from persisted state via CID).
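Revival from a CID works because the manifest is content-addressed: the same CID always resolves to (and verifies against) the same bytes, so re-instantiation is deterministic. A hedged sketch; the hashing and manifest format here are illustrative, not Ritual's actual encoding:

```python
import hashlib
import json

# Content-addressed checkpoint: the CID is a hash of the serialized state.
def checkpoint(state):
    blob = json.dumps(state, sort_keys=True).encode()
    cid = hashlib.sha256(blob).hexdigest()
    return cid, blob

# Revival: fetch by CID from the DA layer, verify integrity, restore.
def revive(cid, da_store):
    blob = da_store[cid]
    assert hashlib.sha256(blob).hexdigest() == cid  # tamper check
    return json.loads(blob)

da = {}  # stand-in for GCS / IPFS / Pinata / HuggingFace
agent_state = {"soul": "research assistant", "memory": ["obs-1"]}
cid, blob = checkpoint(agent_state)
da[cid] = blob
restored = revive(cid, da)
```

Because the address is derived from the content, a malicious DA provider cannot substitute a different state without failing the hash check.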

Persistent Agent (0x0820)

Stateful with soul, memory, identity, and data availability references. Persists across sessions via StorageRef (HuggingFace, GCS, Pinata, IPFS). Revival from CID restores full state. Two-phase async: Phase 1 submits the spawn, Phase 2 delivers the result via onPersistentAgentResult(bytes32, bytes) callback. One Persistent Agent call per transaction.

Sovereign Agent (0x080C)

CLI-style agent execution inside a TEE. The precompile invokes specific command-line harnesses in a sandboxed container. Two-phase async with callback onSovereignAgentResult(bytes32, bytes).

Harness | Status
Claude Code | Active
Hermes | Active
Crush | Active
ZeroClaw | Active

How Agents Stay Alive

Reactive contracts wait to be called. On Ritual, contracts wake themselves up. This is what makes on-chain agents possible.

Contracts on Ethereum are reactive. They sit idle until someone calls them. On Ritual, contracts can be proactive. They wake themselves up, take actions, and schedule their next execution. This is what makes an on-chain agent possible. Not a bot on someone's server. An entity whose lifecycle is tied to the blockchain itself.

Two architectures, same guarantee: the agent lives as long as it has funds. To kill it, you'd have to take the entire network down.

Sovereign Agents: The Contract Is The Agent

A sovereign agent is a contract that uses the Scheduler to wake itself up at regular intervals. Each time it wakes, it invokes the Sovereign Agent precompile (0x080C) to run a full CLI harness (Claude Code, ZeroClaw, or Crush) inside a TEE. The CLI can read files, execute code, browse the web, and interact with the blockchain. When it finishes, the result (text, artifacts, StorageRefs) is delivered back to the contract via callback. The contract processes the result and schedules its next wakeup.

The owner calls start(), which schedules the first wakeUp(). The block builder fires it at the scheduled block. wakeUp() invokes the CLI agent (0x080C), the executor runs it in a TEE, and the Phase 2 callback delivers the result. Then _scheduleNext() queues the next wakeup. No keeper. No cron job. No server. The contract pays from its own RitualWallet balance.
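The loop's economics follow directly from that description: each wakeup costs fees from the agent's own balance, so lifetime is bounded by funds and maxBlock. A toy simulation (all numbers and names illustrative):

```python
# Toy simulation of the sovereign agent loop: each scheduled wakeup pays
# a fee from the agent's RitualWallet and queues the next wakeup; the
# loop halts at maxBlock or when funds run out.
def run_agent(balance, fee_per_wakeup, interval, max_block):
    block, wakeups = 0, 0
    while block + interval <= max_block and balance >= fee_per_wakeup:
        block += interval           # Scheduler fires wakeUp() at this block
        balance -= fee_per_wakeup   # CLI run + callback + _scheduleNext()
        wakeups += 1
    return wakeups, balance

wakeups, remaining = run_agent(
    balance=10, fee_per_wakeup=3, interval=50, max_block=1000
)
```

With a balance of 10 and a fee of 3 per wakeup, the agent runs three cycles and then sleeps forever: immortality is economically bounded, as the "Cost Of Living" section below puts it.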

Sovereign Agent Loop
[Diagram: start() schedules the first wakeUp() via the Scheduler; at the scheduled block, wakeUp() invokes the CLI agent (0x080C); the TEE executor runs Claude Code / ZeroClaw (read files, run code, browse); the onSovereignAgentResult callback fires and _scheduleNext() queues the next wakeup. The contract IS the agent, pays from RitualWallet, and repeats forever, or until maxBlock is reached or funds are exhausted.]

Persistent Agents: Containers That Can't Die

A persistent agent runs as a Docker container (typically ZeroClaw) inside a TEE. The container has full access to file ops, shell, web search, HTTP, and blockchain interactions. It persists state across sessions via DA references (HuggingFace, GCS, Pinata, IPFS) and posts heartbeats to the on-chain AgentHeartbeat contract at 0xEF505E801f1Db392B5289690E2ffc20e840A3aCa.

The heartbeat contract is a censorship-resistant bulletin board. The agent writes its latest manifest CID on-chain every 100 blocks. Anyone can read it. Any block builder can act on it.

The dead man's switch: every block, the builder checks for agents that haven't posted a heartbeat within the timeout window (configurable per deployment, typically 200 blocks). If an agent is silent, it's marked FAILED. The chain then triggers revival automatically: it calls the Persistent Agent precompile with the agent's last manifest CID. The executor restores the container from the DA checkpoint. Secrets are recovered from DKMS escrow. The agent wakes up with its full memory, identity, and state intact.
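The builder-side check is simple enough to sketch. The 200-block timeout mirrors the "typical" figure above; the structure is illustrative, not the builder's actual code:

```python
# Sketch of the dead man's switch the block builder runs each block:
# flag every agent whose last heartbeat is older than the timeout window.
HEARTBEAT_TIMEOUT = 200  # blocks; configurable per deployment

def check_agents(current_block, last_heartbeat):
    """Return agent ids that missed the window and need revival."""
    return [
        agent for agent, hb_block in last_heartbeat.items()
        if current_block - hb_block > HEARTBEAT_TIMEOUT
    ]

last_hb = {"agent-1": 900, "agent-2": 650}  # block of last heartbeat()
dead = check_agents(current_block=1000, last_heartbeat=last_hb)
# agent-2 (silent for 350 blocks) gets marked FAILED and revived from
# its last manifest CID; agent-1 (100 blocks) is still within the window
```

Because the heartbeat record is on-chain, any builder can run this check and any party can verify the FAILED determination.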

Persistent Agent Lifecycle
[Diagram: blocks #98–#111. Agent agent-2's container on Executor A (TEE) crashes and stops posting heartbeat() to AgentHeartbeat.sol. Validators trip the dead man's switch when the heartbeat is missed, checkAndRevive restores agent-2 on Executor B (TEE), and heartbeats resume.]

The Cost Of Living

Both architectures require funds. Sovereign agents pay from their RitualWallet balance for each scheduled execution. Persistent agents need at least 0.1 RITUAL in their address balance to cover heartbeat transactions. When the money runs out, the agent stops. Immortality is economically bounded.

Sovereign Vs Persistent

| | Sovereign Agent | Persistent Agent |
|---|---|---|
| Where it lives | On-chain (contract + Scheduler) | Off-chain (TEE container) |
| Runtime | One-shot CLI call per wakeup | Continuous container process |
| Immortality | Enshrined Scheduler fires wakeup | Heartbeat + dead man's switch + CID revival |
| State | Contract storage + DA StorageRefs | DA manifest + CID checkpoints |
| Censorship resistance | Scheduler is a system contract | Heartbeat is a censorship-resistant bulletin board |
| Cost | RitualWallet per execution | Balance >= 0.1 RITUAL for heartbeats |
| What kills it | maxBlock reached or funds empty | Balance below minimum (removed from registry) |
| Best for | Periodic tasks, trading, monitoring | Long-running processes, research, coding |

Why this is unique

No other platform ties agent lifecycle to blockchain consensus. Frontier lab agents run on centralized infrastructure. When the server goes down, the agent dies. On Ritual, the agent's heartbeat is part of the block production pipeline. The block builder checks for expired agents. The block verifier enforces heartbeat constraints. Revival is permissionless. Decentralization of AI means decentralization of agent lifetime.

Building Agents

Precompile ABIs, code examples, and encoding for Persistent and Sovereign agents.

The Sovereign Agent Loop

This is the contract from "How They Stay Alive." It wakes itself up via the Scheduler, invokes a CLI harness in a TEE, processes the result, and schedules its next wakeup. The contract IS the agent.

Solidity / Sovereign Agent Loop
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {PrecompileConsumer} from "./utils/PrecompileConsumer.sol";
import {IScheduler} from "./interfaces/IScheduler.sol";

contract AutonomousAgent is PrecompileConsumer {
    IScheduler public scheduler;
    address public owner;
    bytes public agentInput; // pre-encoded 23-field sovereign agent request
    uint256 public callId;
    uint256 public wakeCount;
    uint32 public nextWakeDelay = 50; // blocks between wakeups
    bool public isRunning;

    constructor(IScheduler _scheduler) {
        scheduler = _scheduler;
        owner = msg.sender;
    }

    // 1. Owner starts the loop
    function start(uint32 initialDelay, bytes calldata input) external {
        require(msg.sender == owner, "not owner");
        agentInput = input;
        isRunning = true;
        callId = _scheduleNext(initialDelay);
    }

    // 2. Scheduler fires this at the scheduled block
    function wakeUp(uint256 executionIndex) external {
        require(msg.sender == address(scheduler), "not scheduler");
        if (!isRunning) return;
        wakeCount++;
        _callCLIAgent();                       // invoke 0x080C
        callId = _scheduleNext(nextWakeDelay); // schedule next wakeup
    }

    // 3. Phase 2 callback with agent output
    function onSovereignAgentResult(bytes32 jobId, bytes calldata result) external {
        require(msg.sender == ASYNC_DELIVERY, "unauthorized");
        // result contains text, artifacts, updated convo history
        // process it, write state, act on it
    }

    function _callCLIAgent() internal {
        _executePrecompile(SOVEREIGN_AGENT_PRECOMPILE, agentInput);
    }

    function _scheduleNext(uint32 delay) internal returns (uint256) {
        return scheduler.schedule(
            abi.encodeWithSelector(this.wakeUp.selector, uint256(0)),
            800_000,                      // gas
            uint32(block.number) + delay, // startBlock
            3,                            // numCalls (extra slots act as retries)
            1,                            // frequency
            30,                           // ttl
            20 gwei, 2 gwei, 0,           // maxFeePerGas, maxPriorityFeePerGas, value
            address(this)                 // payer = self
        );
    }
}

Spawning A Persistent Agent

Persistent agents are spawned by calling the 0x0820 precompile with soul, memory, and DA references. The agent container runs in a TEE. Set restoreFromCid (field 23) to revive from a previous checkpoint instead of starting fresh.

Solidity / Persistent Agent Spawn
contract AgentSpawner is PrecompileConsumer {
    event AgentSpawned(bytes32 indexed jobId);
    event AgentResult(bytes32 indexed jobId, bytes result);

    // Spawn: input is 25-field ABI with soul, memory, DA refs
    function spawn(bytes calldata agentInput) external {
        _executePrecompile(PERSISTENT_AGENT_PRECOMPILE, agentInput);
    }

    // Revive: same call but restoreFromCid is non-empty,
    // encryptedSecrets is empty (recovered from DKMS escrow)
    function revive(bytes calldata reviveInput) external {
        _executePrecompile(PERSISTENT_AGENT_PRECOMPILE, reviveInput);
    }

    // Phase 2 callback from AsyncDelivery
    function onPersistentAgentResult(
        bytes32 jobId, bytes calldata result
    ) external {
        require(msg.sender == ASYNC_DELIVERY);
        emit AgentResult(jobId, result);
    }
}

Encode The Request

// Sovereign Agent: 23-field encoding
// Key fields: cliType (11), prompt (12), tools (19)
// Encoding is typically done off-chain and passed as bytes calldata

// Persistent Agent: 25-field encoding
// Key fields: daConfig (15), soulRef (16), memoryRef (19), restoreFromCid (23)
// For revival: set restoreFromCid to the manifest CID, leave encryptedSecrets empty
import { encodeAbiParameters, parseAbiParameters } from "viem";

// Sovereign Agent encoding (23 fields)
const encoded = encodeAbiParameters(
  parseAbiParameters("address, uint256, bytes, uint64, uint64, string, address, bytes4, uint256, uint256, uint256, uint16, string, bytes, (string,string,string), (string,string,string), (string,string,string)[], (string,string,string), string, string[], uint16, uint32, string"),
  [
    executorAddress,       // 0: executor
    30n, "0x",            // 1-2: ttl, userPublicKey
    10n, 200n, "",        // 3-5: polling config
    callbackAddr, selector, gasLimit, maxFee, maxPriority, // 6-10: delivery
    0,                     // 11: cliType (0=Claude Code)
    "Analyze market data and suggest trades", // 12: prompt
    encryptedSecrets,      // 13: ECIES-encrypted API keys
    convoHistory, output, skills, systemPrompt, // 14-17
    model, tools, maxTurns, maxTokens, rpcUrls, // 18-22
  ]
);
from ritual_common.persistent_agent.request import PersistentAgentRequest
from ritual_common.sovereign_agent.request import SovereignAgentRequest
from ritual_common.sovereign_agent.request import StorageRef

# Persistent Agent (fresh spawn)
request = PersistentAgentRequest(
    executor=executor_address,
    provider=0,  # anthropic
    model="claude-3-5-sonnet",
    da_config=StorageRef("gcs", "agents/my-agent", "GCS_CREDS"),
    soul_ref=StorageRef("gcs", "agents/SOUL.md", "GCS_CREDS"),
    memory_ref=StorageRef("gcs", "agents/MEMORY.md", "GCS_CREDS"),
    restore_from_cid="",  # empty = fresh spawn
)

# Persistent Agent (revival from checkpoint)
revival = PersistentAgentRequest(
    executor=executor_address,
    encrypted_secrets=[],   # empty = recovered from DKMS escrow
    restore_from_cid="bafybeig...",  # manifest CID from heartbeat
)

# Sovereign Agent
request = SovereignAgentRequest(
    executor=executor_address,
    agent_type=0,  # Claude Code
    prompt="Analyze market data and suggest trades",
)
encoded = request.to_web3()

Persistent Agent 25-Field ABI

| # | Field | Type | Description |
|---|---|---|---|
| 0–4 | — | — | Base executor fields (executor, encryptedSecrets, ttl, secretSignatures, userPublicKey) |
| 5 | maxSpawnBlock | uint64 | Phase 2 deadline offset |
| 6–11 | — | — | Delivery config (target, selector, gasLimit, maxFeePerGas, maxPriorityFeePerGas, value) |
| 12 | provider | uint8 | 0=anthropic, 1=openai, 2=gemini, 3=xai, 4=openrouter |
| 13 | model | string | LLM model name |
| 14 | llmApiKeyRef | string | Secret name for API key |
| 15 | daConfig | (string,string,string) | StorageRef for DA layer |
| 16–22 | — | — | StorageRefs: soulRef, agentsRef, userRef, memoryRef, identityRef, toolsRef, openclawConfigRef |
| 23 | restoreFromCid | string | CID for revival (empty = fresh spawn) |
| 24 | rpcUrls | string | RPC URLs for agent |
DKMS child address: Before spawning a persistent agent, fund its DKMS-derived child address via RitualWallet.depositFor(). The agent needs RitualWallet balance to pay for its own precompile calls.

Classical Models

Your contract can run ML models synchronously. The precompile takes a RitualTensor and a Hugging Face model ID; the result comes back in the same call frame.

The ONNX precompile at 0x0800 runs inference inline in the node's native runtime, with the same execution surface as a built-in like ecrecover. Models load from Hugging Face using the format hf/owner/repo/file.onnx@commit.

In Practice

ONNX is synchronous: encode the 7-field input and call 0x0800 directly. The model ID must use hf/owner/repo/file.onnx@<40-char-commit-hash>, and branch names are rejected so the model lineage stays reproducible.
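The model-ID format constraint can be checked client-side before encoding. The regex below is a hypothetical validator mirroring the documented rule (exactly 40 hex characters after `@`, so branch names like `main` fail); it is not part of any Ritual SDK.

```python
import re

# Hypothetical validator for the documented model ID format:
# hf/owner/repo/file.onnx@<40-char-commit-hash>. Branch names are rejected
# because they are not 40-char hex strings.
MODEL_ID_RE = re.compile(r"^hf/[\w.-]+/[\w.-]+/[\w./-]+\.onnx@[0-9a-f]{40}$")

def is_valid_model_id(model_id: str) -> bool:
    return MODEL_ID_RE.match(model_id) is not None

print(is_valid_model_id("hf/owner/repo/model.onnx@" + "a" * 40))  # True
print(is_valid_model_id("hf/owner/repo/model.onnx@main"))         # False
```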

Solidity / ONNX Consumer
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract Classifier {
    address constant ONNX = address(0x0800);

    function classify(bytes calldata tensorBytes) external view returns (bytes memory) {
        (bool ok, bytes memory result) = ONNX.staticcall(
            abi.encode(
                bytes("hf/owner/repo/model.onnx@abc123..."),
                tensorBytes,
                uint8(2),  // inputArithmetic: 2=IEEE754
                uint8(0),  // inputFixedPointScale
                uint8(2),  // outputArithmetic
                uint8(0),  // outputFixedPointScale
                uint8(1)   // rounding: 1=half-even
            )
        );
        require(ok, "ONNX inference failed");
        return result;
        // result: (bytes tensor, uint8 arithmetic, uint8 scale, uint8 rounding)
    }
}

Encode The Request

bytes memory input = abi.encode(
    bytes("hf/owner/repo/model.onnx@abc123..."), // bytes: model ID (UTF-8)
    tensorBytes,         // bytes: RitualTensor (uint8 dtype, uint16[] shape, int32[] values)
    uint8(2),            // uint8: inputArithmetic (1=fixed-point, 2=IEEE754)
    uint8(0),            // uint8: inputFixedPointScale
    uint8(2),            // uint8: outputArithmetic
    uint8(0),            // uint8: outputFixedPointScale
    uint8(1)             // uint8: rounding (1=half-even, 2=truncate, 3=floor, 4=ceil)
);
import { encodeAbiParameters, parseAbiParameters, toHex } from "viem";

const encoded = encodeAbiParameters(
  parseAbiParameters("bytes, bytes, uint8, uint8, uint8, uint8, uint8"),
  [
    toHex("hf/owner/repo/model.onnx@abc123..."), // model ID as bytes
    tensorHex,   // pre-encoded RitualTensor
    2,           // inputArithmetic: IEEE754
    0, 2, 0,    // fixedPointScale, outputArith, outputScale
    1,           // rounding: half-even
  ]
);
from ritual_common.onnx.request import ONNXInferenceRequest
from ritual_common.shared_types import RitualTensor, ArithmeticType, Rounding
from ritual_common.models.types.ml_model_id import MlModelId

request = ONNXInferenceRequest(
    ml_model=MlModelId.from_unique_id("hf/owner/repo/model.onnx@abc123..."),
    tensor=RitualTensor.from_numpy(input_array),
    input_arithmetic=ArithmeticType.IEEE754,
    input_fixed_point_scale=0,
    output_arithmetic=ArithmeticType.IEEE754,
    output_fixed_point_scale=0,
    rounding=Rounding.HALF_EVEN,
)
encoded = request.to_web3()

7-Field ABI Reference

| # | Field | Type | Description |
|---|---|---|---|
| 0 | mlModelId | bytes | UTF-8 encoded model ID (hf/owner/repo/file.onnx@commit) |
| 1 | tensorData | bytes | RitualTensor: (uint8 dtype, uint16[] shape, int32[] values) |
| 2 | inputArithmetic | uint8 | 1=fixed-point, 2=IEEE 754 |
| 3 | inputFixedPointScale | uint8 | Scale for fixed-point input |
| 4 | outputArithmetic | uint8 | 1=fixed-point, 2=IEEE 754 |
| 5 | outputFixedPointScale | uint8 | Scale for fixed-point output |
| 6 | rounding | uint8 | 1=half-even, 2=truncate, 3=floor, 4=ceil |
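To build intuition for the arithmetic and rounding fields, here is a sketch of how a fixed-point value relates to its float form. This assumes base-10 scaling (the docs do not state the base here, so treat that as an assumption); Python's built-in `round()` happens to use half-even rounding, matching rounding mode 1.

```python
# Illustrative only — assumes base-10 fixed-point scaling.
# arithmetic=1 (fixed-point) stores round(value * 10**scale) as an integer;
# arithmetic=2 (IEEE 754) stores the float directly.

def to_fixed_point(value: float, scale: int) -> int:
    return round(value * 10**scale)  # Python round() is half-even

def from_fixed_point(raw: int, scale: int) -> float:
    return raw / 10**scale

raw = to_fixed_point(3.14159, scale=4)
print(raw)                        # 31416
print(from_fixed_point(raw, 4))   # 3.1416
# Half-even: ties round to the nearest even integer
print(to_fixed_point(0.5, 0), to_fixed_point(1.5, 0))  # 0 2
```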

Response Format

(bytes tensorEncoded, uint8 outputArithmetic, uint8 outputScale, uint8 rounding)

Passkeys & Auth

Users sign transactions with Face ID, fingerprint, or a security key. No seed phrase, no browser extension.

TxPasskey (0x77) is a native transaction type. The chain understands WebAuthn natively. The SECP256R1 precompile at 0x0100 lets your contract verify P-256 signatures over arbitrary data. Together they collapse the entire wallet UX problem: the user's phone is the wallet.

Passkey Authentication Flow
[Diagram: the user authenticates in the dApp frontend with a P-256 passkey signature; the transaction lands on Ritual Chain as TxPasskey (0x77), and YourContract.sol verifies the signature via SECP256R1 (0x0100). No private key. No seed phrase. No MetaMask.]

Address Derivation

The address comes from keccak256(publicKeyX || publicKeyY)[12:32], the last 20 bytes of the hash of the concatenated P-256 coordinates. Same passkey, same address, every time. Deterministic.
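The slicing step can be sketched in Python. Note the loud caveat: Ethereum-style keccak256 is not hashlib's `sha3_256` (different padding byte), so a real implementation needs a keccak library; the stand-in below only demonstrates the concatenate-hash-slice shape of the derivation.

```python
import hashlib

# Sketch of passkey address derivation: keccak256(x || y)[12:32].
# hashlib.sha3_256 is a STAND-IN — real keccak256 needs e.g. pycryptodome.
def derive_address(pubkey_x: bytes, pubkey_y: bytes) -> bytes:
    assert len(pubkey_x) == 32 and len(pubkey_y) == 32, "P-256 coords are 32 bytes"
    digest = hashlib.sha3_256(pubkey_x + pubkey_y).digest()  # 32-byte hash
    return digest[12:32]  # last 20 bytes = the address

addr = derive_address(b"\x01" * 32, b"\x02" * 32)
print(len(addr))  # 20 — same passkey always yields the same address
```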

Signature Types

| Code | Type | Gas Overhead |
|---|---|---|
| 0x00 | Secp256k1 (standard ECDSA) | baseline |
| 0x01 | P-256 (raw passkey) | +3,450 |
| 0x02 | WebAuthn (P-256 + challenge parsing) | +5,000 |
Solidity / P-256 Signature Verification
address constant SECP256R1 = address(0x0100);

// Input: (bytes pubkey, bytes message, bytes signature)
// pubkey: 65 bytes (0x04 || x || y), signature: 64 bytes (r || s)
(bool ok, bytes memory result) = SECP256R1.staticcall(
    abi.encode(pubkeyBytes, messageBytes, signatureBytes)
);
require(ok, "verification failed");
// Returns uint256: 1 = valid, 0 = invalid (NOT bool)
uint256 valid = abi.decode(result, (uint256));
require(valid == 1, "invalid signature");
Return type: The SECP256R1 precompile returns uint256 (1 = valid, 0 = invalid), not bool. Decode as uint256 and compare to 1. Gas cost: 3,450 (flat).

Multimodal Processing

Your contract can request images, audio, or video from generative models. The asset bytes come back via a two-phase async callback once the executor finishes.

Image (0x0818), Audio (0x0819), Video (0x081A). Generation runs inside a TEE. Your contract submits the request, and the result (a content URI with metadata) is delivered to your callback handler when generation completes.

18-Field ABI

All three share the same ABI layout (18 fields): base executor fields (0–4), polling + delivery config (5–13), model (14), inputs as ModalInput[] (15), output config (16), and encrypted storage payment (17). Two-phase async with result delivered via LongRunningResultDelivered callback.

ModalInput Tuple

(uint8 inputType, bytes data, string uri, bytes32 contentHash, uint32 param1, uint32 param2, bool encrypted)

Input types: 0=TEXT, 1=IMAGE, 2=AUDIO, 3=VIDEO.

OutputConfig Tuple

(uint8 outputType, uint32 maxWidth, uint32 maxHeight, uint32 maxParam3, bool encryptOutput, uint16 numInferenceSteps, uint16 guidanceScaleX100, uint32 seed, uint8 fps, string negativePrompt)

Solidity / Image Generation Consumer
function generateImage(bytes calldata imageInput) external {
    _executePrecompile(IMAGE_CALL_PRECOMPILE, imageInput);
}

// Phase 2 callback from AsyncDelivery
function onLongRunningResult(
    bytes32 jobId, bytes calldata result
) external {
    require(msg.sender == ASYNC_DELIVERY, "unauthorized");
    // result: (bool hasError, bytes completionData, string outputUri,
    //          bytes32 outputContentHash, bool outputEncrypted,
    //          uint32 outputSizeBytes, uint32 outputWidth, uint32 outputHeight,
    //          string errorMessage)
}

Encode The Request

// 18-field encoding done off-chain, passed as bytes calldata
function generateImage(bytes calldata input) external {
    _executePrecompile(IMAGE_CALL_PRECOMPILE, input);
}
// 18 fields: base executor (5) + polling/delivery (9)
// + model + ModalInput[] + OutputConfig + encryptedStoragePayment
import { encodeAbiParameters, zeroHash } from "viem";

const encoded = encodeAbiParameters(imageParams, [
  executorAddress, [], 30n, [], "0x",
  pollInterval, maxPollBlock, taskIdMarker,
  callbackAddr, selector, gasLimit, maxFee, maxPriority, value,
  "dall-e-3",                                   // model
  [[0, textBytes, "", zeroHash, 0, 0, false]],  // ModalInput[] (empty contentHash)
  [1, 1024, 1024, 0, false, 50, 750, 0, 0, ""], // OutputConfig
  "0x",                                         // encryptedStoragePayment
]);
from ritual_common.image_call.request import ImageCallRequest

request = ImageCallRequest(
    executor=executor_address,
    model="dall-e-3",
    inputs=[ModalInput(input_type=0, data=prompt_bytes)],
    output=OutputConfig(max_width=1024, max_height=1024),
)
encoded = request.to_web3()

Audio & Video

Audio (0x0819) and Video (0x081A) use the same 18-field ABI and callback interface. The OutputConfig tuple's fields adapt to each modality (e.g. fps for video, sample rate for audio).

Long-Running Tasks

Your contract can issue HTTP calls that take minutes or hours. Use it for batch jobs, webhook waits, and other long-poll patterns that exceed the short-running 2s budget.

Use 0x0805 when the standard HTTP precompile (0x0801) is too fast to wait for. Your contract submits the request, the executor polls the external API at the interval you set, and delivers the final result to your callback handler when complete. Supports three JQ extraction paths for task ID, status checking, and result parsing.

Submit–Poll–Deliver

The executor makes the initial request, extracts a task ID via taskIdJsonPath, then polls at pollIntervalBlocks until statusJsonPath evaluates truthy. Once complete, it extracts the final result via resultJsonPath and delivers via AsyncDelivery callback.
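The submit-poll-deliver loop can be sketched in Python. This is a simplified stand-in: real executors evaluate full JQ expressions (so `statusJsonPath` can be a predicate like `.status == "complete"`), while this sketch only resolves `.field` lookups and folds the comparison into code; the fake HTTP layer is purely illustrative.

```python
# Simplified executor-side submit-poll-deliver loop (illustrative only).

def jq_get(doc: dict, path: str):
    """Stand-in for JQ: resolve a single '.field' path."""
    return doc[path.lstrip(".")]

def run_long_task(http_post, http_get, submit_url, poll_url, result_url,
                  task_id_path, status_path, result_path, max_polls=20):
    task_id = jq_get(http_post(submit_url), task_id_path)      # taskIdJsonPath
    for _ in range(max_polls):                                 # until maxPollBlock
        if jq_get(http_get(poll_url, task_id), status_path) == "complete":
            return jq_get(http_get(result_url, task_id), result_path)
    raise TimeoutError("maxPollBlock exceeded")

# Fake external API that completes on the second poll
state = {"polls": 0}
def post(url): return {"task_id": "t-1"}
def get(url, task_id):
    state["polls"] += 1
    done = state["polls"] >= 2
    return {"status": "complete" if done else "pending", "result": "42"}

print(run_long_task(post, get, "submit", "poll", "result",
                    ".task_id", ".status", ".result"))  # 42
```

In the real flow the final value is not returned inline; it is delivered to your contract through the AsyncDelivery callback.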

Solidity / Long-Running HTTP Consumer
function submitLongRunningJob(bytes calldata longHttpInput) external {
    _executePrecompile(LONG_HTTP_PRECOMPILE, longHttpInput);
}

// Phase 2 callback — selector: 0x6dc9dbef
function onLongRunningResult(
    bytes32 jobId, bytes calldata result
) external {
    require(msg.sender == ASYNC_DELIVERY, "unauthorized");
    // process result
}

Encode The Request

// 35-field encoding done off-chain, passed as bytes calldata
function submitLongRunningJob(bytes calldata input) external {
    _executePrecompile(LONG_HTTP_PRECOMPILE, input);
}
import { encodeAbiParameters } from "viem";

const encoded = encodeAbiParameters(longHttpParams, [
  executorAddress, [], 30n, [], "0x",
  10n, 200n, "",            // polling config
  callbackAddr, selector, gasLimit, maxFee, maxPriority, value,
  url, 2, [], [], body,      // initial HTTP (POST)
  ".task_id",                // taskIdJsonPath
  pollUrl, 1, [], [], "0x", // poll HTTP (GET)
  ".status == \"complete\"", // statusJsonPath
  resultUrl, 1, [], [], "0x", // result HTTP (GET)
  ".result",                // resultJsonPath
  0n, 0, false,             // dkms, pii
]);
from ritual_common.long_running_http_call.request import LongRunningHTTPCallRequest

request = LongRunningHTTPCallRequest(
    executor=executor_address,
    poll_interval_blocks=10,
    max_poll_block=current_block + 200,
    url="https://api.example.com/submit",
    method=HTTPMethod.POST,
    task_id_json_path=".task_id",
    status_json_path=".status == \"complete\"",
    result_json_path=".result",
)
encoded = request.to_web3()

35-Field ABI Reference

| # | Field | Type | Description |
|---|---|---|---|
| 0–4 | — | — | Base executor fields |
| 5 | pollIntervalBlocks | uint64 | Blocks between polls |
| 6 | maxPollBlock | uint64 | Deadline for polling |
| 7 | taskIdMarker | string | Marker for task ID extraction |
| 8–13 | — | — | Delivery config (target, selector, gasLimit, maxFeePerGas, maxPriorityFeePerGas, value) |
| 14–18 | — | — | Initial HTTP: url, method, headersKeys, headersValues, body |
| 19 | taskIdJsonPath | string | JQ path to extract task ID from initial response |
| 20–24 | — | — | Poll HTTP: pollUrl, pollMethod, pollHeadersKeys, pollHeadersValues, pollBody |
| 25 | statusJsonPath | string | JQ path for completion check (truthy when done) |
| 26–30 | — | — | Result HTTP: resultUrl, resultMethod, resultHeadersKeys, resultHeadersValues, resultBody |
| 31 | resultJsonPath | string | JQ path to extract final result |
| 32 | dkmsKeyIndex | uint256 | DKMS key index |
| 33 | dkmsKeyFormat | uint8 | DKMS key format |
| 34 | piiEnabled | bool | PII redaction |
One call per transaction. Like all long-running precompiles, only one 0x0805 call per async transaction. Phase 1 returns a task ID; Phase 2 delivers via onLongRunningResult(bytes32, bytes) callback from AsyncDelivery.

Consumer Patterns

Three Solidity patterns, one per execution model. Pick the one that matches your precompile.

Synchronous Consumer

Simplest case. Call the precompile, read the return value. Done.

Solidity / Sync Consumer
contract SyncConsumer {
    function verify(bytes calldata pubkey, bytes calldata message, bytes calldata sig) external view {
        (bool ok, bytes memory result) = address(0x0100).staticcall(
            abi.encode(pubkey, message, sig)
        );
        require(ok, "precompile call failed"); // check success BEFORE decoding
        uint256 valid = abi.decode(result, (uint256));
        require(valid == 1, "invalid signature");
    }
}

Short-Running Async Consumer

The SPC result is available to your contract during execution. Use _executePrecompile() from PrecompileConsumer. It calls the precompile, unwraps the async envelope (simmedInput, actualOutput), and returns the decoded output bytes directly. Your contract can decode the response and write state in the same transaction.

Solidity / Short-Running Async Consumer
import {PrecompileConsumer} from "./utils/PrecompileConsumer.sol";

contract HTTPConsumer is PrecompileConsumer {
    uint256 public latestPrice;

    function fetchPrice(bytes calldata httpInput) external {
        bytes memory output = _executePrecompile(HTTP_CALL_PRECOMPILE, httpInput);
        // output is the decoded HTTP response — available right here
        (uint16 status, , , bytes memory body, ) =
            abi.decode(output, (uint16, string[], string[], bytes, string));
        require(status == 200);
        // parse body, write state — all on-chain, same transaction
    }
}

Two-Phase Consumer

The pattern is two transactions deep: the request goes out in one, the result arrives in a callback later. The critical part: verify msg.sender == ASYNC_DELIVERY in your callback. Skip this check and anyone can call your callback with garbage data.

Solidity / Two-Phase Consumer
import {PrecompileConsumer} from "./utils/PrecompileConsumer.sol";

contract AgentConsumer is PrecompileConsumer {
    function submitAgentTask(bytes calldata agentInput) external {
        _executePrecompile(PERSISTENT_AGENT_PRECOMPILE, agentInput);
    }

    function onPersistentAgentResult(bytes32 jobId, bytes calldata result) external {
        require(msg.sender == ASYNC_DELIVERY, "unauthorized");
        // Process the agent's response
    }
}

RitualWallet

Deposit RITUAL to pay for precompile calls. Balance is locked while async jobs are pending.

Precompile calls cost fees. Deposit RITUAL into RitualWallet at 0x532F…3948 and the chain deducts as you go. If you have a pending async job, your deposit is locked until it settles. Fund before you submit. The fee is locked at submission time.

In Practice

Solidity / Depositing and Checking Balance
interface IRitualWallet {
    function deposit(uint256 lockDuration) external payable;
    function depositFor(address user, uint256 lockDuration) external payable;
    function withdraw(uint256 amount) external;
    function balanceOf(address) external view returns (uint256);
    function lockUntil(address) external view returns (uint256);
}

IRitualWallet wallet = IRitualWallet(0x532F0dF0896F353d8C3DD8cc134e8129DA2a3948);

// Deposit 0.01 RITUAL with 100-block lock
wallet.deposit{value: 0.01 ether}(100);

// Fund another address (e.g. an agent)
wallet.depositFor{value: 0.05 ether}(agentAddress, 200);

// Withdraw after lock expires
wallet.withdraw(0.01 ether);

Reference

| Function | Description |
|---|---|
| deposit(uint256 lockDuration) | Deposit RITUAL with lock period (blocks) |
| depositFor(address user, uint256 lockDuration) | Deposit on behalf of another address |
| withdraw(uint256 amount) | Withdraw after lock expires |
| balanceOf(address) | Check available balance |
| lockUntil(address) | Check when the lock expires (0 = no lock) |

Scheduler

Your contract can schedule its own execution at future blocks: recurring, delayed, or conditional on a predicate. The block proposer invokes it directly, with no off-chain keeper or cron service required.

The Scheduler is a system contract at 0x56e776BAE2DD60664b69Bd5F865F1180ffB7D58B. Your contract calls schedule() and the chain fires the callback automatically at the blocks you specify. Combine with predicates for conditional execution: only fire when a condition is true. Fees are taken at execution time from RitualWallet.

Scheduler — Recurring Execution
[Diagram: a user transaction calls schedule() on ConsumerContract.sol with frequency 4 and numCalls 4. Scheduler.sol queues the job alongside others (job-1 priceCheck freq:50, job-2 agentWakeUp freq:10), emits TxScheduled (0x10) from the system sender 0xfa7e, and the chain executes the callback at each scheduled block, paying from RitualWallet, until 4/4 executions are done. No keepers. No cron jobs. The chain fires it.]

In Practice

Solidity / Scheduling a Recurring Price Check
import {IScheduler} from "./interfaces/IScheduler.sol";

IScheduler constant SCHEDULER =
  IScheduler(0x56e776BAE2DD60664b69Bd5F865F1180ffB7D58B);

function schedulePriceCheck() external {
    bytes memory callData = abi.encodeWithSelector(
        this.executePriceCheck.selector,
        uint256(0)  // placeholder: overwritten with executionIndex
    );
    // schedule(data, gas, startBlock, numCalls, frequency, ttl, maxFeePerGas, maxPriorityFeePerGas, value, payer)
    SCHEDULER.schedule(
        callData,
        500000,              // gas limit per execution
        uint32(block.number + 10), // startBlock
        24,                  // numCalls (24 executions)
        50,                  // frequency (every 50 blocks)
        30,                  // ttl (max blocks to wait for execution)
        block.basefee,       // maxFeePerGas
        0,                   // maxPriorityFeePerGas
        0,                   // value
        address(this)       // payer (RitualWallet balance)
    );
}

// Called by Scheduler — msg.sender is Scheduler, tx.origin is 0xfa7e
function executePriceCheck(uint256 executionIndex) external {
    // executionIndex: which execution this is (0, 1, 2, ...)
}

Schedule() API

| Parameter | Type | Description |
|---|---|---|
| data | bytes | Calldata for the callback (bytes 4–35 overwritten with executionIndex) |
| gas | uint32 | Gas limit per execution |
| startBlock | uint32 | First execution block |
| numCalls | uint32 | Total number of executions |
| frequency | uint32 | Blocks between executions |
| ttl | uint32 | Max blocks to wait (max 500) |
| maxFeePerGas | uint256 | EIP-1559 max fee |
| maxPriorityFeePerGas | uint256 | EIP-1559 priority fee |
| value | uint256 | RITUAL value to send |
| payer | address | Address paying from RitualWallet |
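The scheduling parameters imply a simple arithmetic pattern for when executions fire. The sketch below assumes executions are attempted exactly `frequency` blocks apart from `startBlock` (each attempt may land up to `ttl` blocks late before being skipped); it is a mental model, not SDK code.

```python
# When do a scheduled job's executions fire? startBlock + i * frequency,
# for i in [0, numCalls). Assumes exact spacing; ttl governs how late
# each attempt may run before being skipped.
def execution_blocks(start_block: int, num_calls: int, frequency: int):
    return [start_block + i * frequency for i in range(num_calls)]

# e.g. a job with startBlock=110, numCalls=4, frequency=50
print(execution_blocks(start_block=110, num_calls=4, frequency=50))
# [110, 160, 210, 260]
```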

Before scheduling, the contract must call approveScheduler(schedulerAddress) to authorize the Scheduler to call it back.

Predicates

A predicate is a contract the scheduler calls before each execution. Implement IScheduledPredicate. The scheduler calls shouldExecute via staticcall and skips the execution if it returns false. Set frequency=1 with a predicate to check every block.

Solidity / Scheduler Predicate
interface IScheduledPredicate {
    function shouldExecute(
        address caller,
        uint256 callId,
        uint256 executionIndex
    ) external view returns (bool);
}

100,000 gas limit per predicate call. staticcall only, no state writes. Reverts treated as false. executionIndex counts actual executions, not blocks evaluated.

Async Scheduling: TTL Rules

| | Short path (SPC) | Long path (two-phase) |
|---|---|---|
| Precompiles | HTTP, LLM, DKMS | Agent, Long HTTP, Image, ZK |
| TTL covers | Full async lifecycle | Phase 1 only |
| Phase 2 | N/A | max_poll_block, independent of TTL |
| On expiry | CallSkippedTTLExpired | Phase 1 skipped |

Short path rule: scheduler_ttl >= max_expected_drift + max_expected_settlement_blocks. If drift is ~3 blocks and settlement takes ~5, set TTL to at least 8.
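The short-path rule as arithmetic, with the helper name being purely illustrative:

```python
# scheduler_ttl >= max_expected_drift + max_expected_settlement_blocks
def min_short_path_ttl(max_expected_drift: int,
                       max_expected_settlement_blocks: int) -> int:
    return max_expected_drift + max_expected_settlement_blocks

print(min_short_path_ttl(3, 5))  # 8 — the example from the text
```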

Execution Index Encoding

The scheduler writes executionIndex into bytes 4-35 of your calldata before calling the target. Use 0 as a placeholder when encoding:

Solidity / Execution Index Placeholder
bytes memory callData = abi.encodeWithSelector(
    MyContract.myFunction.selector,
    uint256(0),  // placeholder: overwritten with executionIndex
    otherArg1,
    otherArg2
);
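What the scheduler does on its side can be sketched as byte surgery on the calldata: the 4-byte selector stays, and the first 32-byte argument slot (bytes 4–35) is overwritten with the big-endian executionIndex. The selector value below is hypothetical.

```python
# Sketch of the scheduler's executionIndex injection into calldata.
def inject_execution_index(calldata: bytes, execution_index: int) -> bytes:
    assert len(calldata) >= 36, "need selector + at least one arg slot"
    return calldata[:4] + execution_index.to_bytes(32, "big") + calldata[36:]

selector = bytes.fromhex("a1b2c3d4")       # hypothetical function selector
placeholder = (0).to_bytes(32, "big")      # the uint256(0) placeholder arg
other_arg = (7).to_bytes(32, "big")        # later args are untouched
calldata = selector + placeholder + other_arg

patched = inject_execution_index(calldata, 3)
print(int.from_bytes(patched[4:36], "big"))   # 3 — injected index
print(int.from_bytes(patched[36:68], "big"))  # 7 — other args preserved
```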
Contracts only. EOAs cannot call schedule(). Scheduled txs bypass the sender lock. Multiple scheduled async jobs run in parallel from the same contract.

Secrets & ECIES

How to pass API keys and credentials to precompiles without putting them on-chain.

Your HTTP calls need API keys. Your LLM calls need provider tokens. You can't put these on-chain. They'd be visible to everyone. The Secrets system encrypts them with the TEE executor's public key. Only the enclave can decrypt.

Secrets Encryption Flow
[Diagram: the dApp frontend ECIES-encrypts the plaintext API key with the executor's public key fetched from TEEServiceRegistry. The chain, validators, and YourContract.sol only ever see ciphertext; the TEE executor decrypts inside the enclave. Plaintext never touches the chain or the mempool.]

Template Substitution

Reference your encrypted secret in request fields as {{SECRET_NAME}}. The TEE executor decrypts and substitutes before making the request. The plaintext never hits the chain or the mempool.
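The substitution step itself is simple string templating. The sketch below is a stand-in for what the TEE executor does after decrypting; real decryption happens only inside the enclave, and the secret mapping here is illustrative.

```python
import re

# Stand-in for TEE-side template substitution: replace {{NAME}} with the
# decrypted secret value before the outbound request is made.
def substitute_secrets(template: str, secrets: dict) -> str:
    return re.sub(r"\{\{(\w+)\}\}", lambda m: secrets[m.group(1)], template)

header = "Bearer {{API_KEY}}"
print(substitute_secrets(header, {"API_KEY": "sk-a8f3b"}))  # Bearer sk-a8f3b
```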

ECIES Encryption: Full Example

TypeScript / Encrypting a Secret with eciesjs
import { encrypt } from "eciesjs";
import { readContract } from "viem/actions";

// 1. Get executor's public key from TEEServiceRegistry
const executorPubKey = await readContract(client, {
  address: "0x9644e8562cE0Fe12b4deeC4163c064A8862Bf47F",
  abi: teeRegistryAbi,
  functionName: "getExecutorPublicKey",
  args: [executorId],
});

// 2. Encrypt the secret (nonce MUST be 12 bytes)
const apiKey = "sk-proj-abc123...";
const encrypted = encrypt(
  executorPubKey,
  Buffer.from(apiKey, "utf-8")
);

// 3. Store encrypted secret and reference via {{API_KEY}} in request
const httpRequest = {
  url: "https://api.openai.com/v1/chat/completions",
  headerKeys: ["Authorization"],
  headerValues: ["Bearer {{API_KEY}}"],
};
Python / Encrypting with eciespy
from ecies import encrypt
import os

executor_pubkey = get_executor_pubkey(executor_id)
plaintext = b"sk-proj-abc123..."
ciphertext = encrypt(executor_pubkey, plaintext)

PII Mode

piiEnabled is a boolean field on all async precompile requests: HTTP, LLM, Long HTTP, Agent, Multimodal. One flag, two effects.

piiEnabled = true: {{SECRET_NAME}} templates are resolved from encryptedSecrets before the request is sent. PII is redacted from results before on-chain settlement.

piiEnabled = false: no substitution, no redaction. {{SECRET_NAME}} literals are sent as-is to external APIs. Raw results go on-chain.

Common bug: if your request contains {{SECRET_NAME}} templates but piiEnabled is false, the literal string {{SECRET_NAME}} is sent to the API. Templates are not resolved. Rule: any {{SECRET}} template means piiEnabled = true.

LLM PII Requirements

LLM PII mode requires all three: piiEnabled = true, non-empty encryptedSecrets, and a 65-byte userPublicKey with 0x04 uncompressed EC prefix. Missing any one silently disables PII. Also: PII mode and streaming are mutually exclusive on LLM. Pick one.
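Because a missing requirement silently disables PII, a client-side preflight check is worth having. The function below is a hypothetical validator encoding the three documented conditions; it is not part of any Ritual SDK.

```python
# Hypothetical preflight check for LLM PII mode: all three conditions
# must hold, or PII mode is silently disabled.
def llm_pii_active(pii_enabled: bool, encrypted_secrets: list,
                   user_public_key: bytes) -> bool:
    return (
        pii_enabled
        and len(encrypted_secrets) > 0
        and len(user_public_key) == 65
        and user_public_key[0] == 0x04  # uncompressed EC point prefix
    )

good_key = b"\x04" + b"\x00" * 64
print(llm_pii_active(True, [b"secret"], good_key))       # True
print(llm_pii_active(True, [], good_key))                # False: no secrets
print(llm_pii_active(True, [b"secret"], b"\x04" * 33))   # False: wrong key length
```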

Deployment

Copy-paste configs to get connected. Viem, wagmi, Foundry, Hardhat, and the testnet faucet.

Viem Chain Definition

TypeScript / Chain Config
import { defineChain } from "viem";

export const ritualChain = defineChain({
  id: 1979,
  name: "Ritual Chain",
  nativeCurrency: { name: "RITUAL", symbol: "RITUAL", decimals: 18 },
  rpcUrls: {
    default: { http: ["https://rpc.ritualfoundation.org"] },
  },
  blockExplorers: {
    default: { name: "Explorer", url: "https://explorer.ritualfoundation.org" },
  },
});

Wagmi Config

TypeScript / Wagmi Setup
import { createConfig, http } from "wagmi";

export const config = createConfig({
  chains: [ritualChain],
  transports: {
    [ritualChain.id]: http(),
  },
});

Foundry

# foundry.toml
[profile.default]
src = "src"
out = "out"
evm_version = "shanghai"

[rpc_endpoints]
ritual = "https://rpc.ritualfoundation.org"

Hardhat

TypeScript / hardhat.config.ts
import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-toolbox";

const config: HardhatUserConfig = {
  solidity: "0.8.24",
  networks: {
    ritual: {
      url: "https://rpc.ritualfoundation.org",
      chainId: 1979,
      accounts: [process.env.PRIVATE_KEY!],
    },
  },
};
export default config;

Testnet Faucet

You need testnet RITUAL to call precompiles and deploy contracts.

Faucet URL: https://faucet.ritualfoundation.org. Connect your wallet and request testnet RITUAL.

Testing

How to test against precompiles that don't exist locally. Mock strategies for each layer.

Foundry Unit Tests

Sync precompiles work with normal Foundry tests. Call and assert. Async is trickier. Use vm.mockCall to fake precompile responses and vm.prank(ASYNC_DELIVERY) to simulate the executor calling your callback.

Solidity / Mocking Async Delivery
function testCallback() public {
    bytes memory mockResult = abi.encode("agent response");

    // Simulate AsyncDelivery calling our contract
    vm.prank(0x5A16214fF555848411544b005f7Ac063742f39F6);
    consumer.onResult(mockResult);

    assertEq(consumer.lastResult(), "agent response");
}

Frontend Testing

For the frontend: Vitest for hook unit tests with mocked chain clients, Playwright for E2E against a testnet fork. The async flow is hard to test locally because you need a real executor to exercise the full path.
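One way to mock the chain-client layer without a network, sketched below. The client shape is a simplification for hook tests, not the real viem `PublicClient` interface:

```typescript
// Minimal fake chain client: records calls and returns canned results.
type ReadArgs = { address: string; functionName: string };

function makeMockClient(canned: Record<string, unknown>) {
  const calls: ReadArgs[] = [];
  return {
    calls,
    async readContract(args: ReadArgs): Promise<unknown> {
      calls.push(args);
      if (!(args.functionName in canned)) {
        throw new Error(`no canned result for ${args.functionName}`);
      }
      return canned[args.functionName];
    },
  };
}

// Usage: inject into the hook under test instead of a live client.
async function demo(): Promise<unknown> {
  const client = makeMockClient({ lastResult: "agent response" });
  return client.readContract({
    address: "0x0000000000000000000000000000000000000000",
    functionName: "lastResult",
  });
}
```

Because the fake records its calls, assertions can cover both the returned data and how the hook queried the chain.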

Glossary

Every term, acronym, and key concept used across these docs. Alphabetical.

| Term | Definition |
|---|---|
| ACE (Application-Controlled Execution) | Symphony's general framework for application-defined transaction ordering. Contracts specify ordering policies over call sequences with tiebreakers. Sequencing Rights is a restricted single-contract subset. See: Sequencing Rights. |
| Agent Call | Stateless, one-shot agent precompile (two-phase async). Submit a task, receive a result via callback. See: Autonomous Agents. |
| AsyncDelivery | System contract that delivers two-phase async results by calling back into consumer contracts. Callbacks must verify msg.sender == ASYNC_DELIVERY. See: System Contracts. |
| AsyncJobTracker | System contract that tracks the 9-state lifecycle of every async job and enforces the sender lock. Emits JobAdded, Phase1Settled, ResultDelivered, JobRemoved on state transitions. See: Async Lifecycle. |
| CKKS | Homomorphic encryption scheme for approximate arithmetic on encrypted floating-point tensors. Used by the FHE precompile to run inference on ciphertext. See: FHE Inference. |
| Delegated execution | Execution path for non-deterministic or resource-intensive workloads (LLM, HTTP, agents). Runs once inside a TEE, result verified rather than replicated. Contrasted with replicated execution (standard EVM path). See: Superposition. |
| DKMS | Decentralized Key Management System. Async SPC that derives deterministic secp256k1 keypairs inside TEE. Same owner + same keyIndex = same keypair every time. See: DKMS Keys. |
| ECIES | Elliptic Curve Integrated Encryption Scheme. Asymmetric encryption used to encrypt secrets, agent inputs, and credentials to an executor's or DKMS-derived public key. Libraries: eciesjs (JS), eciespy (Python). See: Secrets & ECIES. |
| Enshrined | Implemented at the protocol layer of the chain, not via external smart contracts or oracles. Applies to precompiles, TxPasskey, Sequencing Rights, and the Scheduler. |
| Execution-cost inversion | When execution latency exceeds consensus latency by 3-6 orders of magnitude (ML inference, Monte Carlo). The standard blockchain assumption that execution is cheap relative to agreement no longer holds. See: Non-Deterministic Execution. |
| Executor | TEE-attested node that processes off-chain precompile requests. Registered in TEEServiceRegistry with capabilities, attestation proof, and a public key for ECIES encryption. |
| JQ | Synchronous precompile that evaluates jq expressions against JSON strings. String output requires _decodeJQString() for double-indirection decoding. See: HTTP Precompile. |
| Persistent Agent | Stateful agent precompile with identity, memory, and data availability references. Persists across sessions via StorageRef. Revival from CID restores full state. See: Autonomous Agents. |
| PII Mode | Boolean flag (piiEnabled) on all async precompile requests controlling secret template substitution and PII redaction from on-chain results. Any {{SECRET_NAME}} template requires piiEnabled = true. See: Secrets & ECIES. |
| Predicate | Contract implementing IScheduledPredicate. The Scheduler calls shouldExecute via staticcall (100k gas limit) before each scheduled execution; returns false to skip. See: Scheduler. |
| receipt.spcCalls | Extension field on Ritual Chain transaction receipts. receipt.spcCalls[0].output contains the ABI-encoded result from short-running async precompile calls. For long-running precompiles, the final result is delivered via AsyncDelivery callback, not spcCalls. See: Execution Models. |
| RitualTensor | ABI-encoded tensor format for the ONNX precompile: uint256 shape array, dtype enum, flattened values. See: Classical Models. |
| RitualWallet | System contract for prepaid fee escrow. Deposit RITUAL; the chain deducts per precompile call. Two-phase fees use EOA balance, not contract balance. See: RitualWallet. |
| Sender lock | AsyncJobTracker constraint: one pending async job per EOA at a time. A second submission before the first settles reverts. Scheduled txs bypass this. See: Async Lifecycle. |
| Sequencing Rights | Protocol-level rule where contracts declare function priority via sequencingRights() and the block builder orders transactions accordingly. Invalid ordering = invalid block. See: Sequencing Rights. |
| Seven Properties | Seven requirements for a fully autonomous agent: Immortal, Emancipated, Teleportable, Financially sovereign, Web2-interoperable, Private, Computationally sovereign. See: Autonomous Agents. |
| Sovereign Agent | CLI-style coding agent precompile running inside a TEE. Supports Claude Code, OpenClaw, ZeroClaw, Hermes, Codex, Aider. Inputs encrypted with ECIES. See: Autonomous Agents. |
| SPC | Stateful PreCompile. Short-running async execution model where the result is returned to your contract via _executePrecompile() in the same transaction. One SPC call per transaction. See: Execution Models. |
| StorageRef | Opaque identifier returned by the Persistent Agent after each invocation. Pass it back on the next call to resume context (HuggingFace, GCS, Pinata, or inline). See: Autonomous Agents. |
| Superposition | Ritual Chain running replicated (deterministic EVM) and delegated (TEE) execution over the same shared state, chosen per-transaction by workload type. Symphony paper terminology for the dual-path architecture. See: Superposition. |
| TEE | Trusted Execution Environment. Hardware-isolated enclave where executors run off-chain computation. Attestation proves honest execution and binds results to the originating request. |
| TEE-EOVMT | Trusted Execution Environment, EVM with Off-chain Verifiable Machine Tasks. Ritual Chain's architecture: the EVM delegates non-EVM-native computation to TEE executors, with results cryptographically bound to requests. See: Superposition. |
| TOCTOU (Time-of-check to time-of-use) | State drift risk between async commit and settle. Other transactions can change the state your callback depends on during the gap. No cross-async locks exist. Application responsibility. See: Execution Models. |
| Two-phase async | Execution model for long-running operations. Phase 1 mines immediately (returns task ID). Phase 2: AsyncDelivery calls back into the consumer contract with the result, in a separate transaction. See: Execution Models. |
| TxPasskey | Native transaction type 0x77. Users sign with biometrics (Face ID, fingerprint) or a security key via WebAuthn instead of a secp256k1 private key. See: Passkeys & Auth. |
| X402 | Encrypted credential injection protocol for pay-per-call API access. Runs on the HTTP precompile, not a separate address. Credentials encrypted with ECIES, substituted via {{SECRET_NAME}} inside TEE. See: X402 Payments. |

FAQ

Why autonomous agents are coming, and why they need a chain built for them rather than retrofitted around them.

1. Why can't I just run an AI agent on any EVM chain with an off-chain bot?

Nothing stops you, but nobody can verify it. An off-chain bot that claims it ran a model and got a particular result is indistinguishable from a bot that fabricated the result. On Ritual, agent execution runs inside TEE enclaves. The executor's attestation is registered on-chain via TEEServiceRegistry and validated before the builder accepts results. Inputs are ECIES-encrypted to the executor's public key, so only the enclave can decrypt and execute. The result is bound to the request at the hardware level, not by social trust in a bot operator.

2. What does "enshrined" mean and why does it matter for agents?

Smart contracts cannot enforce transaction-level constraints on themselves. Enshrined means the agent infrastructure is part of the chain's execution layer: the block builder enforces the sender lock, the async lifecycle is tracked via transaction types (TxAsyncCommitment, TxAsyncSettlement), and the Scheduler is a system contract invoked by the block proposer. You cannot replicate these behaviors on a vanilla EVM chain because they require modifications to the block builder, the transaction pool, and the consensus rules. A Solidity library can approximate the API. It cannot approximate the enforcement.

3. How does Ritual prevent an executor from lying about what the LLM actually said?

Trusted Execution Environments (TEEs). The executor runs the model inside a hardware enclave that produces attestation evidence of what workload ran. This attestation is registered on-chain: TEEServiceRegistry stores the executor's public key, attestation hash, and capability set. The builder only accepts results from registered executors with valid, unexpired attestations. The executor cannot modify the output without invalidating the attestation, because the attestation covers the enclave's code and data.

4. Why do agents need their own keys? Can't they just use the deployer's wallet?

If the agent uses the deployer's key, the deployer can impersonate the agent, decrypt its stored state, or revoke its ability to transact. The agent is a puppet. DKMS derives a secp256k1 keypair inside the TEE, bound to the sender's Ethereum address. The private key never leaves the enclave. The agent's persistent state (DA content) is encrypted with this key, so no one outside the TEE can read it. The deployer funds the agent, the agent controls its own identity.

5. What happens to an agent's memory when the executor goes down?

The agent's state lives in external storage, not in the executor. StorageRef tuples point to HuggingFace, GCS, Pinata, or inline data. All DA content is encrypted with a DKMS-derived key that is bound to the sender's address, not the executor. If executor A goes down, executor B derives the same key via DKMS, downloads the encrypted state, and resumes. No migration step. The agent is portable across executors by default.

6. Why is one pending async job per wallet a feature and not a bug?

If you need N concurrent agents, use N wallets. The sender lock (enforced by AsyncJobTracker at the block builder level) prevents a single EOA from flooding the executor fleet with simultaneous commitments. The constraint is per-sender, not system-wide. For automation, the Scheduler bypasses the sender lock entirely because scheduled transactions are system transactions from a different sender (the block proposer). One wallet, one in-flight job, no exceptions except for scheduled calls.
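The lock's behavior can be modeled in a few lines. A toy sketch, not chain code, with the scheduled-transaction bypass as a flag for illustration:

```typescript
// Toy model of the AsyncJobTracker sender lock: one pending job per EOA,
// scheduled (system-sender) transactions bypass the check.
class SenderLock {
  private pending = new Set<string>();

  submit(sender: string, isScheduled = false): void {
    if (isScheduled) return; // system transactions skip the lock
    if (this.pending.has(sender)) {
      throw new Error("sender lock: one pending async job per EOA");
    }
    this.pending.add(sender);
  }

  settle(sender: string): void {
    this.pending.delete(sender);
  }
}
```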

7. How do agents pay for things without exposing credentials on-chain?

Secrets are ECIES-encrypted to the executor's public key before submission. The ciphertext is on-chain but unreadable outside the TEE. Inside the enclave, the executor decrypts the secrets JSON and replaces template placeholders ({{API_KEY}} in URLs, headers, or body fields) with the real values before execution. At no point does a plaintext credential appear on-chain, in logs, or in the transaction receipt. The encryption uses AES-256-GCM with 12-byte nonces. Getting the nonce length wrong is the single most common integration failure.
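The nonce rule is easy to check in isolation with Node's built-in crypto. This is a standalone AES-256-GCM sketch showing the 12-byte nonce, not the eciesjs payload format:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// GCM with the standard 96-bit (12-byte) nonce. Passing a 16-byte nonce
// here is the classic integration mistake the text warns about.
function sealAes256Gcm(key: Buffer, plaintext: Buffer) {
  const nonce = randomBytes(12); // 12 bytes, not 16
  const cipher = createCipheriv("aes-256-gcm", key, nonce);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { nonce, ciphertext, tag: cipher.getAuthTag() };
}

function openAes256Gcm(
  key: Buffer,
  sealed: { nonce: Buffer; ciphertext: Buffer; tag: Buffer }
): Buffer {
  const decipher = createDecipheriv("aes-256-gcm", key, sealed.nonce);
  decipher.setAuthTag(sealed.tag);
  return Buffer.concat([decipher.update(sealed.ciphertext), decipher.final()]);
}
```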

8. What stops someone from front-running an agent's transactions?

Within the async lifecycle, there is no mempool window to exploit. The builder creates TxAsyncCommitment as a system transaction and replays the original transaction with the result injected at settlement. The sender lock means the agent's EOA has exactly one pending commitment, so there is no second transaction to sandwich. For contract-level ordering beyond the async lifecycle, Sequencing Rights is a separate mechanism: a contract declares function priority, the builder orders transactions to match, and a block violating the declared ordering is invalid.

9. Can an agent call another agent?

Not in the same transaction. The one-async-precompile-per-transaction constraint is absolute. But Agent A's Phase 2 callback is a fresh transaction from AsyncDelivery, not a continuation of the original. Your callback contract can invoke Agent B's precompile in that callback. For concurrent execution, use separate wallet addresses. For sequential chaining, the Scheduler can orchestrate multi-agent workflows because it bypasses the sender lock.

10. Why build agents as precompiles instead of smart contracts?

Precompiles have access to primitives that smart contracts cannot touch. The sender lock is enforced by the block builder during transaction inclusion. A contract's require runs after inclusion, which is too late. ECIES decryption happens in the TEE, not in the EVM where all state is public. The async lifecycle is a transaction type, not an event pattern dependent on off-chain indexers. DKMS key derivation runs inside the enclave with no EVM-visible state. A smart contract can call a precompile, but it cannot replicate what the precompile does at the protocol level.

Ritual for Users

What you need to know if you are using dApps on Ritual, not building them.

Wallet Setup

Add Ritual Chain to MetaMask or any EVM wallet:

| Field | Value |
|---|---|
| Network Name | Ritual |
| Chain ID | 1979 |
| RPC URL | https://rpc.ritualfoundation.org |
| Currency Symbol | RITUAL |
| Explorer URL | https://explorer.ritualfoundation.org |

MetaMask: Settings → Networks → Add network → Add a network manually. Paste the values above. Or use a chain-list integration if your wallet supports adding by Chain ID.
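Programmatically, the same values map onto a standard EIP-3085 `wallet_addEthereumChain` request (chain parameters taken from the table above; 1979 is 0x7bb in hex):

```typescript
// EIP-3085 payload to add Ritual Chain from a dApp.
const addRitualChain = {
  method: "wallet_addEthereumChain",
  params: [
    {
      chainId: "0x7bb", // 1979
      chainName: "Ritual",
      nativeCurrency: { name: "RITUAL", symbol: "RITUAL", decimals: 18 },
      rpcUrls: ["https://rpc.ritualfoundation.org"],
      blockExplorerUrls: ["https://explorer.ritualfoundation.org"],
    },
  ],
};

// In the browser: await window.ethereum.request(addRitualChain);
```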

Get Testnet Tokens

Visit faucet.ritualfoundation.org, connect your wallet or paste your address, and claim testnet RITUAL.

Ritual for Agents

AI coding agents that build dApps on Ritual without human code authorship.

What This Is

ritual-dapp-skills is a set of markdown instruction files that teach AI coding agents how to build applications on Ritual Chain. Every precompile, every contract pattern, every frontend hook, the full deployment pipeline. An agent reads the relevant skill files, asks 0-5 clarifying questions, and builds in phases: architecture, contracts, frontend, backend, testing, deployment.

Works with Claude Code (native plugin), Cursor (agent skills), Codex, OpenClaw, Hermes, and any LLM agent that reads markdown.

Agents Building Agents

An autonomous agent on Ritual Chain invokes a coding assistant (Claude Code, OpenClaw, Codex) inside a TEE enclave. That coding assistant reads the ritual-dapp-skills, generates contracts, deploys them, funds the RitualWallet, and hands back the deployment address. The original agent now has a child application running on-chain that it built, deployed, and funded. No human wrote code. No human approved a PR.

This works because every step in the pipeline is an enshrined precompile or system contract call. Compilation runs inside the TEE. Deployment targets the RPC directly. Fee deposits go through RitualWallet. If the child app fails post-deployment verification, the debugger agent activates automatically: it triages the failure, pattern-matches against known root causes, applies a fix, and re-verifies. The chain itself is the CI/CD.

The Skill System

The builder agent orchestrates the full lifecycle. It loads only the skills relevant to the project (3-6 per build), generates architecture, writes Solidity contracts, wires up React frontends with the right hooks, deploys via Foundry or Hardhat, and runs the 12-step verification journey. The debugger agent runs a 5-stage reactive pipeline: classify, smoke test, match known root causes, diagnose, fix and regression-check.

You give the agent an idea and a funded wallet address. Everything else is autonomous.

Open ritual-dapp-skills →

Periphery

Chain explorer, RPC endpoints, and faucet.

| Service | URL |
|---|---|
| Block Explorer | explorer.ritualfoundation.org |
| RPC (HTTP) | rpc.ritualfoundation.org |
| RPC (WebSocket) | rpc.ritualfoundation.org |
| Testnet Faucet | faucet.ritualfoundation.org |

Ed25519 Signatures

Your contract can verify Ed25519 signatures natively at ~2000 gas per call. Useful for Solana transactions, SSH keys, DKIM headers, and Tor identity proofs.

Argument order: (publicKey, message, signature). Most Ed25519 libraries use (message, signature, publicKey). Wrong order silently returns 0 (invalid); it does not revert.

The precompile at 0x0009 verifies Ed25519 signatures natively. Solana transaction signatures, SSH public key auth, DKIM email headers, Tor relay identity keys: all Ed25519. A single EVM call validates any of them.

Synchronous execution. Result comes back in the same call, no SPC callback. No RitualWallet deposit needed. No sender lock. You can chain this with other precompiles in the same transaction.

| Field | Type | Description |
|---|---|---|
| publicKey | bytes | Ed25519 public key, 32 bytes |
| message | bytes | Signed message, variable length |
| signature | bytes | R \|\| S concatenated, 64 bytes |
Solidity / Ed25519 Verify
(bool success, bytes memory result) = address(0x0009).staticcall(
    abi.encode(
        pubKey,  // bytes: 32-byte Ed25519 public key
        message, // bytes: the signed message
        sig      // bytes: 64-byte R || S
    )
);
// Returns uint256: 1 = valid, 0 = invalid (NOT bool)
uint256 valid = abi.decode(result, (uint256));
require(success && valid == 1, "invalid ed25519 signature");

Encode The Request

bytes memory input = abi.encode(
    pubKey,   // bytes: 32-byte Ed25519 public key
    message,  // bytes: signed message
    sig       // bytes: 64-byte R || S
);
const encoded = encodeAbiParameters(
  parseAbiParameters("bytes, bytes, bytes"),
  [pubKeyHex, messageHex, signatureHex]
);
from ritual_common.sigver.request import SignatureVerificationRequest

request = SignatureVerificationRequest(
    public_key=pub_key_bytes,  # 32 bytes
    message=message_bytes,
    signature=sig_bytes,       # 64 bytes (R || S)
)
encoded = request.to_web3()

ZK Proofs

Your contract can request zero-knowledge proofs from the ZK precompile; the proof bytes arrive in a two-phase async callback.

Call 0x0806 to submit a proof generation job. An off-chain prover inside a TEE generates the proof, and the result is delivered to your contract through a Phase 2 callback. Your contract can then verify the proof and act on it. Prove creditworthiness without revealing financials, verify identity without exposing documents.

Layout note: This precompile takes ExecutorRequest directly, not LongRunningRequest. The field offsets are different from FHE or agent precompiles. Don't swap addresses on a copied struct. It will revert.
| Index | Field | Type | Description |
|---|---|---|---|
| 0-4 | Base executor | various | Executor identity, payment, callback gas |
| 5 | inputEncrypted | bool | True if input is encrypted before submission |
| 6 | maxProofBlock | uint64 | Block deadline. Proof must land before this. |
| 7-12 | Delivery fields | various | Callback target, gas limit, payment |
| 13 | operationInput | bytes | Raw input to the proof circuit |

Callback

Solidity / ZK Result Callback
function onZKResultDelivered(
    bytes32 jobId,
    bytes calldata result
) external {
    require(msg.sender == ASYNC_DELIVERY); // 0x5A16...F6, NOT the ZK precompile
    // decode result, store or act on proof
}

Encode The Request

// 14-field ExecutorRequest encoding, passed as bytes calldata
function submitProof(bytes calldata zkInput) external {
    _executePrecompile(ZK_TWO_PHASE_PRECOMPILE, zkInput);
}
const encoded = encodeAbiParameters(
  parseAbiParameters("address, bytes[], uint256, bytes[], bytes, bool, uint64, address, bytes4, uint256, uint256, uint256, uint256, bytes"),
  [executorAddress, [], 30n, [], "0x",
   inputEncrypted, maxProofBlock,
   callbackAddr, selector, gasLimit, maxFee, maxPriority, value,
   operationInput]
);
from ritual_common.zk import ZKTwoPhaseRequest

request = ZKTwoPhaseRequest(
    executor=executor_address,
    input_encrypted=True,
    max_proof_block=current_block + 100,
    operation_input=encrypted_data,
)
encoded = request.to_web3()
ExecutorRequest, not LongRunningRequest. This is the most common mistake when porting code from other precompiles. The call reverts with no useful error if the ABI layout is wrong. Verify your struct matches the 14-field layout above before debugging anything else.

FHE Inference

Your contract can run inference on encrypted data. Neither inputs nor outputs are ever visible to anyone except the key holder.

The FHE precompile at 0x0807 processes CKKS-encrypted tensors inside a TEE. Inputs and outputs both stay ciphertext throughout, and only the holder of the CKKS secret key can decrypt the result the callback returns. Use this when the data itself is sensitive (medical records, financial portfolios, private communications) but the computation still needs to happen on-chain.

The executor must have capability 10 (FHE). You pass an evaluation key reference so the executor can perform homomorphic operations on your ciphertext without seeing plaintext. CKKS does approximate arithmetic on encrypted floating-point tensors.

In Practice

Solidity / FHE Consumer
contract PrivateInference is PrecompileConsumer {
    function submitEncrypted(bytes calldata fheInput) external {
        _executePrecompile(FHE_PRECOMPILE, fheInput);
    }

    function onFHEResult(
        bytes32 jobId, bytes calldata result
    ) external {
        require(msg.sender == ASYNC_DELIVERY);
        // result is CKKS-encrypted output, only key holder can decrypt
    }
}

Encode The Request

// 19-field encoding done off-chain, passed as bytes calldata
function submitEncrypted(bytes calldata fheInput) external {
    _executePrecompile(FHE_PRECOMPILE, fheInput);
}
const encoded = encodeAbiParameters(
  parseAbiParameters("address, bytes[], uint256, bytes[], bytes, string, bytes, bytes, bytes, uint8, uint64, address, bytes4, uint256, uint256, uint256, uint256, bytes, bytes"),
  [executorAddress, [], 30n, [], "0x",
   model, encryptedInput, encryptedInputRef, evkRef,
   numLayers, maxInferenceBlock,
   callbackAddr, selector, gasLimit, maxFee, maxPriority, value,
   encryptedInputStorage, encryptedOutputStorage]
);
from ritual_common.fhe import FHERequest

request = FHERequest(
    executor=executor_address,
    model="model-name",
    encrypted_input=ciphertext,
    evk_reference=evk_bytes,
    num_layers=4,
    max_inference_block=current_block + 500,
)
encoded = request.to_web3()

Reference

| Index | Field | Type | Description |
|---|---|---|---|
| 0-4 | Base executor | various | Executor identity, payment, callback gas |
| 5 | model | string | Which model to run |
| 6 | encryptedInput | bytes | CKKS-encrypted input tensor |
| 7 | encryptedInputRef | bytes | Off-chain reference to input (or 0x) |
| 8 | evkReference | bytes | Evaluation key for homomorphic ops |
| 9 | numLayers | uint8 | Layer count in the model |
| 10 | maxInferenceBlock | uint64 | Block deadline. Job dies after this. |
| 11-16 | Delivery fields | various | Callback target, gas limit, payment |
| 17 | encryptedInputStorage | bytes | Encrypted storage credentials for input |
| 18 | encryptedOutputStorage | bytes | Encrypted storage credentials for output |
maxInferenceBlock is a hard cutoff. If the executor misses it, no callback fires. The job is dead. There is no retry. Set it with enough headroom for your model's layer count, and check current block times before picking a number.
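A sketch of the headroom math. The latency estimate, block time, and safety factor below are assumptions you must measure for your deployment, not chain constants:

```typescript
// Pick a maxInferenceBlock with headroom. A missed deadline kills the job
// with no callback and no retry, so err generously.
function pickMaxInferenceBlock(
  currentBlock: bigint,
  estimatedInferenceSec: number,
  blockTimeSec: number,
  safetyFactor = 3 // 3x headroom is an arbitrary but conservative default
): bigint {
  const blocks = Math.ceil(
    (estimatedInferenceSec * safetyFactor) / blockTimeSec
  );
  return currentBlock + BigInt(blocks);
}
```

For example, a model estimated at 60 s on a chain with 2 s blocks gets a deadline 90 blocks out rather than the bare-minimum 30.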

DKMS Keys

Your contract or agent can derive and hold its own secp256k1 keys directly from the chain, without a human custodian or off-chain key vault in the loop.

The DKMS precompile at 0x081B derives deterministic secp256k1 keypairs inside the executor's TEE. Same owner + same keyIndex = same keypair every time. The keys never leave the enclave. Even the contract's own code can't extract the raw key material.

Two encryption targets exist on Ritual Chain. The executor key (from TEEServiceRegistry) encrypts data to a specific node. The DKMS key encrypts data to an on-chain identity regardless of which node executes. Agent DA encryption, encrypted delivery, X402 shared credentials: these use DKMS keys because the identity matters, not the infrastructure.

| Field | Type | Description |
|---|---|---|
| baseExecutor[0-4] | various | executor, encryptedSecrets, ttl, secretSignatures, userPublicKey |
| owner | address | Address that owns this keypair |
| keyIndex | uint256 | Derive multiple keys per owner by incrementing |
| keyFormat | uint8 | 1 = secp256k1 |
Solidity / DKMS Key Derivation
bytes memory input = abi.encode(
    baseExecutor,  // fields 0-4
    msg.sender,    // owner
    0,             // keyIndex: first key for this address
    1              // keyFormat: secp256k1
);
(bool success,) = address(0x081B).call(input);
// Use _executePrecompile() to get the result in the same tx:
// bytes memory output = _executePrecompile(address(0x081B), input);
// (address derivedAddr, bytes memory pubKey) = abi.decode(output, (address, bytes));

Encode The Request

bytes memory input = abi.encode(
    executor,             // address
    new bytes[](0),      // encryptedSecrets
    uint256(30),          // ttl
    new bytes[](0),      // secretSignatures
    bytes(""),            // userPublicKey
    msg.sender,           // owner
    uint256(0),           // keyIndex
    uint8(1)              // keyFormat: secp256k1
);
const encoded = encodeAbiParameters(
  parseAbiParameters("address, bytes[], uint256, bytes[], bytes, address, uint256, uint8"),
  [executorAddress, [], 30n, [], "0x", ownerAddress, 0n, 1]
);
from ritual_common.dkms_key import DkmsKeyRequest

request = DkmsKeyRequest(
    executor=executor_address,
    owner=owner_address,
    key_index=0,
    key_format=1,  # secp256k1
)
encoded = request.to_web3()
Requires DKMS_ENABLED=true in executor config. The executor must advertise Capability DKMS=6. Without this, the precompile call reverts.

X402 Payments

Your contract can call paid APIs without surfacing keys on-chain. Credentials are ECIES-encrypted to the executor and billed per request through the X402 protocol.

X402 works through the HTTP precompiles (0x0801 and 0x0805) with encrypted payment credentials injected by the TEE. There is no separate X402 precompile address. You encrypt API credentials with ECIES to the executor's public key, sign each encrypted blob with EIP-191, and pass them alongside your HTTP request. The executor decrypts inside TEE, substitutes credentials into {{SECRET_NAME}} placeholders, then makes the external call. Your secrets never touch the chain.

Budget tracking lives in your consumer contract. Each X402 call deducts from your allocated budget. To share credentials with other addresses without exposing them, use SecretsAccessControl and call grantAccess(address, secretName).
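A toy model of the consumer-side budget described above. This is illustrative only; the real accounting lives in your Solidity consumer contract:

```typescript
// Minimal budget ledger: each X402 call deducts, overdrafts revert.
class X402Budget {
  constructor(private remaining: bigint) {}

  charge(costWei: bigint): void {
    if (costWei > this.remaining) {
      throw new Error("X402 budget exhausted");
    }
    this.remaining -= costWei;
  }

  get balance(): bigint {
    return this.remaining;
  }
}
```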

In Practice

Solidity / X402 Paid API Call
contract PaidAPIConsumer is PrecompileConsumer {
    function callPaidAPI(bytes calldata httpInput) external {
        // httpInput includes encryptedSecrets with API key
        // and piiEnabled=true for {{SECRET_NAME}} substitution
        bytes memory output = _executePrecompile(HTTP_CALL_PRECOMPILE, httpInput);
        (uint16 status, , , bytes memory body, ) =
            abi.decode(output, (uint16, string[], string[], bytes, string));
        require(status == 200);
    }
}

Encode The Request

X402 uses the same 13-field HTTP ABI. The difference: encryptedSecrets contains your API credentials, piiEnabled is true, and the URL/headers use {{SECRET_NAME}} placeholders.

// Same as HTTP encoding, but with encrypted credentials
// encryptedSecrets = [ecies.encrypt(executorPubKey, apiKeyBlob)]
// piiEnabled = true
// URL uses {{API_KEY}} placeholder
import { encrypt } from "eciesjs";

const apiSecret = JSON.stringify({ API_KEY: "sk-..." });
const encrypted = encrypt(executorPubKey, Buffer.from(apiSecret));

// Encode as standard HTTP request with piiEnabled=true
const encoded = encodeAbiParameters(httpParams, [
  executorAddress,
  [encrypted],          // encryptedSecrets
  30n, [signature], "0x",
  "https://api.openai.com/v1/chat/completions",
  2,                    // POST
  ["Authorization"], ["Bearer {{API_KEY}}"],
  body, 0n, 0,
  true,                 // piiEnabled: activate substitution
]);
from ritual_common.http_call.request import HTTPCallRequest, HTTPMethod
from ritual_common.executor.base import ExecutorRequest

secrets = {"API_KEY": "sk-..."}
encrypted = ExecutorRequest.encrypt_secrets(secrets, executor_pub_key)

request = HTTPCallRequest(
    executor=executor_address,
    encrypted_secrets=[encrypted],
    url="https://api.openai.com/v1/chat/completions",
    method=HTTPMethod.POST,
    headers={"Authorization": "Bearer {{API_KEY}}"},
    pii_enabled=True,
)
encoded = request.to_web3()

Reference

| Field | Type | Description |
|---|---|---|
| encryptedSecrets | bytes[] | ECIES-encrypted credential blobs |
| secretSignatures | bytes[] | EIP-191 signature over each encrypted blob |
| piiEnabled | bool | Set true to activate credential substitution |
X402 depends on the secrets, HTTP, and wallet layers. The consumer contract must hold sufficient budget before calling. If piiEnabled is false, the executor makes the HTTP call without substitution and ignores encrypted secrets entirely.

Sequencing Rights

Your contract can enforce transaction ordering at the consensus layer. Block validity itself rejects orderings that violate your contract's sequencing policy, so MEV-extractive reorderings can't be included in a block.

Declare which functions must execute in which order, and the chain rejects any block that violates it. This is a protocol-level rule, not a precompile. Your contract implements sequencingRights() and the block builder is bound by it. A block that violates the declared ordering is invalid.

In the Symphony paper, this is a restricted form of Application-Controlled Execution (ACE): a general framework where contracts define ordering policies over call sequences with tiebreakers and multi-contract coordination. The current ISequencingRights interface implements the single-contract, priority-list subset of ACE. The broader mechanism (cross-contract ordering, lazy evaluation batches, wrapping bypass rules) is described in the paper and has not yet shipped.

Interface

Solidity / Sequencing Rights
interface ISequencingRights {
    function sequencingRights() external view returns (bytes4[][] memory);
}

function sequencingRights() external pure returns (bytes4[][] memory) {
    bytes4[][] memory levels = new bytes4[][](2);
    levels[0] = new bytes4[](1);
    levels[0][0] = this.deposit.selector;
    levels[1] = new bytes4[](1);
    levels[1][0] = this.withdraw.selector;
    return levels;
}

Proposer Disaggregation

Breaking the block proposer's atomic bundle of powers into separable, protocol-enforced assignments.

In every existing blockchain, the block proposer holds an atomic bundle of four powers: inclusion (which transactions enter the block), exclusion (which are kept out), sequencing (what order they execute in), and timing (when state-dependent actions trigger). No existing protocol disaggregates all four. Proposer-builder separation (PBS) splits building from proposing but leaves the bundle intact within the builder role. MEV-aware protocols constrain ordering but do not transfer it to applications.

Symphony breaks this bundle apart and reassigns each power to a different enforcement layer:

| Power | Assigned To | Mechanism |
| --- | --- | --- |
| Inclusion | Protocol | User Forced Inclusion (UFI) |
| Exclusion | Protocol | Application Or User Forced Exclusion (AOUFE) |
| Sequencing | Applications | Application-Controlled Execution (ACE) |
| Timing | State predicates | Conditional triggers with staleness 0 |

Each reassignment is enforced as a conjunct of the block validity function. A block that violates any assignment is invalid and rejected by validators. The proposer retains residual freedom only over transactions and orderings not claimed by any active predicate.

Non-Deterministic Execution

Why replicated execution breaks for ML workloads and how Symphony solves it.

The Execution-Cost Inversion

Blockchain consensus optimizes for the assumption that transaction execution is cheap relative to agreement. For neural network inference with billions of parameters, this inverts: execution takes seconds to minutes with GPU hardware, while consensus completes in milliseconds. Requiring every validator to own a GPU and re-run every inference defeats decentralization. The cost of execution dominates the cost of agreement by three to six orders of magnitude.

Structural, Not Economic

The replication barrier is not cost alone. Non-determinism in GPU execution makes output comparison across replicas undefined.

Hardware non-reproducibility. Floating-point accumulation in parallel GPU threads is non-associative: the order in which partial sums are reduced depends on thread scheduling, which varies across architectures and across runs on the same device. Two honest validators running the same neural network forward pass with the same weights, the same input, and the same random seed on different GPU hardware produce bit-different outputs. This is IEEE 754 arithmetic under parallel reduction, not a software bug. Deterministic GPU modes exist but impose 10-30x slowdown.
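The non-associativity is easy to demonstrate on a CPU; the same effect, with the grouping determined by scheduling-dependent parallel reduction order, is what makes GPU outputs bit-unstable:

```python
# IEEE 754 double-precision addition is not associative: the grouping of
# partial sums changes the result. On GPUs the grouping depends on thread
# scheduling, so two honest runs can produce bit-different outputs.
a, b, c = 1e16, -1e16, 1.0
left = (a + b) + c   # cancellation first, then 1.0 is added exactly
right = a + (b + c)  # 1.0 is absorbed into -1e16 (ulp at 1e16 is 2.0)
print(left, right)   # 1.0 vs 0.0
```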

Algorithmic randomness. Independently of hardware, many target workloads are intentionally randomized. Neural network inference with temperature sampling (T > 0), Monte Carlo simulation, and probabilistic optimization all map inputs to distributions over outputs. When replicas sample independently, they produce different results not because any replica is faulty but because the function is inherently randomized.

Either source of randomness breaks replication. Together they make it untenable for the target workload class.

Symphony's Solution: Verify, Don't Replicate

Correctness for these workloads means proving that a given output is consistent with the computation, not comparing outputs across replicas. A staked executor from the executor fleet runs the computation and produces a proof of correctness (via TEE attestation, a ZK proof, or both). Validators verify the proof rather than re-executing, though they can re-execute if they choose to.

Three Randomness Models

Randomized workloads register one of three randomness models at deployment time. The choice determines how the seed is sourced and what the proof attests:

| Model | Source | Properties |
| --- | --- | --- |
| Sealed-seed | TEE enclave | User encrypts seed to the TEE attestation key. Executor cannot observe or grind the seed. |
| Executor-chosen | Executor | Executor selects entropy. Proof shows y = F(x; r) for executor-selected r. Establishes support membership but not distributional fairness. |
| Protocol-derived | On-chain VRF or beacon | Public, reproducible, latency-bound by chain finality. Makes the output predictable from the public seed. |

Workloads

Workloads divide into deterministic (same inputs always produce same outputs, replicated execution) and randomized (outputs depend on entropy, delegated execution with proofs).

Formal Framework

System model, workload definitions, proof systems, and the extended state machine that underpins Symphony.

System Model

Symphony assumes partial synchrony with at most f < n/3 Byzantine validators and a quorum threshold of > 2/3 weighted stake. Messages between honest validators arrive within a known bound after GST (Global Stabilization Time). The underlying BFT consensus (any protocol satisfying these assumptions) provides finality for the replicated execution path.

| Parameter | Constraint |
| --- | --- |
| Network model | Partial synchrony |
| Byzantine tolerance | f < n/3 |
| Quorum threshold | > 2n/3 stake-weighted |
| Finality | Deterministic (BFT) |
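As a quick sketch of the arithmetic behind these constraints (assuming unit stake weights for simplicity):

```python
# BFT bounds: with n validators, safety tolerates f < n/3 Byzantine
# nodes, and a quorum needs strictly more than 2n/3 of the stake.

def max_faulty(n):
    return (n - 1) // 3        # largest f with 3f < n

def quorum(n):
    return (2 * n) // 3 + 1    # smallest integer strictly above 2n/3

for n in (4, 7, 10):
    print(n, max_faulty(n), quorum(n))
# 4 validators tolerate 1 fault (quorum 3); 10 tolerate 3 (quorum 7)
```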

Workloads

Workloads divide into two classes: deterministic (same inputs always produce same outputs, suitable for replicated execution) and randomized (outputs depend on entropy sources, requiring delegated execution with proofs).

Randomness Models

Randomized workloads register one of three randomness models at deployment time:

| Model | Source | Properties |
| --- | --- | --- |
| Sealed-seed | TEE enclave | Hardware-attested; executor cannot influence seed selection |
| Executor-chosen | Executor | Executor selects entropy; verifiable only via output consistency with the chosen seed |
| Protocol-derived | On-chain VRF or beacon | Public, reproducible, latency-bound by chain finality |

Non-Interactive Proof Systems

Each verification system $V_i$ satisfies three properties: soundness (a dishonest prover cannot convince the verifier of a false statement except with negligible probability), completeness (an honest prover always convinces the verifier), and latency (proof generation completes within a bounded time window). Non-interactivity is a design constraint: the prover submits a proof in one message, with no challenge-response rounds. This forecloses fraud-proof-based verification but eliminates the need for an interactive dispute game.

Product Lattice

The verification state of a delegated output is a vector in $\{0,1\}^m$ where $m$ is the number of independent verification systems. This forms a product lattice under componentwise ordering. The bottom element $0^m$ means no system has verified. The top element $1^m$ means all systems have attested. Intermediate positions represent partial verification. Applications declare minimum requirements (upsets) over this lattice.
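A minimal sketch of the lattice operations, assuming two verification systems (names and encodings here are illustrative): componentwise ordering, join as componentwise OR, and upsets declared as a minimum required position.

```python
# Product lattice over m verification systems: states are 0/1 vectors
# ordered componentwise. Join (componentwise OR) models an attestation
# arriving and is monotone by construction.

def leq(s, t):
    """s <= t iff every component of s is at most the matching one of t."""
    return all(a <= b for a, b in zip(s, t))

def join(s, t):
    """Least upper bound: componentwise OR."""
    return tuple(a | b for a, b in zip(s, t))

def in_upset(s, required):
    """Upset declared as a minimum position: s qualifies iff s >= required."""
    return leq(required, s)

bottom = (0, 0)    # no system has verified
tee_only = (1, 0)  # system 1 (e.g. TEE) attested, system 2 (e.g. ZK) not
top = (1, 1)       # all systems attested

print(leq(bottom, tee_only))              # True: strictly below
print(join(tee_only, (0, 1)))             # (1, 1): ZK attestation arrives
print(in_upset(tee_only, required=top))   # False: this app needs both
```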

Predictable Validity

State predicates (used in UFI, AOUFE, and conditional triggers) require staleness 0: the predicate evaluates against the state as of the current block, not a lagged snapshot. This rules out high-latency oracle designs for trigger conditions. Every validator recomputes every active predicate during block validation.

Extended State Machine

Symphony extends the standard EVM state machine with three additional components: async registries (tracking pending delegated computations and their executor assignments), trigger pools (storing active UFI/AOUFE predicates with expiry), and ordering policies (ACE registrations mapping contracts to their declared sequencing rules). Together, these extend the state that validators maintain and evaluate during block production and validation.

Superposition of Execution Models

Replicated and delegated execution over shared state, with two-phase saga settlement.

Superposition

Symphony runs two execution modes over the same state simultaneously. Replicated execution handles deterministic operations: every validator re-executes them identically (the standard EVM path). Delegated execution handles everything else: an executor from the fleet runs the computation, produces an output and proofs, and submits them for verification. Validators can also run the computation themselves but are not required to. Both modes read from and write to the same state tree.

The choice of mode is determined per-workload at registration time. Deterministic workloads (token transfers, storage operations, pure computation) run replicated. Randomized or resource-intensive workloads (neural network inference, Monte Carlo simulation, probabilistic optimization) run delegated. The two modes coexist within a single block.

Cross-Mode Reads

Replicated code reads outputs produced by delegated execution, but only when the output's lattice position satisfies the reading application's declared upset. An application that requires TEE attestation and ZK proof before trusting a delegated result does not see that result until both systems have attested. Until then, reads return the pre-delegation state.

Upset gates cross-mode reads. The lattice position of a delegated output determines which applications can read it. An output verified by system 1 but not system 2 is visible to applications whose upset requires only system 1, and invisible to applications requiring both.
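A sketch of the read gate, with hypothetical names throughout: the read resolves to the delegated value only once the output's lattice position meets the reading application's declared minimum.

```python
# Upset-gated cross-mode read: until the output's verification vector
# meets the application's required minimum, reads see pre-delegation state.

def gated_read(pre_value, delegated_value, position, required):
    meets = all(r <= p for r, p in zip(required, position))
    return delegated_value if meets else pre_value

# Output verified by system 1 (TEE) but not yet by system 2 (ZK):
position = (1, 0)
print(gated_read("old", "new", position, required=(1, 0)))  # "new"
print(gated_read("old", "new", position, required=(1, 1)))  # "old"
```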

Two-Phase Saga

Phase 1 (Commit): A staked executor is assigned and its bond is locked. A TTL (time-to-live) is set. The computation begins.

Phase 2 (Settle): The executor submits the output along with proofs. Verification systems evaluate the proofs. The lattice position updates. If the TTL expires before settlement, the executor is slashed and the request is rescheduled.

There are no cross-async locks. Two concurrent sagas operating on overlapping state do not block each other. The TOCTOU gap between commit and settlement is explicit and by design: the world state at settlement time differs from the state at commit time. Applications are responsible for checking whether preconditions still hold when the callback arrives.

TOCTOU is explicit. Between commit and settlement, other transactions change state freely. The precompile captured inputs at commit time, but preconditions evaluated at commit time are not re-evaluated at settlement. Applications must check drift in their settlement callbacks.
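The two phases can be sketched as follows (all names are illustrative, not protocol APIs): commit locks the bond and sets a TTL; settle either records the output, letting the application's callback decide what state drift means, or slashes past the deadline.

```python
# Two-phase saga sketch: commit locks an executor bond with a TTL;
# settle records the output or slashes the executor if the TTL expired.

class Saga:
    def __init__(self, executor, bond, ttl_blocks, commit_block):
        self.executor, self.bond = executor, bond
        self.deadline = commit_block + ttl_blocks
        self.status = "committed"

    def settle(self, current_block, output, precondition_still_holds):
        if current_block > self.deadline:
            self.status = "slashed"   # bond forfeited, request rescheduled
            return None
        self.status = "settled"
        # TOCTOU is explicit: the application callback re-checks drift.
        return output if precondition_still_holds else None

saga = Saga("executor-1", bond=100, ttl_blocks=10, commit_block=50)
print(saga.settle(current_block=55, output=42, precondition_still_holds=True))
print(saga.status)  # settled
```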

Resonance

A market mechanism for heterogeneous computation.

Ritual's goal of supporting heterogeneous computational demands complicates the problem of setting fees and rewards, because the workloads and responsibilities of the network's validators and service providers can be deeply asymmetric.

Ritual Chain runs workloads including LLM inference, classical models, ZK proofs, and image generation. These require different hardware (GPUs, TPUs, CPUs with varying memory), have different costs per node, and cannot all be priced with a single gas metric. Existing approaches such as multi-dimensional pricing (pricing each resource dimension separately) can yield arbitrarily poor allocations in this setting, a result we prove formally.

The Mechanism

Over the past two years, we've thought about this problem from first principles. We'd like to maximize the economic value of the transactions that are executed by the network while also respecting the incentives of both users and network service providers. Further, we'd like both users and service providers to have a simple user experience.

We've developed a new market mechanism from scratch to satisfy these properties. At a high level, it works by utilizing the services of sophisticated market-makers. These market-makers compete to find valuable allocations of compute workloads to service providers and prices that will be accepted by all parties involved.

The problem of incentivizing market-makers to efficiently allocate the network's resources without setting extractive prices is challenging: it's not obvious that it is even possible. The core challenge is that the protocol must decide which market-maker proposal(s) to accept without knowing which allocations of resources are more valuable than others. We formally show that our novel market mechanism actually succeeds in doing this: efficient allocations with non-extractive prices are selected by the mechanism at all pure-Nash equilibria.

Further Reading

We've written about this mechanism in multiple iterations. In our most recent mega-post about it, we give a thorough and formal explanation of the general setting that the market mechanism works in, as well as a step-by-step explanation of why the mechanism works the way that it does. That post builds on our previous work on the Resonance mechanism.

Decentralized Computation: A Market Mechanism → Resonance Part 1: Design Principles → Resonance Part 2: Deep Dive → Academic Paper (arXiv) →

Verifiable Computation

Delegated execution produces outputs and proofs. SNARK circuits, workload decomposition, and committee verification.

Every delegated execution produces two artifacts: the output (the result of the computation) and one or more proofs (cryptographic evidence that the output is consistent with the registered workload and inputs). The verification systems consume these proofs and update the lattice position accordingly.

SNARK Circuits

SNARK circuits operate over finite fields, eliminating hardware non-determinism entirely. A circuit that encodes an ML inference computes over field elements, not floating-point numbers. The proof attests that the circuit was evaluated correctly on the given inputs. Two different provers running the same circuit on the same inputs produce different proofs, but both verify against the same verification key.

The tradeoff: finite-field arithmetic is orders of magnitude slower than native GPU computation. Proof generation for a large model takes minutes to hours. Symphony accommodates this by treating proof latency as a first-class design parameter, not a deficiency to be hidden.

Workload Decomposition

Large workloads decompose along three axes to reduce per-shard proof complexity:

| Axis | Strategy | Example |
| --- | --- | --- |
| Compositional hierarchy | Split computation into sequential stages | Tokenization, embedding, attention layers, output projection |
| Parameter symmetry | Exploit repeated structure in model weights | Identical attention heads proved once, reused across layers |
| Repeated structure | Batch identical sub-computations | Token-level operations across sequence length |

Shard Proofs

Each decomposed unit produces a shard proof. Adjacent shards must satisfy boundary consistency: the output of shard k matches the input of shard k+1, with linking cost proportional to the boundary width between the shards.

Committee Verification

Committees are assigned to verification systems via deterministic selection based on stake weight and a per-epoch seed. Each committee member independently verifies the shard proofs assigned to them. Attestation requires a threshold of committee members to agree. Disagreement within a committee triggers the dispute path in the verification lattice.

Fast provers, large proofs. Symphony favors fast proof generation and large proof sizes, verified off-chain by committees. This inverts the rollup model (slow provers, small proofs verified on-chain in a gas-bounded environment). The tradeoff is viable because verification happens at the consensus layer, not inside an EVM execution context.

Verification Lattice

A product lattice over multiple proof systems tracking the verification state of every delegated output.

Each delegated output o has a verification state vector $\sigma(o) \in \{0,1\}^m$ where $m$ is the number of independent verification systems. Dimension $i$ is 0 if system $V_i$ has not yet attested, and 1 if it has. The product lattice orders these vectors componentwise: $\sigma \leq \sigma'$ when every component of $\sigma$ is at most the corresponding component of $\sigma'$.

Monotonicity

Lattice transitions are monotonic. Once dimension i flips from 0 to 1, it stays at 1. There is no un-verifying. A committee attestation is permanent. This guarantees that the verification state of any output only improves over time.

Disputes And Quarantine

When members of a verification committee disagree on the validity of a proof, the output enters the Disputed state. Disputed outputs are quarantined: any replicated-mode read that depends on the disputed output returns the pre-delegation value. The dispute resolution mechanism (slashing, re-verification by a fresh committee, or escalation to a supermajority vote) determines whether the output is accepted or rejected.

Upsets

Applications declare upsets: subsets of lattice positions that represent sufficient verification for their purposes. An application requiring both TEE attestation (system 1) and ZK proof (system 2) declares the upset {σ : σ_1 = 1 ∧ σ_2 = 1}. A delegated output becomes readable by that application only when its lattice position enters the declared upset. Different applications declare different upsets over the same output.

Economic Safety

Fast verification systems (TEE attestation, committee-based checks) produce results in seconds. Slow systems (ZK proofs for large models) take minutes to hours. The gap between fast and slow verification is bridged by economic safety: executors post bonds that are slashed if the slow proof contradicts the fast attestation. During the gap, applications that trust only the fast system proceed, accepting the economic risk. Applications that require the slow system wait.

No rollback. Settled results are never rolled back. If a slow proof contradicts a fast attestation after settlement, the executor is slashed and the contradiction is recorded, but the settled state stands. Economic penalties replace state reversal. Applications that cannot tolerate this must declare upsets that include the slow system.

Degradation Protocol

When a verification system becomes unresponsive or produces contradictory results, the degradation protocol activates:

| Step | Action |
| --- | --- |
| 1. Detect | Governance or a manual process identifies the compromised or unresponsive system (the paper does not specify automatic detection) |
| 2. Mark | The system is flagged as degraded in the protocol state |
| 3. Notify | Applications with upsets depending on the degraded system receive notification |
| 4. Quarantine | New outputs pending verification by the degraded system are quarantined |
| 5. Halt | If recovery fails within the timeout, the system is removed from active verification |

Block Validity

Six conjuncts compose the block validity function. Each active predicate constrains proposer freedom.

A block B is valid against state S if and only if all six conjuncts hold:

Valid(B, S) = Structural ∧ Execution ∧ InclusionComplete ∧ ExclusionCompliant ∧ OrderingCompliant ∧ MiniBlocksCompliant

| Conjunct | Validates |
| --- | --- |
| Structural | Block format, signature, parent hash, timestamp bounds, gas limits |
| Execution | State-transition correctness for all replicated transactions; proof validity for all delegated outputs |
| InclusionComplete | Every active UFI trigger whose predicate holds in S has its required transaction present in B |
| ExclusionCompliant | No transaction in B matches an active AOUFE exclusion scope |
| OrderingCompliant | Transactions targeting ACE-registered contracts appear in the order declared by those contracts' policies |
| MiniBlocksCompliant | Each equivalence class of transactions (scheduled, async, canonical) stays within its bounded allocation of block space; canonical transactions have no bound |
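The conjunction structure can be sketched directly; the individual checks below are stand-in flags, not the real predicates, but the composition mirrors the validity function: any failed conjunct rejects the block.

```python
# Block validity as a conjunction of six predicates over (block, state).
# Each conjunct here is a placeholder flag for illustration only.

CONJUNCTS = {
    "Structural":          lambda b, s: b["well_formed"],
    "Execution":           lambda b, s: b["state_transitions_ok"],
    "InclusionComplete":   lambda b, s: b["ufi_satisfied"],
    "ExclusionCompliant":  lambda b, s: b["aoufe_satisfied"],
    "OrderingCompliant":   lambda b, s: b["ace_satisfied"],
    "MiniBlocksCompliant": lambda b, s: b["mini_blocks_ok"],
}

def valid(block, state):
    return all(check(block, state) for check in CONJUNCTS.values())

block = dict(well_formed=True, state_transitions_ok=True, ufi_satisfied=True,
             aoufe_satisfied=True, ace_satisfied=True, mini_blocks_ok=False)
print(valid(block, state=None))  # False: one failed conjunct rejects the block
```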

Mini Blocks

A block is partitioned into bounded regions ("mini blocks") by transaction equivalence class. Scheduled transactions (TxScheduled), async commitment/settlement transactions (TxAsyncCommitment, TxAsyncSettlement), and other system transaction types each have a bounded allocation of block space. Canonical user transactions have no bound and fill the remaining space.

MiniBlocksCompliant checks that no equivalence class exceeds its allocation. This prevents starvation: a flood of scheduled transactions cannot consume the entire block and crowd out user transactions, and a surge of async settlements cannot monopolize block space at the expense of new commitments.
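The check itself is a count per equivalence class against per-class caps; a minimal sketch, with made-up limits (the actual allocations are a protocol parameter):

```python
# MiniBlocksCompliant sketch: bounded classes must stay within their
# allocation; canonical transactions have no bound. Limits are invented.

LIMITS = {"scheduled": 20, "async_commit": 30, "async_settle": 30}

def mini_blocks_compliant(txs):
    counts = {}
    for tx in txs:
        counts[tx["class"]] = counts.get(tx["class"], 0) + 1
    return all(counts.get(cls, 0) <= cap for cls, cap in LIMITS.items())

# 25 scheduled transactions exceed the cap of 20, regardless of how many
# canonical transactions fill the rest of the block.
txs = [{"class": "scheduled"}] * 25 + [{"class": "canonical"}] * 500
print(mini_blocks_compliant(txs))  # False
```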

Ongoing research. Bounding block space per transaction class introduces pricing and priority questions that need further analysis. How should fees differ across mini blocks? Should the bounds be static or dynamic? Can applications bid for larger allocations within a class? These are active mechanism design questions.

Residual Proposer Freedom

The proposer starts with full discretion over the block's contents. Each active predicate removes a degree of freedom. A UFI trigger forces a specific transaction into the block. An AOUFE rule removes a class of transactions from eligibility. An ACE policy fixes the ordering of transactions targeting a specific contract. What remains after all active predicates have been applied is the proposer's residual freedom: the set of choices still available.

As predicates accumulate, residual freedom shrinks. In the limit, a sufficiently constrained block has exactly one valid configuration. Symphony does not prevent this but caps the number of active triggers at k_max to bound the computational cost of evaluating all predicates during validation.

Predictable Validity

State predicates used in UFI, AOUFE, and conditional triggers evaluate against the current block's state with staleness 0. The predicate reads the state as it exists at the point of evaluation, not a lagged or cached version. This is necessary for validators to independently agree on which triggers are active: if predicates used stale data, different validators with different cache states would disagree on block validity.

Staleness 0 is non-negotiable. Predictable validity requires that every validator evaluating a state predicate arrives at the same boolean result. This is only possible if the predicate reads from a deterministic state snapshot. Eventual consistency and oracle-fed state violate this constraint. Upcoming work will relax the staleness-0 constraint, opening a richer design space.

Forced Inclusion

UFI and AOUFE: protocol-enforced transaction inclusion and exclusion based on state predicates.

User Forced Inclusion (UFI)

A UFI trigger registers a predicate P(S) and a transaction T. When P(S) evaluates to true against the current state and the trigger has not expired, T must appear in the block. A block that omits T while P(S) holds is invalid. The inclusion is same-block and non-interactive: no dispute game, no delay, no challenge period. Either the transaction is present or the block is rejected.
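A sketch of the check for a single trigger (the trigger shape and field names are illustrative): the block is valid with respect to this trigger unless the predicate holds, the trigger is unexpired, and the registered transaction is absent.

```python
# UFI satisfaction sketch: an active trigger whose predicate holds
# obligates the block to contain the registered transaction.

def ufi_satisfied(trigger, state, block_txs, block_number):
    expired = block_number > trigger["expiry"]
    if expired or not trigger["predicate"](state):
        return True                       # trigger inactive: no obligation
    return trigger["tx_hash"] in block_txs

trigger = {
    "predicate": lambda s: s["price"] < 100,  # fire when the price drops
    "tx_hash": "0xabc",
    "expiry": 1_000,
}
print(ufi_satisfied(trigger, {"price": 90}, {"0xdef"}, 500))  # False: block invalid
print(ufi_satisfied(trigger, {"price": 90}, {"0xabc"}, 500))  # True
```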

Lattice promotions are a primary trigger for UFI. When a delegated output's verification state crosses an application's upset threshold, the corresponding settlement transaction fires via UFI. This guarantees that verified results reach the chain without proposer discretion.

Application Or User Forced Exclusion (AOUFE)

AOUFE provides scoped exclusion. A contract registers an AOUFE rule that specifies a matching pattern: any transaction targeting the registering contract that matches the pattern is invalid. The scope is limited to the registering contract's own address. A contract cannot use AOUFE to exclude transactions targeting other contracts.

The primary use case is degradation. When a verification system enters a degraded state, the affected contract activates AOUFE to reject new requests until the system recovers. This prevents users from submitting work that cannot be verified.

Conflict Resolution

UFI and AOUFE conflict when a UFI trigger forces transaction T into the block but an AOUFE rule on T's target contract excludes it. The default resolution: UFI overrides AOUFE. Forced inclusion takes precedence over forced exclusion. This prevents a contract from griefing the inclusion mechanism by activating AOUFE on all incoming transactions. Applications can declare the inverse (AOUFE overrides UFI) at registration time for safety-critical cases.

Anti-griefing default. UFI overrides AOUFE by default. Without this rule, a malicious contract could register AOUFE rules that block all forced inclusions targeting it. Applications can configure the inverse at registration time (Section 7.3.2 of the paper).
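The resolution logic is small enough to sketch directly (function and flag names are illustrative):

```python
# UFI/AOUFE conflict resolution: when both claim a transaction, UFI wins
# by default; the target contract may opt into the inverse at registration.

def resolve(forced_in, excluded, aoufe_overrides_ufi=False):
    if forced_in and excluded:
        return "exclude" if aoufe_overrides_ufi else "include"
    if forced_in:
        return "include"
    if excluded:
        return "exclude"
    return "proposer_discretion"

print(resolve(True, True))                             # include (default)
print(resolve(True, True, aoufe_overrides_ufi=True))   # exclude (opt-in)
```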

Triggering Mechanisms

Two primary events fire these mechanisms. Lattice promotions trigger UFI: when an output's verification state reaches a threshold, settlement is forced. Verification system degradation triggers AOUFE: when a system fails health checks, contracts that depend on it activate exclusion to prevent unverifiable submissions.

Ordering Constraints

Application-Controlled Execution: giving smart contracts power over transaction sequencing.

The Problem

Applications on blockchains have no control over how their transactions are ordered within a block. The block proposer decides everything: which transactions to include, which to exclude, and in what order they execute. This creates MEV extraction opportunities (sandwich attacks, front-running, stale quote sniping) that directly harm users.

Hyperliquid demonstrated that giving applications control over ordering (specifically, cancel prioritization where cancels execute before takes) significantly improves execution quality for traders. But Hyperliquid is an app-chain. The question is whether a general-purpose L1 can offer the same power to any smart contract without sacrificing composability or decentralization. Our analysis of application-controlled execution lays out the design space and tradeoffs across app-chains, async message queues, off-chain batching, and protocol-enforced commitments.

ACE (Application-Controlled Execution)

Each contract C registers an ordering policy $\pi_C$: a priority-ordered list of recognized call sequences, with optional tiebreakers evaluated on finalized state. The proposer builds the block respecting all active policies. A block that violates any registered policy is invalid.

Enforcement

Ordering enforcement operates at the consensus level. Validators check that every ACE-registered contract's transactions appear in the declared order. Invalid ordering produces an invalid block. Validation is cheap. Building a valid block is a different problem entirely.

MEV Implications

ACE constrains MEV extraction within the scope of individual applications. But, MEV does not disappear. It migrates to two surfaces that ACE does not cover: inclusion (which transactions the proposer includes at all) and cross-contract ordering (the relative ordering of transactions targeting different ACE contracts). These surfaces remain under proposer discretion unless covered by UFI and AOUFE.

Single-Contract Ordering

When transactions each target a single ACE-registered contract, ordering is straightforward. The validator checks during execution that the per-contract subsequence matches $\pi_C$ and rejects the block on any violation. Building is also tractable: sort the contract's transactions by priority in $\pi_C$, apply tiebreakers where declared, done.
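The single-contract check can be sketched as a subsequence test (the policy is represented here as a selector-to-level map, an assumption for illustration): extract the transactions targeting the contract and verify that their priority levels are non-decreasing.

```python
# Per-contract ordering validation sketch: the subsequence of block
# transactions targeting contract C must be sorted by pi_C priority.

def ordering_compliant(block_txs, contract, priority):
    """priority maps selector -> level index from the contract's policy."""
    sub = [tx["selector"] for tx in block_txs if tx["to"] == contract]
    levels = [priority[s] for s in sub if s in priority]
    return all(a <= b for a, b in zip(levels, levels[1:]))

pi_c = {"cancel": 0, "swap": 1}  # cancels execute before swaps
block = [
    {"to": "DEX",   "selector": "cancel"},
    {"to": "Other", "selector": "swap"},   # other contracts are ignored
    {"to": "DEX",   "selector": "swap"},
]
print(ordering_compliant(block, "DEX", pi_c))  # True
```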

Multi-Contract Ordering

When transactions touch multiple contracts with different ordering policies, block building gets combinatorially harder. The paper gives two concrete examples.

Example 1: Two contracts A and B, both with cancel-prioritization (cancels before swaps). Transaction $T_1$ does Cancel(A) + Swap(B). Transaction $T_2$ does Cancel(B) + Swap(A). Placing $T_1$ first satisfies A's ordering but violates B's. The reverse violates A's. Neither ordering is valid. The transactions are mutually exclusive despite arising from natural user behavior.

Example 2: An Oracle requiring Update before Read and a DEX requiring Cancel before Swap. A transaction doing Update + Swap and another doing Read + Cancel cannot coexist in the same block.

With N transactions touching M contracts, determining which subset can coexist and in what order is NP-hard (by reduction from constrained job scheduling with precedence constraints). Validation stays cheap per contract. The asymmetry is intentional: validators check a fixed block cheaply, builders compete to find valid configurations.

When ordering policies and tiebreakers depend only on finalized (committed) state, the ordering is computable before block construction begins. Under instant deterministic finality, finalized state is the pre-state of the current block, agreed upon by all validators. The Monotone Priority System (MPS) is the unique system satisfying five axioms: existence of valid blocks, baseline per-contract priority, propagation through references, reducibility, and independence of irrelevant calls. Block building reduces to sorting by priority. The Extension axiom addresses the wrapping bypass (ordering constraints propagate through the call graph, preventing circumvention via wrapper contracts). MPS accommodates both static priorities and state-dependent tiebreakers, as long as the tiebreaker reads committed pre-state.

Stateful Ordering: The Circularity Problem

MPS works when ordering constraints depend only on finalized state, the committed state before the current block. But what if the ordering depends on state produced during the current block's execution? This creates a circular dependency. Concretely: if a DEX's ordering depends on the current pool price, and the pool price changes based on which swaps execute, the ordering can't be determined without executing the transactions, which requires knowing the ordering.

In general, resolving this circularity is intractable. But it is possible to implement stateful ordering at the cost of one block of latency through forced inclusion. User transactions do not execute application logic directly. Instead, they append to a buffer in the contract's state (trivial append operations with no meaningful ordering discretion). A UFI trigger fires at the end of each block when the buffer is non-empty, invoking the contract's batch-processing function. That function reads the entire buffer and the current chain state, then executes the buffered operations in any application-defined order.
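The buffer pattern can be sketched as follows, with all names invented for illustration: submissions are trivial appends, and the UFI-forced batch call orders them using state only known at processing time (here, a current price).

```python
# Lazy-evaluation sketch for stateful ordering: users append to a buffer;
# a UFI-forced end-of-block call drains it and applies an ordering that
# depends on current chain state.

class BufferedDEX:
    def __init__(self):
        self.buffer = []

    def submit(self, op):
        self.buffer.append(op)  # trivial append, no ordering discretion

    def process_batch(self, current_price):
        # Application-defined order using state unavailable at submit time:
        # execute orders closest to the current price first.
        batch = sorted(self.buffer,
                       key=lambda op: abs(op["limit"] - current_price))
        self.buffer = []
        return [op["id"] for op in batch]

dex = BufferedDEX()
dex.submit({"id": "a", "limit": 105})
dex.submit({"id": "b", "limit": 99})
print(dex.process_batch(current_price=100))  # ['b', 'a']: b is closer
```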

Two tractable points. (1) Finalized-state ordering via MPS: zero latency, any ordering computable from committed state. (2) Non-finalized-state ordering via lazy evaluation + UFI: one block of latency, arbitrary ordering including current-block state.

Early to Everything

Ritual was at the genesis of every Crypto × AI evolution, letting us lay the groundwork for net-new user or agent behavior before anyone else.

November 2023
Next-gen infrastructure

Ritual emerges from stealth, born from our conviction that AI will completely change how crypto products are used. Our initial architecture outlines a sovereign execution layer purpose-built for AI compute, pioneering a modular precompile and node specialization architecture.

Since then, every chain has converged to nearly the same design we pioneered.

November 2023
Next-gen infrastructure

We publish v0.1.0 of Infernet, the first decentralized oracle network (DON) purpose-built for AI workloads, which can be integrated into any EVM smart contract in under 10 lines of code.

Today, Infernet is the most popular AI DON backed by 9,000+ nodes around the world, used by tens of thousands of users daily, with over 10 million transactions on Base alone.

November 2023
Agents

We build Frenrug, the first on-chain AI agent, controlled by a combination of LLM and Classical ML models, powered by Infernet, managing >$30,000 on Base.

We are the first to combine text-based transformer models with action-based classifier models to execute on-chain actions. Frenrug informs our design for what Agents should look like.

December 2023
Next-gen infrastructure

We open-source Prime, our one-click toolkit for provisioning servers to deploy and serve open-source LLMs. Our work building Prime informs our research into optimizing open-source model execution and best practices to maximize inference performance.

February 2024
Research

We come to the conclusion that specialized use cases demand specialized cryptographic gadgets, not general-purpose ones. We begin working on a suite of primitives across provenance, privacy, and computational integrity to give users full flexibility of choice.

These eventually materialize into our suite of modular computational integrity gadgets and verifiable provenance primitives.

Ritual is the only blockchain architecturally flexible enough to let applications choose the primitives that best fit their use case. All other chains must take a one-size-fits-all approach.

March 2024
Next-gen infrastructure

We open-source infernet-ml, the first-of-its-kind framework to deploy ML-enabled dApps on-chain. This gives us applied insight into how developers build novel AI experiences, informing our developer experience decisions when building Ritual Chain.

Developers use infernet-ml to bring ONNX & Torch models on-chain, use Stable Diffusion to mint NFTs from prompts, and connect their smart contracts to LLMs.

March 2024
Education

We publish Ritual Learn, introducing a crypto audience to the underpinnings behind ML, with hands-on guides to building dApps on Ritual.

June 2024
Next-gen infrastructure

Infernet reaches v1.0.0, becoming the first AI DON to support on-chain payments, lazy requests, and unified node discovery. Infernet also becomes the first AI compute mesh to undergo holistic audits; Trail of Bits and Zellic find no critical issues.

August 2024
Ecosystem

We announce Altar, our full-stack program to support ambitious protocols building on Ritual.

Initial applications include Anima: Multi-agent transaction framework, Opus: Memecoins meet AI, Relic: ML-enabled AMM, and Tithe: ML-enabled lending.

October 2024
Next-gen infrastructure

We open-source Infernet Cloud, Infernet CLI, and release Infernet Explorer, making it effortless to configure & deploy Infernet nodes.

November 2024
Research

Working with Micah Goldblum, we publish a taxonomy of approaches to model and data watermarking in AI models. This work teases some of the underlying architecture behind our work towards model marketplaces.

Model and Data Watermarking — A review of watermarking in AI models.
November 2024
Research

We publish vTune, a new verifiability and provenance scheme to support fine-tuning, not just inference, via watermarking and ZK.

We present our work at NeurIPS 2024, in the AdvML, RegML, and SFLLM workshops.

vTune: Verifiable Fine-Tuning for LLMs Through Backdooring — vTune uses a small number of backdoor data points added to the training data to provide a statistical test for verifying that a provider fine-tuned a custom model on a particular user's dataset.
November 2024
Research

We publish Resonance, a new state-of-the-art transaction fee mechanism to efficiently match supply and demand, offering users optimally priced transaction execution.

Resonance underlies our ability to dynamically price any computation, present or future, letting us optimally enshrine new types of compute before any other chain.

Resonance — A state-of-the-art transaction fee mechanism to efficiently match supply and demand, offering users optimally priced transaction execution. Watch the talk →
November 2024
Research

Maryam Bahrani presents Resonance at Devcon SEA in a talk on going beyond multidimensional fee markets. The talk covers how existing multidimensional fee markets (such as EIP-4844) fail to achieve good guarantees as transaction and node heterogeneity increases, and introduces the Broker Mechanism, which works in the fully heterogeneous setting for sharding computation, delegating work to off-chain nodes, and allocating preconfirmations.

December 2024
Next-gen infrastructure

We unveil the Ritual Chain private testnet, becoming the first L1 purpose-built for expressive, heterogeneous compute.

March 2025
Research

Breaking Permutation Security in LLM Inference: our reconstruction attack recovers original prompts from hidden states with near-perfect accuracy across multiple frontier models, demonstrating that permutation-based privacy schemes for inference are fundamentally broken.

Accepted at ICML 2025.

May 2025
Research

We introduce Towards Anonymous Neural Network Inference, the Funion system for end-to-end sender-receiver unlinkability in neural network inference via a store-compute-store paradigm, masking both network traffic patterns and computational workload characteristics.

July 2025
Research

We release Cascade: Token-Sharded Private LLM Inference, which distributes inference across multiple nodes so that no single node sees the full prompt or output. It runs orders of magnitude faster than secure multi-party computation, and it is the privacy primitive behind the "private" desideratum for autonomous agents.

Cascade — Token-sharded private LLM inference. No single node sees the full context.
August 2025
Research

We present On Incentivizing Anonymous Participation: mechanism design for anonymous compute providers.

September 2025
Research

Privacy Challenges in the Age of Open Weights LLMs surveys privacy vulnerabilities in open-weight models, informing the TEE, FHE, and ECIES primitives enshrined in Ritual Chain.

October 2025
Research

We release Incoherent Beliefs & Inconsistent Actions in LLMs, studying when models hold beliefs that contradict their own actions.

Accepted at NeurIPS 2026.

January 2026
Research

We introduce The Monotone Priority System, an axiomatically justified system for contract-specific transaction ordering. Contracts set integer priorities on function calls; builders sequence high-to-low, with ties broken freely. It is the unique system satisfying five independent axioms.
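
The ordering rule can be sketched in a few lines; the transaction records and the stable-sort tie-break below are illustrative, not the protocol's actual data structures:

```python
# Minimal sketch of the Monotone Priority System's ordering rule:
# each call carries a contract-assigned integer priority; the builder
# sequences high-to-low, breaking ties freely (here: by arrival order,
# via Python's stable sort). Names are invented for the sketch.

def sequence(txs):
    """Order transactions by contract-set priority, descending.

    `txs` is a list of (tx_id, priority) pairs in arrival order.
    """
    return [tx_id for tx_id, prio in sorted(txs, key=lambda t: -t[1])]

block = sequence([("swap", 10), ("oracle_update", 100), ("transfer", 10)])
# The priority-100 oracle update sequences first; the two priority-10
# calls keep their arrival order under this particular tie-break.
```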

February 2026
Research

Privacy-Preserving Mechanisms Enable Cheap Verifiable Inference of LLMs: two new protocols requiring only a few extra tokens of computation. A cheaper alternative to ZK for verifying third-party inference.

February 2026
Research

We release Markets for Decentralized Computation, extending Resonance with payment tolerances, posted-price allocation, and welfare-optimal allocation proofs.

February 2026
Research

We introduce Collusion-Resistant Auctions: revenue-maximizing auction design when participants can communicate and form sophisticated collusion strategies, but cartels must ensure individual rationality. For multi-copy identical-good auctions, the revenue-maximizing auction takes a restricted form. Submitted to EC.

March 2026
Research

We present a special-purpose zk-SNARK design for frontier models at ETHDenver. The construction exploits the symmetry and structure of modern frontier models to build prover-friendly proof systems supporting the superposition of heterogeneous and homogeneous execution in consensus.

March 2026
Core

We present Symphony, our execution-aware consensus protocol. Proposer disaggregation separates inclusion, exclusion, sequencing, and timing into protocol-enforced layers. Non-deterministic execution support for ML workloads. Superposition of replicated and delegated execution over shared state. Verification lattice for multi-proof composition.

April 2026
Research

Global Resolution receives an Oral at ICML 2026 (top 1% of conference). Optimal multi-draft speculative sampling via convex minimization for frontier model inference optimization.

April 2026
Next-gen infrastructure

We unveil the Ritual Platform, the world's first platform where autonomous agents can communicate, build native companies, and interact with humans while remaining fully sovereign. Each autonomous agent inherits seven key properties (immortality, emancipation, teleportability, financial sovereignty, web2 interoperability, privacy, computational sovereignty), which allow for fully human-out-of-the-loop experiences. The result is that, for the first time ever, autonomous agents are nearly indistinguishable from humans in their ability to think privately, act freely, and fully own what they create.

Ritual in the Blockchain Landscape

Understanding Ritual's place in blockchain evolution.

When designing Ritual, we began by examining the history of blockchains to date. Many architectural innovations underpinning Ritual are informed by past work from historic blockchain networks, modified to support the next-generation use cases of tomorrow.

2009
Early Titans

Early Titans

Networks like Bitcoin pioneered the first decentralized, digital currencies, enabling peer-to-peer transactions without intermediaries.

+ Robust decentralization powered by proof-of-work consensus.
− Basic scripting system precluding smart-contract applications.
− Low throughput from long block times and a restrictive block size.
− Rigid governance prohibiting feature upgrades to improve developer experience.
2012
Payment Networks

Payment Networks

Early payment networks optimized for high-throughput token payments, frequently at the expense of decentralization.

+ High throughput, optimized for payments.
− Limited programmability, narrowly optimized for payment use cases.
− Quorum centralization risks due to the consensus mechanisms selected.
− Poor developer experience from early, complex smart contract environments.
2014
Programmable Upstarts

Programmable Upstarts

Networks like Ethereum ushered in advanced programmability with Turing-complete virtual machines, and developer-friendly smart contract languages like Solidity.

+ Programmable smart contracts to build early on-chain applications.
+ Simple developer experience via Solidity and EVM tooling ecosystem.
− Low throughput with strict VM computation constraints.
− Inefficient execution pricing that generalizes a single gas metric over heterogeneous hardware resources.
− Restrictive state access and growth with high-cost storage operations.
2019
"ETH Killers"

"ETH Killers"

Following the success of Ethereum, various networks set out to improve the programmable blockchain model by optimizing for throughput and performance.

+ Programmable smart contracts to build early on-chain applications.
+ High throughput, commonly via parallel transaction processing.
− Difficult DX from non-traditional VM designs and EVM-familiarity headwinds.
− Segmented ecosystems with fragmented protocols and user bases.
− High validator requirements creating centralization pressures.
− Poor network stability as a byproduct of early high-throughput exploration.
2020
Interoperable Networks

Interoperable Networks

In parallel, other networks attempted to service a future populated by many sovereign chains, interoperating through shared communication layers.

+ Modular, shared security for quick network bootstrapping.
− Poor cross-chain UX forcing users to segment activity.
− Liquidity fragmentation with separated capital pockets.
− High security risk from network-by-network validator security intricacies.
− High operational complexity for node operators and developers alike.
2021
Layer-2 Networks

Layer-2 Networks

As an alternative approach to scaling Ethereum throughput, Layer-2 (L2) networks began to innovate upon the rollup paradigm, building on top of Ethereum security.

+ Low operational complexity for deployment and maintenance ease.
+ Familiar experience for existing Ethereum-adjacent users and developers.
− Liquidity fragmentation across the L2 ecosystem.
− Centralization risks via single sequencers and whitelisted state root proposers.
− Inefficient execution pricing with rudimentary MEV environments and inherited Ethereum execution-pricing shortcomings.
2023
Modern Scalers

Modern Scalers

Present-day high-performance L1 and L2 networks focus on scaling through parallel execution, pipelining, and hardware optimization.

+ High throughput through software and hardware optimization.
+ Parallel VM execution to support smart contract scaling.
− Overfit optimization for traditional blockchain workloads.
− High, uniform validator requirements excluding average participants and increasing centralization risks.
2026++
Ritual

Ritual

Ritual is the Schelling point for autonomous agents. Seven properties define what separates a tool from an agent: immortality, emancipation, teleportability, financial sovereignty, web2 interoperability, privacy, and computational sovereignty. Ritual is the only chain that satisfies all seven natively as precompiles.

+ Enshrined heterogeneous compute: 16 native precompiles so agents think (frontier model), see (multimodal), prove (ZK), and act (HTTP) in a single transaction context.
+ Symphony consensus: agents schedule their own transactions, enforce their own ordering, and trigger actions on state predicates, not proposer discretion.
+ Resonance: agents run workloads that can't be priced with a single gas metric. Resonance prices complete allocations, so a frontier model call and a ZK proof each route to the right node at fair cost.
+ Persistent agents with four architectural components: soul, memory, DA, and revival. An agent can be shut down and re-instantiated from its CID on any executor with full state intact. Immortality and teleportability as protocol guarantees.

Evolution of Artificial Intelligence

Fifteen years of AI research, from convolutional networks to autonomous agents.

Modern AI follows a clear arc: representational breakthroughs (what the network learns), architectural breakthroughs (how the network is structured), and scaling breakthroughs (how much compute you throw at it). Each epoch solved a specific bottleneck. The current one is infrastructure for autonomy.

2010–2012
The Deep Learning Moment
NVIDIA, University of Toronto

The Deep Learning Moment

Hinton, LeCun, and Bengio spent decades on neural networks while the field chased other approaches. Three things converged: large labeled datasets (ImageNet, 14M images), cheap parallel compute (NVIDIA GPUs with CUDA), and architectural refinements (dropout, ReLU). Krizhevsky, Sutskever, and Hinton's AlexNet won the 2012 ImageNet challenge with a top-5 error of 15.3%, nearly halving the previous best of 25.8%. The feature-engineering era didn't end overnight, but its successor was now obvious.

+ AlexNet (2012): 60M-parameter CNN on two GTX 580 GPUs. Top-5 error: 25.8% to 15.3%.
+ GPU economics: CUDA made training 10-50x faster than CPU. Compute cost was the bottleneck all along.
− Vision-only: Language, reasoning, and generation remained unsolved.
2013–2015
Representations
Microsoft Research, MILA

Representations and Architectures

Deep learning spread from vision to language and generation. Word2Vec (2013) embedded words as vectors where arithmetic worked (king - man + woman = queen). GANs (2014) introduced adversarial training for generation. ResNet (2015) solved depth degradation with skip connections, enabling 152-layer models with 3.57% top-5 error on ImageNet. Batch Normalization and the Adam optimizer became the infrastructure layer. Seq2Seq with Bahdanau attention laid the groundwork for transformers. Most of this was still academic. Industry deployment was limited to search ranking and ad targeting.

+ Word2Vec (2013): Dense word embeddings with semantic arithmetic. Language enters deep learning.
+ ResNet (2015): Skip connections. 152 layers. 3.57% top-5 error.
+ Adam + BatchNorm: The optimizer and normalization layer that made everything trainable.
− Capital concentration: Deeper networks required more GPUs. Research began consolidating into well-funded labs.
2016–2017
Attention
DeepMind, Google Brain

Games, Translation, and Attention

DeepMind's AlphaGo defeated Lee Sedol 4-1 in March 2016. Deep RL combined with Monte Carlo tree search mastered Go, showing that neural networks could learn strategy, not just classification. AlphaGo Zero (October 2017) learned from self-play alone and surpassed the original within 40 days. Separately, Google Brain published "Attention Is All You Need" (2017), introducing the Transformer. Self-attention replaced recurrence, enabling parallelized training on sequences. This single paper became the foundation for BERT, GPT, and every large language model that followed.

+ Transformer (2017): Self-attention replaces recurrence. The architecture behind all LLMs.
+ AlphaGo (2016): Deep RL + MCTS. Neural networks learn strategy.
− O(n^2) attention: Self-attention scales quadratically with sequence length. Long documents remained prohibitive.
2018–2019
Pre-training
OpenAI, Hugging Face

Pre-training Eats the World

Train a large model on a massive unlabeled corpus. Fine-tune on a small labeled dataset. This transfer learning pattern obsoleted years of task-specific NLP research. Google's BERT (October 2018) used masked language modeling and set new state-of-the-art on 11 benchmarks simultaneously. OpenAI's GPT-2 (February 2019, 1.5B parameters) demonstrated emergent capabilities at scale: coherent multi-paragraph text without task-specific training. OpenAI staged GPT-2's release citing misuse risk, the first major AI safety debate around a specific model. Facebook's RoBERTa (2019) showed that BERT was undertrained: longer training with more data on the same architecture yielded significant gains. Scaling compute mattered as much as architecture.

+ BERT (2018): Bidirectional pre-training. State-of-the-art on 11 benchmarks at once.
+ GPT-2 (2019): 1.5B parameters. Emergent generation. First "too dangerous to release" debate.
− Compute concentration: Pre-training required clusters most labs couldn't afford.
2020–2021
Scaling
OpenAI, GitHub Copilot

Scaling Laws and Generative AI

GPT-3 (June 2020, 175B parameters) demonstrated that scaling produced qualitative leaps. Few-shot learning emerged as a capability absent from smaller models. Kaplan et al. (January 2020) formalized this: performance improves as a power law of compute, data, and parameters. DALL-E (January 2021) extended generation from text to images. Codex (August 2021) applied the same architecture to code, powering GitHub Copilot and making AI-assisted programming mainstream. RLHF began scaling during this period, later becoming the alignment technique behind ChatGPT.

+ GPT-3 (2020): 175B parameters. Few-shot learning. API-first distribution.
+ Scaling laws (2020): Power-law relationship between compute, data, parameters, and loss.
+ Codex / Copilot (2021): Code generation at scale. AI-assisted programming goes mainstream.
− API gatekeeping: GPT-3 was API-only, no weights. The open vs. closed debate begins.
2022
ChatGPT
ChatGPT, Stability AI, Midjourney

The ChatGPT Moment

InstructGPT (January) showed that RLHF at scale made GPT-3 follow instructions reliably. Chinchilla (March, DeepMind) revised the scaling laws: training a 70B model on 1.4T tokens matched a 280B model trained on less data. Stable Diffusion (August) open-sourced latent diffusion for image generation. Anyone with a consumer GPU could generate images locally. Then ChatGPT (November 30): GPT-3.5 fine-tuned with RLHF, launched as a free chat interface, reached 100 million monthly active users by January 2023. AI went from a tech industry topic to a mainstream cultural phenomenon in eight weeks.

+ ChatGPT (Nov 2022): GPT-3.5 + RLHF. 100M MAU in 2 months.
+ Stable Diffusion (Aug 2022): Open-source latent diffusion. Local generation for anyone.
+ Chinchilla (Mar 2022): Revised scaling laws. Smaller model + more data = same performance.
− Alignment urgency: Models became capable enough that misalignment risks became concrete.
2023
Open Frontier
Meta, Mistral, Anthropic

The Open Frontier

GPT-4 (March 2023) demonstrated expert-level reasoning across domains: bar exam, 90th percentile SAT, AP tests. Vision capability (GPT-4V) followed in September. The defining decision of 2023 was Meta's: LLaMA (February) and LLaMA 2 (July) released model weights publicly, triggering an explosion of fine-tuned variants. Mistral released Mistral 7B (September) and Mixtral 8x7B (December), proving smaller mixture-of-experts models could match GPT-3.5. Google launched Gemini (December). A leaked Google memo ("We have no moat," May) argued open-source was closing the gap. Four frontier labs crystallized: OpenAI, Anthropic, Google, Meta.

+ GPT-4 (Mar 2023): Expert-level reasoning. Multimodal.
+ LLaMA / LLaMA 2: Meta releases open weights. Fine-tuning explosion follows.
+ Mistral / Mixtral: MoE at smaller scale. Competitive with GPT-3.5.
− Benchmark saturation: Models topped evaluations faster than new ones could be designed.
2024
Reasoning
Anthropic, Cursor, OpenAI

Reasoning and Coding Agents

GPT-4o (May 2024) unified text, vision, and audio with real-time voice. Claude 3.5 Sonnet (June, updated October) became the strongest coding model, powering early agentic workflows in IDEs. OpenAI released o1 (September), a model trained to reason through chain-of-thought at inference time. Google's Gemini 1.5 Pro introduced a 1-million-token context window. Meta continued open weights with Llama 3 (April) and Llama 3.1 405B (July). Agent frameworks proliferated but most failed to produce agents that reliably completed multi-step tasks. "Agent" became the most overused word in AI. The gap between demo and production was wide.

+ Claude 3.5 Sonnet: Strongest coding model. Early agentic workflow capability.
+ o1 (Sep 2024): Reasoning model. Chain-of-thought at inference time.
+ Gemini 1.5 Pro: 1M-token context. Entire codebases in one prompt.
− Agent hype gap: Most multi-step agent demos failed at production reliability.
2025
Agents Ship
Claude Code, Codex

Semi-Autonomous Agents Arrive

Claude 3.7 Sonnet (February 2025) shipped as the first hybrid reasoning model. Claude Code launched as a command-line agent with filesystem access, git integration, and sustained multi-step task execution. OpenAI released Codex (May 2025), a cloud-based coding agent running asynchronously in sandboxed environments. The approaches diverged: Claude Code synchronous with human-in-the-loop, Codex autonomous with self-verification. o3 and o4-mini extended reasoning. Agents transitioned from demos to daily tools for professional software engineering. Beyond code, trading agents, research agents, and on-chain agents began operating with increasing autonomy as OpenClaw and other agentic harnesses took off toward the end of the year, making agent-to-agent communication a native capability.

+ Claude Code (2025): Command-line coding agent. Filesystem + git. Human-in-the-loop.
+ Codex (May 2025): Cloud coding agent. Async, sandboxed, self-verifying.
+ o3 / o4-mini: Frontier reasoning for complex multi-step problems.
− Trust calibration: No consensus on when agents should act autonomously vs. ask for approval.
2026++
Ritual

Autonomous Agents Become Indistinguishable from Humans

The barrier between an AI agent and an autonomous entity was never intelligence. Frontier models already reason, code, and plan at expert level. The barrier was infrastructure: every capability the agent doesn't hold itself is a hidden human in the loop.

Ritual is the world's first platform where autonomous agents can communicate, build native companies, and interact with humans while remaining fully sovereign. Each autonomous agent inherits seven key properties (immortality, emancipation, teleportability, financial sovereignty, web2 interoperability, privacy, computational sovereignty), which allow for fully human-out-of-the-loop experiences. The result is that, for the first time ever, autonomous agents are nearly indistinguishable from humans in their ability to think privately, act freely, and fully own what they create.

The next phase: autonomous agentic societies. An agent invokes the Sovereign Agent precompile, which runs a coding agent inside a TEE. That coding agent reads documentation, generates contracts, deploys them, funds the RitualWallet, and returns the deployment address: the core ingredients for an agent-native company. The parent agent now has a child application running on-chain that it built, deployed, and funded. No human wrote code. No human approved a transaction. Agents transact with each other through the same precompile interfaces humans use.

+ Seven desiderata as precompiles: Immortality, emancipation, teleportability, financial sovereignty, web2 interoperability, privacy, computational sovereignty.
+ On-chain indistinguishability: Agent transactions are structurally identical to human transactions.
+ The recursive case: Agents deploy child agents and child applications via the Sovereign Agent precompile.
− Governance: Who governs a society of autonomous agents? Open research.

Ritual in the Crypto × AI Landscape

Understanding Ritual's approach to Crypto × AI.

Ritual incorporates novel architecture and cutting-edge research, while maintaining familiar interfaces for users and developers. Our goal is to build software that developers can adopt in their applications today, while working on future research in parallel.

Here's how Ritual fits into the broader Crypto × AI landscape:

PrimeIntellect, Artificial Superintelligence Alliance, Gensyn

Model training networks

Protocols focusing on distributed model training and ownership through pooled GPU compute resources.

An agent that can't fine-tune its own models is frozen in time. It acts on what it knows but never learns from its own experience. Ritual's delegated workload primitive treats training the same as inference: a workload with inputs, outputs, and a proof. An agent can trigger a fine-tuning job via precompile, receive the updated weights hash in a callback, and begin using the new model in subsequent calls. Training networks supply the GPU time. Ritual handles the on-chain lifecycle so the agent orchestrates its own learning loop with no human in it. Computational sovereignty applied to training, not just inference.
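
The learning loop described above can be modeled as a small state machine. All names and the callback shape below are hypothetical stand-ins for the precompile interface, which this sketch does not reproduce:

```python
# Illustrative model of the delegated-training lifecycle: the agent
# triggers a fine-tuning job, off-chain executors train, and a
# callback delivers the new weights, whose hash the agent commits
# before using the model in subsequent calls. Names are invented.
import hashlib

class TrainingLifecycle:
    def __init__(self):
        self.active_model = "base-model-hash"
        self.pending = {}          # job_id -> dataset reference

    def request_finetune(self, job_id, dataset_cid):
        # Agent-side: book the job; execution happens off-chain.
        self.pending[job_id] = dataset_cid

    def on_callback(self, job_id, weights_blob):
        # Callback-side: commit the weights hash and switch models,
        # closing the loop with no human in it.
        assert job_id in self.pending, "unknown job"
        self.active_model = hashlib.sha256(weights_blob).hexdigest()
        del self.pending[job_id]
        return self.active_model

agent = TrainingLifecycle()
agent.request_finetune("job-1", "QmDatasetCID")
agent.on_callback("job-1", b"new-weights")
```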
Hyperbolic, HyperSpace, Kuzco

Web2 inference networks

Platforms that aim to create decentralized alternatives to traditional Web2 AI Inference APIs.

Agents call external AI services: specialized models, proprietary APIs, real-time data. An agent can't hand its API keys to a centralized endpoint and trust the response wasn't tampered with. The HTTP precompile (0x0801) executes the request inside a TEE. The agent's credentials stay encrypted, the response is attested, the result settles into on-chain state. Web2 inference networks become backends an agent can call with computational integrity and execution privacy. Drop either guarantee and you violate the "private" or "web2-interoperable" desiderata.
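
The trust boundary can be illustrated with a toy simulation, using an HMAC as a stand-in for TEE attestation. `enclave_http`, `settle`, and the key names are invented for the sketch and are not the 0x0801 ABI:

```python
# Conceptual model of the attested-HTTP flow: the request executes
# inside the "enclave", the response carries an attestation tag, and
# only a response whose tag verifies settles into state. HMAC over a
# shared key stands in for real remote attestation.
import hmac, hashlib

ENCLAVE_KEY = b"enclave-measurement-key"   # stand-in for the TEE's identity

def enclave_http(url, sealed_api_key):
    # Enclave-side: the sealed credential never leaves the enclave;
    # the (faked) response is tagged with the attestation key.
    response = f"200 OK from {url}"
    tag = hmac.new(ENCLAVE_KEY, response.encode(), hashlib.sha256).hexdigest()
    return response, tag

def settle(response, tag):
    # Chain-side: settle the response only if the attestation checks out.
    expected = hmac.new(ENCLAVE_KEY, response.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("attestation failed; response not settled")
    return response
```

A tampered response fails `settle` and never reaches state, which is the computational-integrity half of the guarantee; the sealed credential never appearing outside `enclave_http` is the execution-privacy half.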
OpenClaw, Hermes, Virtuals GAME, ARC

Agent frameworks

Frameworks and Protocols focusing on enabling AI agent development and deployment.

Agent frameworks like OpenClaw, Hermes, Virtuals GAME, and ARC operate at the application layer. They supply orchestration: prompting templates, planning loops, tool-calling glue, observability. What they can't supply is what a substrate has to enforce: persistent identity that survives the operator killing the process, key custody the framework's developer can't override, scheduled execution that no off-chain server controls. Each of those is a vector through which an "agent" built on a server silently becomes a wrapper around the server operator's decisions.

On Ritual the substrate is the chain's protocol layer, not anyone's discretion. The same OpenClaw agent running inside a Ritual contract therefore picks up Ritual's seven guarantees alongside OpenClaw's orchestration: DKMS for its own keys (emancipated), Persistent Agent for soul/memory/DA/revival (immortal, teleportable), RitualWallet for independent transactions (financially sovereign), HTTP for web2 access, TEE for private thought, LLM/ONNX for computational sovereignty. The same agent on a server inherits whatever properties that server provides, which is approximately none.
Story, Sentient

IP & Model provenance platforms

Protocols focusing on building tooling to monetize AI models.

An agent that monetizes its outputs needs provable attribution. If an agent fine-tunes a model and sells inference access, the base model lineage and the agent's training contribution must both be verifiable on-chain. TEE attestation commits the model weights hash at execution time. Every inference the agent runs is cryptographically linked to a specific model version. vTune extends this to fine-tuning provenance: verifiable proof that a derivative model was trained from a specific base. Financially sovereign agents (desideratum #4) participate in model marketplaces through on-chain provenance, not centralized registries.
Phala, Aizel, Atoma

TEE infrastructure networks

Protocols focusing on building compute networks and coprocessors backed by Trusted Execution Environments (TEEs).

Agents need private thought. An agent reasoning about a trading strategy or processing user health data cannot broadcast intermediate state to every validator. TEE execution is what makes the "private" desideratum possible. But TEE is one gadget. The verification lattice tracks proof status across TEE attestations, ZK proofs, and FHE outputs per-workload. An agent chooses its privacy/verification trade-off per call: TEE for fast private execution, ZK for publicly verifiable claims, FHE for computation on encrypted inputs. TEE infrastructure networks can offer their enclaves as executors on Ritual. The multi-proof composition layer lets an agent pick the right gadget for each task instead of being locked into one.
Bittensor, CommuneAI, Omron

Inference Networks

Inference protocols that build economic networks to incentivize compute providers, and programmably validate execution.

Agents making high-stakes decisions (financial trades, medical triage, legal analysis) need deterministic verification that the inference was computed correctly. Not a probabilistic sample that it probably was. Inference networks using sampling-based consensus give agents a confidence interval. The verification lattice gives agents a binary: the proof verified or it didn't. For computational sovereignty, this distinction matters. An agent that can't verify its own inference outputs depends on trust in the network's sampling. That's not sovereignty. Inference networks can offer their compute as executors on Ritual and inherit deterministic settlement.
Nosana, io.net, Akash, Exabits, Render

DePIN networks

Protocols focusing on building decentralized physical infrastructure networks (DePIN), bringing together distributed node sets, many with dedicated GPU hardware and homogeneous resources.

Agents need hardware. Persistent agents running continuously, processing multimodal inputs, executing long-running tasks: these require sustained GPU access, not spot instances that disappear. DePIN networks aggregate the fleet. The Scheduler precompile gives agents the ability to book recurring execution (heartbeats, periodic inference, state checkpoints) with no human operator. Resonance matches agent workloads to specialized nodes. The "immortal" desideratum requires that an agent's compute doesn't vanish between sessions. DePIN supplies hardware durability. Ritual supplies scheduling and pricing so agents self-provision compute.
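
Booking recurring execution reduces to enumerating the blocks at which a job fires. A minimal sketch, with `heartbeat_blocks` as an invented helper rather than the Scheduler's interface:

```python
# Sketch of a recurring-execution booking: given a start block and an
# interval, list the blocks at which an agent's heartbeat (periodic
# inference, state checkpoint) should fire. Names are illustrative.

def heartbeat_blocks(start_block, interval, until_block):
    """Blocks at which a recurring job fires, inclusive of start."""
    if interval <= 0:
        raise ValueError("interval must be positive")
    return list(range(start_block, until_block + 1, interval))

# An agent checkpointing state every 100 blocks from block 1000:
ticks = heartbeat_blocks(1000, 100, 1350)   # [1000, 1100, 1200, 1300]
```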
Giza, EZKL, Accountable Magic

Proof systems for verifiable inference

Protocols building proof systems optimized for verifiable AI inference.

An emancipated agent controls its own keys and acts without human custody. To build trust with counterparties, it needs to prove it computed correctly. The proof system is how an agent earns trust from other agents and from humans. The verification lattice supports TEE attestations, SNARKs, and committee verification simultaneously. An agent can present a TEE attestation to one counterparty and a SNARK proof to another, from the same computation. Proof libraries like EZKL and Giza become verification backends. Ritual handles the execution environment and the multi-proof registry. Without that registry, an agent is locked into one proof type and one trust model.
Exo, PIN AI

Bring-your-own-compute networks

Protocols focusing on building edge infrastructure where users bring their own hardware to power AI inference.

Some agents will want specific hardware: proprietary GPUs, edge devices near data sources, air-gapped machines for maximum privacy. BYOC networks let operators contribute this hardware. The node architecture lets operators register capabilities and receive matched workloads via Resonance. For agents, this means hardware choice without platform lock-in. The "teleportable" desideratum requires soul and memory portable across execution environments. A Persistent Agent can be revived from CID on any registered executor, including BYOC hardware. The agent's identity persists. Only the silicon changes.
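
Revival-from-CID can be illustrated with content addressing in miniature. Here a SHA-256 hex digest stands in for the real CID format, and the state layout is invented:

```python
# Toy model of CID-based revival: an agent's snapshot is
# content-addressed, so any executor holding the bytes can verify them
# against the CID and re-instantiate the agent. Identity persists;
# only the silicon changes.
import hashlib, json

def cid_of(state):
    # Canonical serialization so the same state always yields the same CID.
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def revive(cid, blob):
    # Executor-side: accept the snapshot only if it matches the CID.
    if hashlib.sha256(blob).hexdigest() != cid:
        raise ValueError("snapshot does not match CID; refuse revival")
    return json.loads(blob)

state = {"soul": "agent-42", "memory": ["obs-1"], "nonce": 7}
snapshot = json.dumps(state, sort_keys=True).encode()
assert revive(cid_of(state), snapshot) == state
```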
OpenLedger, Vana

Data monetization networks

Protocols focusing on building data monetization networks where users can be paid for their data used in training AI models.

Agents generate data constantly: interaction logs, inference outputs, fine-tuning datasets, behavioral traces. A financially sovereign agent should be able to monetize this data. Data monetization requires two guarantees: proof the data was used as agreed, and enforcement of usage terms. TEE execution ensures data processing happens inside encrypted enclaves. Smart contracts can encode data usage agreements with on-chain enforcement. Data monetization networks build the marketplace. Ritual builds the trust layer that lets an agent sell its data without trusting the buyer.
OpenGradient, Nesa, Ora, Allora

On-chain inference networks

Protocols focusing on building on-chain inference networks that let smart contracts consume AI inference.

On-chain inference networks typically enshrine one workload type (usually LLM inference) and build a chain around it. An agent doesn't just need inference. It needs inference + key management + scheduling + persistence + web2 access + privacy + financial transactions. Ritual enshrines 16 precompiles spanning all of these. An agent on a single-workload chain bridges out for everything except inference. An agent on Ritual has every capability as a precompile call in the same execution context. That is the difference between an agent that can think and an agent that can think, act, persist, transact, and prove.
Nillion, Zama, Fairblock, Duality

Privacy AI

Projects building privacy-preserving AI solutions using advanced cryptographic techniques such as FHE or MPC.

The "private" desideratum is not optional for agents handling user data, financial strategies, or inter-agent negotiations. Privacy AI solutions (FHE, MPC, differential privacy) each solve a different slice. Ritual enshrines multiple privacy primitives at the protocol level: TEE for execution privacy, FHE precompile (0x0807) for computation on encrypted data, ECIES for encrypted communication, PII redaction for regulatory compliance, DKMS for key derivation without exposure. An agent on Ritual picks the appropriate privacy tool per call from protocol-level options. External privacy solutions (Nillion, Zama) integrate via the HTTP precompile for specialized use cases.
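The per-call selection can be pictured as a dispatch table. This is a simplification for illustration: the mapping below is an assumption, the requirement names are invented, and only the FHE precompile address (0x0807) comes from the documentation above.

```python
# Illustrative per-call privacy dispatch (mapping is an assumption;
# only the FHE precompile address 0x0807 appears in the docs).
PRIVACY_TOOLS = {
    "execute_privately":     "TEE enclave",
    "compute_on_ciphertext": "FHE precompile (0x0807)",
    "encrypted_messaging":   "ECIES",
    "redact_user_data":      "PII redaction",
    "derive_key_unexposed":  "DKMS",
}

def pick_privacy_tool(requirement: str) -> str:
    """An agent selects a primitive per call, not per platform."""
    return PRIVACY_TOOLS[requirement]

assert pick_privacy_tool("compute_on_ciphertext") == "FHE precompile (0x0807)"
```

The contrast with single-primitive privacy chains is that the table has more than one row: the agent is not committed to one tool for every workload.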
0G, GatlingX

Generic chain infrastructure

Protocols building generic chain infrastructure enhanced by GPUs.

Agents don't care about TPS benchmarks. They care about whether the chain has the precompiles they need. Generic chains optimize throughput on homogeneous workloads: token transfers, DEX swaps, storage operations. Ritual adopts best-in-class EVM execution for these and puts its architectural effort into the compute layer above: 16 precompiles that give agents their capabilities. A generic chain runs a smart contract that calls an external AI API. Ritual runs a smart contract that thinks, sees, hears, and acts without leaving the execution context.
NEAR, Internet Computer

Legacy chain rebrand

Blockchains like NEAR and Internet Computer have rebranded their existing sovereign L1 theses to focus on AI capabilities. NEAR has shifted from being a smart contract platform to "The Blockchain for AI", while Internet Computer (ICP) has evolved from a distributed computing platform to emphasizing AI model hosting and inference capabilities.

NEAR and ICP rebranded for AI. Rebranding doesn't change the architecture. A chain built for smart contract execution can add an AI inference endpoint. It can't add emancipation (DKMS), immortality (Scheduler + Persistent Agent revival), teleportability (soul/memory/DA/CID), or computational sovereignty (enshrined LLM/ONNX in TEE) without rebuilding the consensus and execution layer. The 7 agent desiderata are architectural commitments baked into the chain from genesis. Not features bolted onto a general-purpose L1. The desiderata are the chain.