Four pieces. They compose. You adopt the ones you need.
What NexArt is
NexArt is verifiable execution infrastructure for AI systems. It produces tamper-evident records of what your AI actually ran.
A lightweight integration captures inputs, outputs, and execution context at runtime and seals them into a single record locally.
An independent attestation node certifies each record, adding a third-party signature so that no single party controls the proof. A certified record can then be verified by anyone without trusting the operator.
verify.nexart.io is a public verification surface. Anyone can check a record there with no account, no login, and no dependency on NexArt.
A Project Bundle groups multiple execution steps into one verifiable artifact, covering entire workflows or agent sessions.
NexArt proves what executed. It does not prove that the output is correct or that the model behaved well.
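To make "seals them into a single record" concrete, here is a minimal sketch of the idea using a plain SHA-256 commitment. The record shape and field names are assumptions for illustration, not NexArt's actual schema.

```ts
import { createHash } from "node:crypto";

// Illustrative record shape; these field names are assumptions,
// not the actual NexArt schema.
interface ExecutionRecord {
  model: string;
  input: string;
  output: string;
  timestamp: string;
  certificateHash: string; // commits to every other field
}

function sealRecord(
  data: Omit<ExecutionRecord, "certificateHash">
): ExecutionRecord {
  // Hash a serialization of the execution data. A real implementation
  // needs a canonical serialization so verification is deterministic.
  const certificateHash = createHash("sha256")
    .update(JSON.stringify(data))
    .digest("hex");
  return { ...data, certificateHash };
}
```

Verification recomputes the hash from the stored fields and compares it to certificateHash; attestation then adds an independent signature over that hash.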
NexArt ships four products. They compose: each one adds value on its own, and they work together when combined. Start with the certification engine; add context, workflow helpers, and field verification as your needs grow.
Without these pieces, every AI execution in your stack is a transient event. If a decision is questioned later, logs are not proof. Most systems rely on internal records that cannot be verified externally. The record you need does not exist until you create it.
@nexart/ai-execution: The certification engine
What it is: The core SDK. Drop it into any Node.js codebase and start producing Certified Execution Records (CERs) in under five minutes. It handles sealing, verification, attestation, project bundles, and redaction.
What it unlocks:
Every AI call your product makes can produce a tamper-evident record.
Records verify offline, with no NexArt service dependency.
Optional attestation by a NexArt node adds an independent third-party signature.
When to add: Day one. This is the entry point.
Adoption shape: One function call per model inference. No middleware, no proxy, no infrastructure change.
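As a sketch of that adoption shape, assuming hypothetical certifyExecution and verifyRecord exports (illustrative names; the real API surface may differ, so check the package docs):

```ts
// Hypothetical usage; certifyExecution and verifyRecord are
// illustrative names, not confirmed exports of @nexart/ai-execution.
import { certifyExecution, verifyRecord } from "@nexart/ai-execution";

// Stand-in for your existing model call.
declare function callModel(question: string): Promise<string>;

async function answer(question: string): Promise<string> {
  const output = await callModel(question);

  // One function call per inference: no middleware, no proxy,
  // no infrastructure change.
  const record = await certifyExecution({
    input: question,
    output,
    context: { model: "gpt-4o", service: "support-bot" },
  });

  // Verification is local: recompute and check, no NexArt service needed.
  if (!(await verifyRecord(record))) {
    throw new Error("execution record failed verification");
  }
  return output;
}
```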
The context SDK
What it is: A small, framework-agnostic SDK that captures the evidence that fed a model run: the tools the model called, the documents it retrieved, the upstream decisions, and the inputs from your pipeline.
What it unlocks:
CERs go from "model said X" to "model said X, having seen exactly Y, Z, and W."
The upstream evidence is hashed into the same certificateHash, so the whole chain is one tamper-evident unit.
When to add: As soon as your product has tool use, retrieval, or any non-trivial upstream pipeline.
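A minimal sketch of the chaining idea: commit each piece of upstream evidence and the output under one hash, so tampering with any link changes the certificateHash. The Evidence type and sealWithContext helper are hypothetical illustrations, not the SDK's API.

```ts
import { createHash } from "node:crypto";

// Hypothetical types for illustration only.
interface Evidence {
  kind: "tool_call" | "retrieval" | "upstream_decision" | "pipeline_input";
  payload: unknown;
}

function sealWithContext(output: string, evidence: Evidence[]): string {
  const h = createHash("sha256");
  // Fold each piece of upstream evidence into the hash, in order...
  for (const item of evidence) {
    h.update(JSON.stringify(item));
  }
  // ...then the model output, so one certificateHash covers
  // "model said X, having seen exactly Y, Z, and W".
  h.update(output);
  return h.digest("hex");
}
```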
Workflow helpers
NexArt ships integration patterns for the most common ways teams build with AI today.
LangChain: every chain run becomes a verifiable record. Multi-step chains become project bundles.
n8n: make every LLM step in your business workflows defensible.
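The pattern behind these integrations can be sketched generically: wrap each step so every run emits a record. This reuses the hypothetical certifyExecution from above; it is not the actual LangChain or n8n hook.

```ts
// Generic wrapper sketch; not the actual LangChain/n8n integration code.
import { certifyExecution } from "@nexart/ai-execution"; // hypothetical export

type Step<I, O> = (input: I) => Promise<O>;

// Wrap any chain or workflow step so each run produces a record.
function certified<I, O>(name: string, step: Step<I, O>): Step<I, O> {
  return async (input: I) => {
    const output = await step(input);
    await certifyExecution({ input, output, context: { step: name } });
    return output;
  };
}
```

A multi-step chain would wrap each step this way and group the resulting records into one project bundle.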
How adoption typically works
Most teams start with @nexart/ai-execution and add the other pieces as their needs grow. See the adoption journey for a stage-by-stage guide with time-to-value estimates.
Most teams assume their existing instrumentation is sufficient until a decision is challenged. The gap between internal logs and externally verifiable proof exists even in well-instrumented systems.
As more decisions are automated, the window to instrument them narrows. Retroactively certifying past executions is not possible. The evidence starts when you start capturing it.