AI & LLM-Resistant Protection · Since 2004
Anti-LLM JavaScript obfuscation in every paid tier.
Maximum-mode output regenerates its decoder shape, identifier prefix, and constant-pool encoding on every build. ChatGPT, Claude, and Copilot have no fixed signature to pattern-match. Published pricing from $29/month — no sales call required.
Anti-LLM in every paid tier
Per-build polymorphic decoder. ChatGPT, Claude, Copilot — no fixed signature to learn.
22 years of releases
Continuous shipping since 2004. Built by Richscripts Inc, Toronto, Canada.
10× the entry-tier quota
1 GB at $29 vs. ~100 MB at the comparable $19 tier elsewhere — about 7× the bytes per dollar.
2004
First shipped. Continuous releases for 22 years.
Heritage
$29 = 1 GB
Entry tier with anti-LLM Maximum mode included.
Pricing
Per-build
Polymorphic decoder regenerated every release.
Anti-LLM
4 paths
Online, desktop GUI, hosted API, npm CLI.
Workflow
The 2026 Question Buyers Are Asking
AI assistants pattern-match against transform shapes they have already seen. The fixed-output obfuscators that were good enough in 2020 are now training data. Here's how Maximum mode is structurally different.
Open-source javascript-obfuscator output is in every modern LLM's training corpus. Models have learned the inverse transform and can apply it to any release that uses the same fixed shape.
Maximum mode emits a different decoder shape on every release. The LLM that "solved" version 2.0 sees a structurally different program in 2.1 and has to start over. It can't accumulate progress.
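The per-build idea can be sketched in a few lines of plain JavaScript. This is a toy illustration, not the product's actual transform: each "build" draws a fresh XOR key and identifier prefix, so two builds of identical source share no byte-level signature while still decoding to the same strings.

```javascript
// Toy per-build string encoder: every build picks a new XOR key and a new
// identifier prefix, so the emitted decoder snippet differs each time.
function buildEncoder(seed) {
  const key = seed % 256;                      // per-build XOR key
  const prefix = '_0x' + seed.toString(16);    // per-build identifier prefix
  const encode = (s) =>
    [...s].map((c) => c.charCodeAt(0) ^ key);  // string -> opaque byte list
  // Emit decoder source whose shape depends on this build's key and prefix.
  const emitDecoder = (bytes) =>
    `(function(){var ${prefix}=[${bytes}];` +
    `return ${prefix}.map(b=>String.fromCharCode(b^${key})).join('');})()`;
  return { encode, emitDecoder };
}

// Two builds of the same source string:
const a = buildEncoder(0x4a21);
const b = buildEncoder(0x9f03);
const snippetA = a.emitDecoder(a.encode('fetch'));
const snippetB = b.emitDecoder(b.encode('fetch'));

console.log(snippetA !== snippetB);  // true — no shared signature
console.log(eval(snippetA));         // "fetch" — behavior unchanged
```

A pattern learned from `snippetA` tells an assistant nothing reusable about `snippetB`, which is the point: each release resets the analysis.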
Encrypted string pool, flat control flow, identifier prefix unique per build. The literals an LLM normally uses to recognize libraries, API calls, and intent are all gone — replaced by opaque indices into a runtime-decoded pool.
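A toy rendering of those two transforms together — illustrative only, far simpler than the real output: the readable function on top, and an equivalent whose literals live in a decoded pool and whose statements run through a dispatcher loop.

```javascript
// Readable original: intent is visible in the literals.
function greetPlain(name) {
  return 'Hello, ' + name + '!';
}

// Same behavior, toy "Maximum-style" shape: literals sit XOR-encoded in a
// pool and are referenced by opaque index; control flow is a state machine.
const _pool = [[105, 68, 77, 77, 78, 13, 1], [0]].map(
  (w) => w.map((b) => String.fromCharCode(b ^ 0x21)).join('')
);
function greetFlat(name) {
  let state = 0, out = '';
  while (state !== 3) {
    switch (state) {
      case 0: out = _pool[0];  state = 1; break; // index 0, not "Hello, "
      case 1: out += name;     state = 2; break;
      case 2: out += _pool[1]; state = 3; break; // index 1, not "!"
    }
  }
  return out;
}

console.log(greetFlat('Ada'));  // "Hello, Ada!"
```

Nothing in `greetFlat` names the strings or the statement order; both exist only after the pool decodes at runtime.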
Find Your Fit
The right amount of protection depends on what an attacker actually wants from your code. Pick one and the recommendation appears below.
Recommended: Standard preset. Variable renaming and string encoding stop the casual copy-and-paste copycat. The Free tier is enough — no credit card required. Upgrade only if your usage grows beyond 200 MB/month.
Open Standard Preset ›
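As a rough before/after of what the Standard preset's two transforms do to casual readability — a hand-written illustration, not actual tool output:

```javascript
// Before: names and the literal make the intent obvious at a glance.
function getApiKey(user) {
  return user.settings['apiKey'];
}

// After (Standard-style): name mangled, literal hidden behind a char-code
// decode. Behavior is identical; copy-and-paste skimming value is gone.
function _0x1(a) {
  return a.settings[String.fromCharCode(97, 112, 105, 75, 101, 121)];
}

const user = { settings: { apiKey: 'k-123' } };
console.log(_0x1(user) === getApiKey(user));  // true
```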
Recommended: Maximum preset. Per-build polymorphic decoder, encrypted constant pool, flat-transformed control flow, self-defending wrapper. The threshold an attacker has to cross becomes uneconomical for most non-state actors. Available in every paid tier from $29/month.
Open Maximum Preset ›
Recommended: Maximum preset + a runtime monitoring suite. Maximum mode raises the cost of analysis; runtime monitoring catches tampering when it happens. Server-side authority on anything that grants access. Obfuscation is one layer in a layered defense, never the only one.
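The "catch tampering when it happens" layer can start as small as a runtime integrity probe. A minimal sketch of the idea — not the product's monitoring suite: shipped output is minified onto one line, so the first step of manual analysis (running a beautifier) changes the function's own source text and trips the check.

```javascript
// Toy self-defending probe: the function inspects its own source via
// toString(). As shipped (single line, no newlines) it runs normally;
// a pretty-printed copy contains newlines and throws instead.
const guarded=function(){if(/\n/.test(guarded.toString()))throw new Error('tamper detected');return 'ok';};

console.log(guarded());  // "ok" as shipped; throws once pretty-printed
```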
Pricing
No sales call. No "contact us for pricing." Maximum-mode anti-LLM protection is included from the entry tier up — never gated behind a higher plan.
Monthly quota at the entry tier
10× the quota of the comparable $19 tier elsewhere — about 7× the bytes per dollar. Anti-LLM Maximum mode included from $29.
$0/month
A no-cost starting point for basic obfuscation workflows.
- 200 MB monthly quota
- 20 files per request
- Name mangling
- String encoding
Get Started Free
$29/month
Anti-LLM Maximum mode for individual developers shipping active products.
- 1 GB monthly quota
- 50 files per request
- 10 MB max file size
- Compression included
- All Free features
Get Started ›
Most Popular
$/month
The balance of capacity and advanced controls most teams pick.
- 3 GB monthly quota
- 1000 files per request
- 30 MB request size
- Replace globals + protect members
- Dead code insertion
Choose Corporate ›
$/month
Higher limits for teams with larger release pipelines.
- 9 GB monthly quota
- 3000 files per request
- 120 MB max file size
- Best compression ratio
- All Corporate features
Get Started ›
Compare all four plans in detail ›
The Question Every Procurement Reviewer Asks
"Can ChatGPT, Claude, or Copilot reverse-engineer this output?"
Short answer: no for output that regenerates its decoder shape every build; yes for static obfuscators whose fixed output is already in the training data. The mechanism that makes the difference is per-build polymorphism, which removes the fixed signature LLMs rely on.
We wrote the long-form technical answer — what AI assistants can actually do today, where they break down structurally, and the specific transforms (encrypted constant pool, flat control flow, polymorphic decoder) that defeat their pattern-matching approach.
Read the full answer ›
// What an LLM sees on a Maximum-mode build:
// (function(){var _0xa3=_dec(0x4a); ... })();
//
// What it CAN'T tell you:
// - what _dec(0x4a) decodes to
// - which case the state machine runs first
// - what the original variable names were
// - what the strings are
//
// All of that lives behind the decoder,
// which only runs at runtime.
Procurement-Ready
Source-handling, release validation, compliance vocabulary, and workflow-to-policy fit are all documented. Your security team can review without scheduling a discovery meeting first — and that's the point.
✓ Source handling — Submitted JavaScript is processed in server memory only; temporary upload artefacts are removed after each request, and the desktop workflow keeps source on the workstation throughout.
✓ Compliance vocabulary — GDPR data minimisation, OWASP A04/A09, PCI DSS 4.0 client-side script integrity, HIPAA PHI-adjacent code, and NIST SSDF release protection are all addressed on one page.
✓ Certification status — No SOC 2 or ISO 27001 today. We publish what we do have: source-handling behaviour, release validation, support channels, and the local workflow path for projects that need source to stay local.
Desktop App
For PHI-adjacent web apps, on-prem build pipelines, or any project where the JavaScript source code can't be uploaded to a third-party server — use the desktop GUI. Batch process entire projects, including JavaScript embedded in HTML, PHP, ASP, ASPX, and JSP. Generate a deterministic command line you can check into your release pipeline.
Three Ways To Start
Try the playground for a 30-second experiment. Download the desktop app for batch protection of PHI-adjacent or on-prem code. Or read the long-form technical analysis if you're still deciding whether anti-LLM protection is what you actually need.