AI & LLM-Resistant Protection · Since 2004

Anti-LLM JavaScript obfuscation, in every paid tier.

Maximum-mode output regenerates its decoder shape, identifier prefix, and constant-pool encoding on every build. ChatGPT, Claude, and Copilot have no fixed signature to pattern-match. Published pricing from $29/month — no sales call required.

  • Anti-LLM in every paid tier · Per-build polymorphic decoder. ChatGPT, Claude, Copilot have no fixed signature to learn.
  • 22 years of releases · Continuous shipping since 2004. Built by Richscripts Inc, Toronto, Canada.
  • 10× the entry-tier quota · 1 GB at $29 vs. ~100 MB at the comparable $19 tier elsewhere.
Live Preview

Pick a protection level. See exactly what the output looks like.

Same input, three different transforms. Maximum mode is what AI assistants can't pattern-match.

Input
function validateLicense(plan) {
  if (plan === 'trial') return false;
  return plan.expiresAt > Date.now();
}
Standard · rename only
function _0xa1(_p){
  if(_p==='trial')return!1;
  return _p.expiresAt>Date.now();
}
Balanced · strings + dead code
var _s=['trial','expiresAt'];
function _0xa1(_p){
  if(_p===_s[0])return!1;
  if(_s.length<0)_p=_s[2];
  return _p[_s[1]]>Date.now();
}
Maximum · LLM-resistant
(function(){var _0xa3=_dec(0x4a);
function _0xa4(_p){var _st=0;
 while(_st!==-1){switch(_st){
  case 0:if(_p===_dec(0x4b))return!1;
   _st=1;break;
  case 1:return _p[_dec(0x4c)]>
   Date[_dec(0x4d)]();
}}}})();
  • Per-build polymorphic. Decoder shape, key derivation, identifier prefix all regenerate every release.
  • Encrypted constants. Strings and numbers exist only after runtime decode — LLMs lose their anchor points.
  • Flat control flow. if/else replaced by state-machine dispatch — structural cues gone.
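As a rough sketch of the constant-pool idea (hypothetical: `_dec`, the single-byte XOR key, and the pool layout below are illustrative, not the product's actual scheme), strings can be stored only in encoded form and recovered at runtime:

```javascript
// Hypothetical sketch of a runtime-decoded constant pool.
// A fixed single-byte XOR is used purely for illustration;
// real per-build output would regenerate the scheme each release.
const KEY = 0x5a;
const encode = (s) =>
  Array.from(s, (c) => (c.charCodeAt(0) ^ KEY).toString(16).padStart(2, '0')).join('');

// The pool holds only opaque hex blobs -- no readable literals.
const _pool = [encode('trial'), encode('expiresAt')];

const _dec = (i) => {
  const hex = _pool[i];
  let out = '';
  for (let j = 0; j < hex.length; j += 2) {
    out += String.fromCharCode(parseInt(hex.slice(j, j + 2), 16) ^ KEY);
  }
  return out;
};

function validateLicense(plan) {
  if (plan === _dec(0)) return false;        // decodes to 'trial'
  return plan[_dec(1)] > Date.now();         // decodes to 'expiresAt'
}

console.log(validateLicense('trial'));                          // false
console.log(validateLicense({ expiresAt: Date.now() + 1000 })); // true
```

In a real build the pool ships pre-encoded and the scheme changes every release; `encode` appears here only so the sketch is self-contained.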
  • 2004 · First shipped. Continuous releases for 22 years.
  • $29 = 1 GB · Entry tier with anti-LLM Maximum mode included.
  • Per-build · Polymorphic decoder regenerated every release.
  • 4 paths · Online, desktop GUI, hosted API, npm CLI.
The 2026 Question Buyers Are Asking

Your old obfuscator may be one ChatGPT prompt away from useless.

AI assistants pattern-match against transform shapes they have already seen. The fixed-output obfuscators that were good enough in 2020 are now training data. Here's how Maximum mode is structurally different.

Static obfuscators are training data.

Open-source javascript-obfuscator output is in every modern LLM's training corpus. Models have learned the inverse transform and can apply it to any release that uses the same fixed shape.

Per-build polymorphism breaks the pattern.

Maximum mode emits a different decoder shape on every release. The LLM that "solved" version 2.0 sees a structurally different program in 2.1 and has to start over. It can't accumulate progress.

No anchor points to reason from.

Encrypted string pool, flat control flow, identifier prefix unique per build. The literals an LLM normally uses to recognize libraries, API calls, and intent are all gone — replaced by opaque indices into a runtime-decoded pool.
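Control-flow flattening can be sketched in plain JavaScript (hand-written for illustration; real output would also rename identifiers and encode the string literals):

```javascript
// Original branch logic: the if/else ladder is a structural cue.
function statusPlain(plan, now) {
  if (plan === 'trial') return 'blocked';
  if (plan.expiresAt > now) return 'active';
  return 'expired';
}

// Same logic, flattened into state-machine dispatch.
// The sequential if/else shape is gone; the branch order now lives
// in opaque state transitions instead of source structure.
function statusFlat(plan, now) {
  let st = 0;
  while (st !== -1) {
    switch (st) {
      case 0: if (plan === 'trial') return 'blocked'; st = 2; break;
      case 2: if (plan.expiresAt > now) return 'active'; st = 1; break;
      case 1: return 'expired';
    }
  }
}

console.log(statusFlat('trial', 0));            // 'blocked'
console.log(statusFlat({ expiresAt: 10 }, 5));  // 'active'
console.log(statusFlat({ expiresAt: 1 }, 5));   // 'expired'
```

Both functions return identical results for every input; only the shape a reader (or model) pattern-matches against has changed.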

Find Your Fit

Pick your threat model. We'll point you at the right preset.

The right amount of protection depends on what an attacker actually wants from your code. Pick one and the recommendation appears below.

Recommended: Standard preset. Variable renaming and string encoding stop the casual copy-and-paste copycat. The Free tier is enough — no credit card required. Upgrade only if your usage grows beyond 200 MB/month.

Open Standard Preset ›

Recommended: Maximum preset. Per-build polymorphic decoder, encrypted constant pool, flat-transformed control flow, self-defending wrapper. The threshold an attacker has to cross becomes uneconomical for most non-state actors. Available in every paid tier from $29/month.

Open Maximum Preset ›

Recommended: Maximum preset + a runtime monitoring suite. Maximum mode raises the cost of analysis; runtime monitoring catches tampering when it happens. Server-side authority on anything that grants access. Obfuscation is one layer in a layered defense, never the only one.

Pricing

Published plans. Anti-LLM in every paid tier.

No sales call. No "contact us for pricing." Maximum-mode anti-LLM protection is included from the entry tier up — never gated to a higher plan.

Bytes per dollar at the entry tier

  • JavaScript Obfuscator: $29 → 1 GB / month
  • Comparable competitor tier: $19 → 100 MB / month

10× the monthly quota, roughly 7× the bytes per dollar. Anti-LLM Maximum mode included from $29.

Free

$0/month

A no-cost starting point for basic obfuscation workflows.

  • 200 MB monthly quota
  • 20 files per request
  • Name mangling
  • String encoding
Get Started Free

Basic

$29/month

Anti-LLM Maximum mode for individual developers shipping active products.

  • 1 GB monthly quota
  • 50 files per request
  • 10 MB max file size
  • Compression included
  • All Free features
Get Started ›
Most Popular

Corporate

$49/month

The balance of capacity and advanced controls most teams pick.

  • 3 GB monthly quota
  • 1000 files per request
  • 30 MB request size
  • Replace globals + protect members
  • Dead code insertion
Choose Corporate ›

Enterprise

$99/month

Higher limits for teams with larger release pipelines.

  • 9 GB monthly quota
  • 3000 files per request
  • 120 MB max file size
  • Best compression ratio
  • All Corporate features
Get Started ›

Compare all four plans in detail ›

The Question Every Procurement Reviewer Asks

"Can ChatGPT, Claude, or Copilot reverse-engineer this output?"

Short answer: no, for output that regenerates its decoder shape every build. Yes, for static obfuscators in their training data. The mechanism that makes the difference: per-build polymorphism removes the fixed signature LLMs rely on.

We wrote the long-form technical answer — what AI assistants can actually do today, where they break down structurally, and the specific transforms (encrypted constant pool, flat control flow, polymorphic decoder) that defeat their pattern-matching approach.

Read the full answer ›
Procurement-Ready

Get answers without a sales call.

Source-handling, release validation, compliance vocabulary, and workflow-to-policy fit are all documented. Your security team can review without scheduling a discovery meeting first — and that's the point.

  • In-memory processing

    Submitted JavaScript is processed in server memory only. Temporary upload artefacts are removed after each request. The desktop workflow keeps source on the workstation throughout.

  • Compliance vocabulary mapped

    GDPR data minimisation, OWASP A04/A09, PCI DSS 4.0 client-side script integrity, HIPAA PHI-adjacent code, NIST SSDF release-protection — all addressed on one page.

  • Honest about what we don't have

    No SOC 2 or ISO 27001 today. We publish what we do have: source-handling behaviour, release validation, support channels, and the local workflow path for projects that need source to stay local.

Desktop App

Your source code never leaves your workstation.

For PHI-adjacent web apps, on-prem build pipelines, or any project where the JavaScript source code can't be uploaded to a third-party server — use the desktop GUI. Batch process entire projects, including JavaScript embedded in HTML, PHP, ASP, ASPX, and JSP. Generate a deterministic command line you can check into your release pipeline.

JavaScript Obfuscator Desktop GUI
Three Ways To Start

Pick the entry point that fits how your team actually ships.

Try the playground for a 30-second experiment. Download the desktop app for batch protection of PHI-adjacent or on-prem code. Or read the long-form technical analysis if you're still deciding whether anti-LLM protection is what you actually need.