# Benchmarks
The numbers below are measured end-to-end against the same medium-sized fixture (roughly 1,400 real-world packages), using hyperfine to time each scenario under identical conditions.
## Methodology
Every scenario assumes a committed lockfile is present. The main axis is how warm the tool's cache/store is before the command runs; the CI rows also disable aube's global virtual store for CI parity. Warm clears `node_modules` but keeps each tool's store/cache populated; cold wipes the store and cache too. The fixture, scripts, and raw hyperfine output live at `benchmarks/`. Reproduce locally with `mise run bench`.
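The warm/cold distinction can be sketched as a pair of reset steps between timed runs. This is an illustrative model only: the store and cache paths below are placeholders, not aube's real on-disk locations.

```shell
#!/usr/bin/env sh
# Illustrative sketch of the warm/cold reset between timed runs.
# The store/ and cache/ paths are placeholders, not aube's real layout.
set -eu
fixture=$(mktemp -d)
mkdir -p "$fixture/node_modules" "$fixture/store" "$fixture/cache"

# Warm: only node_modules is removed; the store and cache stay populated.
rm -rf "$fixture/node_modules"
[ -d "$fixture/store" ] && echo "warm run: store kept"

# Cold: the store and cache are wiped as well.
rm -rf "$fixture/store" "$fixture/cache"
[ -d "$fixture/store" ] || echo "cold run: store wiped"
```

The benchmark harness performs the equivalent reset in its `--prepare` step before each timed iteration, so the measurement itself never includes the cleanup.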
The chart highlights the warm-cache, no-`node_modules` install case. The separate CI rows disable aube's global virtual store to match CI defaults.

Last updated: 2026-05-07 · Source: `benchmarks/results.json`, regenerated by `mise run bench:bump`.
## Why it's faster
aube uses the same on-disk model as pnpm — a global content-addressable store plus an isolated symlink layout — but the install pipeline is written in a faster, natively threaded language instead of JavaScript. Same layout, quicker engine.
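The shared layout can be modeled in a few lines: each package lives once in a content-addressed store and projects reach it through symlinks rather than copies. The paths and the hash directory name below are invented for illustration and do not reflect aube's real store layout.

```shell
#!/usr/bin/env sh
# Toy model of the pnpm-style layout: one copy in a content-addressable
# store, symlinked into the project. All paths here are illustrative.
set -eu
root=$(mktemp -d)

# 1. "Import" a package into the global store under its content hash.
hashdir="$root/store/sha512-abc123/node_modules/is-odd"
mkdir -p "$hashdir"
printf '%s\n' '{"name":"is-odd"}' > "$hashdir/package.json"

# 2. Expose it to the project via a symlink instead of a copy.
mkdir -p "$root/project/node_modules"
ln -s "$hashdir" "$root/project/node_modules/is-odd"

# The project resolves the package through the link as if it were local.
cat "$root/project/node_modules/is-odd/package.json"
```

Because the store is content-addressed, a second project depending on the same package version links to the same store entry instead of extracting the tarball again.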
## Scenarios
- Fresh install (warm cache) — frozen lockfile, `node_modules` wiped, store and packument cache warm.
- Fresh install (cold cache) — frozen lockfile but the store, packument cache, and `node_modules` are all wiped. Measures registry fetch, store import, and link/materialization work from scratch.
- CI install (warm cache, GVS disabled) — frozen lockfile, `node_modules` wiped, store and packument cache warm. Disables aube's global virtual store to match CI defaults.
- CI install (cold cache, GVS disabled) — frozen lockfile but the store, packument cache, and `node_modules` are all wiped. Worst-case CI: every tarball is downloaded, extracted, and hashed into the CAS without aube's global virtual store path.
- `npm install && npm run test` (already installed) — models the developer loop after dependencies are already installed. Each timed run repeats the tool's normal "install if needed, then run tests" command. aube can skip the install work when its install-state file is fresh; other tools still revalidate their lockfile or install state before dispatching the script. The fixture's `test` script is a no-op `node -e`, so this scenario mostly measures install short-circuiting and script dispatch.
- Add dependency — lockfile and store both warm, then `<pm> add is-odd` to exercise the incremental resolve path.
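The install short-circuit in the "already installed" scenario boils down to a freshness check between the lockfile and an install-state file. A minimal sketch of that check — both file names below are invented for illustration, not aube's real state file:

```shell
#!/usr/bin/env sh
# Sketch of the install short-circuit: skip the install when the
# install-state file is at least as new as the lockfile.
# "aube.lock" and ".install-state" are invented names for illustration.
set -eu
dir=$(mktemp -d)
mkdir -p "$dir/node_modules"
touch "$dir/aube.lock"
touch "$dir/node_modules/.install-state"   # written after the lockfile

# A lockfile newer than the state file would force a real install.
if [ "$dir/aube.lock" -nt "$dir/node_modules/.install-state" ]; then
  echo "state stale: run install first"
else
  echo "state fresh: skip install, dispatch script"
fi
```

When the check passes, the remaining cost is just spawning the `test` script, which is why this scenario mostly measures dispatch overhead.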
## Reproducing
```shell
# One-shot: runs the package managers through the full scenario matrix,
# writes results to benchmarks/results.json, and emits the markdown
# summary to stdout.
mise run bench:bump
```

The task calls `benchmarks/bench.sh` through `mise x`, installs the benchmark toolchain declared on the task, sets `RUNS=10 WARMUP=3`, and points `RESULTS_JSON` at `benchmarks/results.json` so the file is rewritten in place. Commit the updated JSON to refresh the chart above: the VitePress data loader at `docs/benchmarks.data.ts` reads it at build time, so you don't need to edit `benchmarks.md` or hand-write any chart data.
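To spot-check a regenerated file without rebuilding the docs, you can lean on the shape of hyperfine's JSON export: a `results` array whose entries carry `command` and `mean` fields. A sketch using an inline sample in place of the real `benchmarks/results.json`:

```shell
#!/usr/bin/env sh
# Sketch: pulling mean runtimes out of a hyperfine-shaped results file.
# The inline sample mimics hyperfine's JSON export schema; the numbers
# are made up and stand in for benchmarks/results.json.
set -eu
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{"results":[
  {"command":"aube install","mean":1.23},
  {"command":"pnpm install","mean":4.56}
]}
EOF
# Crude extraction with grep; the VitePress data loader parses real JSON.
grep -o '"mean":[0-9.]*' "$tmp"
```

A real consumer should use a JSON parser — `grep` here is only a quick sanity check that the expected fields survived the rewrite.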
Benchmark runs use a hermetic local registry with the link shaped to a fixed 500 Mbit/s, 50 ms baseline (a "fast home broadband" profile), so two runs on different ISPs or CI runners produce comparable numbers:
```shell
flock /tmp/aube-bench.lock mise run bench
```