Conversation
Greptile Summary

This PR refreshes benchmark data and README copy.

Confidence Score: 5/5. Safe to merge; only benchmark data and README copy are changed, with no logic or API surface affected.

All README ratios verify correctly against the updated JSON. The only notable finding is a high-variance ci-warm aube sample (P2), which the PR description already advises reviewers to inspect. No structural changes, no code path modifications.

Important Files Changed: `benchmarks/results.json` — the ci-warm aube stddev warrants a second look before merging.
Reviews (2): Last reviewed commit: "chore: refresh benchmarks"
Benchmark changes

Public ratios: warm installs vs Bun 5x -> 5x; warm installs vs pnpm 6x -> 7x.
f705a32 vs 8a60d6d | aube/bun/pnpm | 3 scenarios | 3 runs | 500mbit/50ms | generated by Codex.
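The "Nx" ratios above are published as rounded integer multiples. A hypothetical sketch of how one such ratio could be derived from raw timings — the sample values below are made up for illustration and do not come from `benchmarks/results.json`, whose schema is not shown in this PR:

```shell
# Made-up timings in milliseconds (NOT real benchmark data).
aube_ms=230
pnpm_ms=1610

# Round to the nearest integer multiple: add half the divisor
# before the integer division.
ratio=$(( (pnpm_ms + aube_ms / 2) / aube_ms ))
echo "warm installs vs pnpm: ${ratio}x"
```

With these sample values the sketch prints `warm installs vs pnpm: 7x`.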
🤖 Refreshed benchmarks
`benchmarks/results.json` was pinned to aube `1.5.2`; the workspace is now `1.5.2`. Re-ran `mise run bench:bump` on the hermetic Verdaccio registry (500mbit / 50ms per the mise task) and regenerated `benchmarks/results.json` plus the README `BENCH_RATIOS` block. The benchmark matrix pins aube's GVS mode via `npm_config_enable_global_virtual_store=true|false` (the auto-synthesized env alias for the `enableGlobalVirtualStore` setting), so GitHub Actions' inherited `CI=true` environment does not change whether aube runs with GVS enabled or disabled.

Benchmark changes
Public ratios: warm installs vs Bun 5x -> 6x; warm installs vs pnpm 6x -> 10x; repeat test vs Bun 3x -> 4x; repeat test vs pnpm 28x -> 35x.
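The env pinning described above can be sketched as follows. Only the `npm_config_enable_global_virtual_store` variable name comes from the PR text; the bench command is a stand-in for the real matrix invocation:

```shell
# Run the matrix once with GVS on and once with GVS off. The explicit
# env assignment wins over anything inherited from the runner (e.g.
# CI=true), so both modes are exercised deterministically.
for gvs in true false; do
  env CI=true npm_config_enable_global_virtual_store="$gvs" \
    sh -c 'echo "bench run with GVS=$npm_config_enable_global_virtual_store"'
done
```

Setting the variable per-invocation like this is what keeps the CI environment from silently flipping the GVS mode between runs.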
Review the numbers before merging — if anything looks wildly off vs. the previous release, investigate before landing. Hermetic proxy jitter or an npmjs uplink hiccup can occasionally skew results.
Once merged to main, the updated bench results flow into the next `release-plz-pr` run automatically.

Generated by the `bench-refresh` workflow.