Conversation
Greptile Summary
Refreshes benchmark data from aube 1.5.2 → 1.6.2: updates benchmarks/results.json and the README ratio figures.
Confidence Score: 5/5. Safe to merge — purely a data refresh with no logic changes and internally consistent numbers. Both changed files are data/documentation only. All integer values in results.json match their stats.mean rounded to ms, and the README ratios correctly reflect the new JSON figures. No code logic was modified. No files require special attention beyond confirming the ci-warm aube regression (+108%) is expected for v1.6.2.
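The "integer values match stats.mean rounded to ms" claim is easy to re-check locally. A minimal sketch, assuming `benchmarks/results.json` is an array of entries carrying a `stats.mean` in seconds and an integer `mean_ms` field (both field names are assumptions about the schema, not the actual format):

```ts
// Hypothetical verification sketch: confirm each integer millisecond value in
// benchmarks/results.json equals its stats.mean rounded to ms.
// `mean_ms` and `stats.mean` (seconds) are assumed field names.
import { readFileSync } from "node:fs";

interface Entry {
  name: string;
  mean_ms: number;          // assumed: published integer value in milliseconds
  stats: { mean: number };  // assumed: mean duration in seconds
}

const entries: Entry[] = JSON.parse(
  readFileSync("benchmarks/results.json", "utf8"),
);

for (const e of entries) {
  const expected = Math.round(e.stats.mean * 1000); // seconds -> ms
  if (e.mean_ms !== expected) {
    console.error(`${e.name}: mean_ms=${e.mean_ms}, expected ${expected}`);
    process.exitCode = 1;
  }
}
```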
Reviews (1): Last reviewed commit: "chore: refresh benchmarks"
Benchmark changes
Versions:
Public ratios: warm installs vs Bun 6x -> 7x; warm installs vs pnpm 10x -> 11x.
da33e91 vs e7b8989 | aube/bun/pnpm | 3 scenarios | 3 runs | 500mbit/50ms | generated by Codex.
🤖 Refreshed benchmarks
`benchmarks/results.json` was pinned to aube `1.5.2`; the workspace is now `1.6.2`. Re-ran `mise run bench:bump` on the hermetic Verdaccio registry (500mbit / 50ms per the mise task) and regenerated `benchmarks/results.json` plus the README `BENCH_RATIOS` block. The benchmark matrix pins aube's GVS mode via `npm_config_enable_global_virtual_store=true|false` (the auto-synthesized env alias for the `enableGlobalVirtualStore` setting), so GitHub Actions' inherited `CI=true` environment does not change whether aube runs with GVS enabled or disabled.
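To illustrate how that pinning works, a minimal sketch of a harness loop that forces the env alias for each run regardless of what the CI environment exports; only the variable name comes from the paragraph above, the `aube install` invocation and loop structure are assumptions:

```ts
// Illustrative sketch only: pin aube's GVS mode per benchmark run via the
// npm_config_enable_global_virtual_store env alias, so an inherited CI=true
// cannot flip the enableGlobalVirtualStore setting mid-matrix.
import { spawnSync } from "node:child_process";

for (const gvs of ["true", "false"] as const) {
  const result = spawnSync("aube", ["install"], {
    stdio: "inherit",
    env: {
      ...process.env,
      // explicit pin wins over anything aube would otherwise derive from CI
      npm_config_enable_global_virtual_store: gvs,
    },
  });
  if (result.status !== 0) {
    throw new Error(`aube install failed with GVS=${gvs}`);
  }
}
```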
Benchmark changes
Versions:
Public ratios: warm installs vs Bun 6x -> 4x; warm installs vs pnpm 10x -> 9x; repeat test vs Bun 4x -> 5x; repeat test vs pnpm 35x -> 33x.
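Each public ratio compares a competitor's time to aube's time for the same scenario. A minimal sketch of that arithmetic, assuming a simple rounded quotient and using placeholder numbers rather than the figures from this refresh:

```ts
// Sketch of how an "Nx" ratio could be derived: competitor mean / aube mean,
// rounded to a whole multiple. The rounding rule and the numbers below are
// placeholders, not the method or data used by the bench tooling.
function ratio(competitorMs: number, aubeMs: number): string {
  return `${Math.round(competitorMs / aubeMs)}x`;
}

// Placeholder example: a warm install at 4200 ms under a competitor and
// 1050 ms under aube would be published as "4x".
console.log(ratio(4200, 1050)); // "4x"
```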
Review the numbers before merging — if anything looks wildly off vs. the previous release, investigate before landing. Hermetic proxy jitter or an npmjs uplink hiccup can occasionally skew results.
Once merged to main, the updated bench results flow into the next `release-plz-pr` run automatically.
Generated by the `bench-refresh` workflow.