HaoZeke/asv-perch
Table of Contents

  1. About
  2. Quick Start (Two-Way)
  3. Quick Start (Multi-Way)
  4. Quick Start (Full Pipeline – Single Job)
  5. Essential Inputs
  6. Why This Action
    1. Outputs
  7. Development
  8. License

About


A GitHub Action that posts ASV benchmark comparison results as PR comments with Mann-Whitney U statistical significance testing, rich GFM formatting, and multi-way comparison support. Built on asv-spyglass.

The action can run benchmarks and post results in one step, or work as a pure presentation layer with pre-existing result files. Either way, it never manages your build environment – use conda, pixi, virtualenv, nix, Docker, GPU runners, or whatever you need.

Quick Start (Two-Way)

```yaml
- uses: HaoZeke/asv-perch@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    results-path: results/
    metadata-file: results/metadata.txt
```

Quick Start (Multi-Way)

```yaml
- uses: HaoZeke/asv-perch@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    results-path: results/
    comparison-mode: compare-many
    baseline-sha: ${{ env.BASELINE_SHA }}
    contender-shas: '${{ env.OPT_SHA }}, ${{ env.DEBUG_SHA }}'
    contender-labels: 'optimized, debug'
```

Quick Start (Full Pipeline – Single Job)

Run benchmarks and compare in one step. The action handles git checkout (preserve-paths), environment activation (run-prefix or setup), and the ASV invocation automatically.

```yaml
- uses: prefix-dev/setup-pixi@v0.8.10
  with:
    activate-environment: true
- uses: HaoZeke/asv-perch@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    results-path: .asv/results/
    init-command: pixi run bash -c "pip install asv && asv machine --yes"
    preserve-paths: benchmarks/, asv.conf.json
    benchmark-command: >-
      asv run -E "existing:$(which python)"
      --set-commit-hash {sha} --record-samples --quick
    baseline: |
      label: main
      sha: ${{ github.event.pull_request.base.sha }}
      setup: >-
        pixi run bash -c "meson setup bbdir
        --prefix=$CONDA_PREFIX --libdir=lib
        --buildtype release --wipe 2>/dev/null
        || meson setup bbdir --prefix=$CONDA_PREFIX
        --libdir=lib --buildtype release" &&
        pixi run meson install -C bbdir
      run-prefix: pixi run
    contenders: |
      - label: pr
        sha: ${{ github.event.pull_request.head.sha }}
        setup: >-
          pixi run bash -c "meson setup bbdir
          --prefix=$CONDA_PREFIX --libdir=lib
          --buildtype release --wipe 2>/dev/null
          || meson setup bbdir --prefix=$CONDA_PREFIX
          --libdir=lib --buildtype release" &&
          pixi run meson install -C bbdir
        run-prefix: pixi run
    label-before: main
    label-after: pr
```

For pure Python projects with run-prefix only (no build step):

```yaml
- uses: HaoZeke/asv-perch@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    results-path: .asv/results/
    baseline: |
      label: main
      sha: ${{ env.BASE_SHA }}
      run-prefix: pixi run -e bench
    contenders: |
      - label: pr
        sha: ${{ env.PR_SHA }}
        run-prefix: pixi run -e bench
```

Essential Inputs

| Input | Required | Default | Description |
| --- | --- | --- | --- |
| `github-token` | yes | `${{ github.token }}` | GitHub token for API access |
| `results-path` | conditional | -- | Path to ASV results dir (not needed with `comparison-text-file`) |
| `comparison-text-file` | no | -- | Pre-computed comparison output (skips asv-spyglass) |
| `comparison-mode` | no | `compare` | `compare` (two-way) or `compare-many` (multi-way) |
| `base-sha` / `pr-sha` | conditional | -- | SHAs for `compare` mode |
| `base-file` / `pr-file` | conditional | -- | Direct file paths for `compare` mode |
| `baseline-sha` | conditional | -- | SHA for the `compare-many` baseline |
| `contender-shas` | conditional | -- | Comma-separated SHAs for `compare-many` contenders |
| `baseline-file` | conditional | -- | Direct path to the baseline result JSON |
| `contender-files` | conditional | -- | Comma-separated direct paths to contender JSONs |
| `contender-labels` | no | -- | Comma-separated labels for contenders |
| `baseline` | no | -- | YAML config for the baseline (label, sha, run-prefix/setup) |
| `contenders` | no | -- | YAML list of contenders (label, sha, run-prefix/setup) |
| `benchmark-command` | no | `asv run --record-samples {sha}^!` | Shell command template; `{sha}` is replaced in all fields |
| `init-command` | no | -- | One-time setup before benchmarks (e.g. `asv machine --yes`) |
| `preserve-paths` | no | -- | Paths to preserve across git checkouts (e.g. `benchmarks/, asv.conf.json`) |
| `asv-spyglass-args` | no | -- | Extra flags for the asv-spyglass CLI |
| `regression-threshold` | no | `10` | Ratio at which a regression counts as critical |
| `auto-draft-on-regression` | no | `false` | Convert the PR to a draft on regression |
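The direct-file inputs let the action run as a pure presentation layer over results computed elsewhere. A minimal sketch of that mode (the JSON filenames below are hypothetical placeholders for files produced by an earlier benchmarking job):

```yaml
- uses: HaoZeke/asv-perch@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    # Hypothetical paths; point these at result files from a prior job
    base-file: .asv/results/runner/base-result.json
    pr-file: .asv/results/runner/pr-result.json
```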

See full documentation for all inputs, outputs, and configuration details.

Why This Action

  • Statistical rigor: Mann-Whitney U test + 99% confidence intervals via ASV, not naive ratio comparison
  • Environment freedom: The action never touches your build system. Run ASV in conda, pixi, nix, Docker, GPU runners – whatever you need
  • Multi-way comparison: Compare a baseline against multiple build configs or environments in a single table
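To make the "statistical rigor" point concrete, here is a hand-rolled sketch of a two-sided Mann-Whitney U test in TypeScript (the repo's implementation language). This is illustrative only: the action itself delegates statistics to ASV/asv-spyglass, and this version uses the normal approximation with no tie correction.

```typescript
// 1-based ranks over a combined sample, averaging ranks across ties.
function avgRanks(values: number[]): number[] {
  const order = values.map((_, i) => i).sort((a, b) => values[a] - values[b]);
  const ranks = new Array<number>(values.length).fill(0);
  let i = 0;
  while (i < values.length) {
    let j = i;
    while (j + 1 < values.length && values[order[j + 1]] === values[order[i]]) j++;
    const avg = (i + j) / 2 + 1; // mean of 1-based positions i..j
    for (let k = i; k <= j; k++) ranks[order[k]] = avg;
    i = j + 1;
  }
  return ranks;
}

// Standard normal CDF via an erf polynomial (Abramowitz & Stegun 7.1.26).
function phi(z: number): number {
  const t = 1 / (1 + (0.3275911 * Math.abs(z)) / Math.SQRT2);
  const erf =
    1 -
    t *
      (0.254829592 +
        t *
          (-0.284496736 +
            t * (1.421413741 + t * (-1.453152027 + t * 1.061405429)))) *
      Math.exp(-(z * z) / 2);
  return z < 0 ? (1 - erf) / 2 : (1 + erf) / 2;
}

// Two-sided Mann-Whitney U test: are samples a and b drawn from the
// same distribution? Returns the U statistic and an approximate p-value.
function mannWhitneyU(a: number[], b: number[]): { u: number; p: number } {
  const n1 = a.length;
  const n2 = b.length;
  const ranks = avgRanks([...a, ...b]);
  const r1 = ranks.slice(0, n1).reduce((s, r) => s + r, 0);
  const u1 = r1 - (n1 * (n1 + 1)) / 2;
  const u = Math.min(u1, n1 * n2 - u1);
  const mu = (n1 * n2) / 2;
  const sigma = Math.sqrt((n1 * n2 * (n1 + n2 + 1)) / 12);
  const z = (u - mu) / sigma; // z <= 0 since u is the smaller statistic
  return { u, p: Math.min(2 * phi(z), 1) };
}
```

A ratio-only comparison would flag any mean shift; the rank test instead asks whether the two timing samples plausibly come from one distribution, which is far more robust to benchmark noise and outliers.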

See the full comparison with CodSpeed, benchmark-action, and inline scripts.

Outputs

| Output | Description |
| --- | --- |
| `comparison` | Raw asv-spyglass comparison output |
| `regression-detected` | `'true'` or `'false'` |
| `comment-id` | ID of the created/updated comment |
| `pr-number` | Number of the associated PR |
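These outputs can gate later workflow steps. For example, a sketch that fails the job whenever a regression was flagged (the step id `perch` is an arbitrary choice; the `if:` condition uses standard GitHub Actions step-output syntax):

```yaml
- id: perch
  uses: HaoZeke/asv-perch@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    results-path: results/
- name: Fail on regression
  if: steps.perch.outputs.regression-detected == 'true'
  run: exit 1
```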

Development

Built with bun and TypeScript.

```shell
bun install
bun run build      # tsc + vite
bun run test       # vitest
bun run lint       # eslint
bun run typecheck  # tsc --noEmit
```

License

MIT
