Find a replacement for pyperf #305

@jaraco

Description

In #294, I introduced pyperf, mainly for its ability to run timeit, record the results in a file, and then compare the results from another run. pyperf has some really nice features:

  • Performance benchmarks build on timeit, so anything one can run in an interpreter is fair game for evaluation.
  • Appends results to a file, allowing multiple independent tests to be run.
  • Ability to give each benchmark a meaningful name.
  • The reporting tool automatically excludes insignificant variance and highlights significant variance.

Unfortunately, it also has some drawbacks:

  • In psf/pyperf#106 ("Unreliable results for identical code"), I describe an issue where the measurements are jittery. I haven't had the time to investigate the issue, but given that the raw (minimum) timeit values were an effective measurement of peak performance, I'd like something that provides similar stability.
  • pyperf still requires orchestration (such as the two tox environments that need to be run in order). Ideally, one would be able to declare the tests in a list and some tooling would orchestrate the setup, execution, comparison, and reporting.
  • pyperf has no pytest integration. Ideally, the tests could be run through a pytest plugin and thus gain the benefits of selection or exclusion (-k perf or -k 'not perf') and other advantages of integration.
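The "raw (minimum) timeit" stability mentioned above can be sketched with only the standard library. This is a minimal illustration, not pyperf's implementation: the `peak_time` helper and the benchmarked expression are hypothetical names chosen for this example. The idea is that system noise only ever makes a run slower, so the minimum over several repeats is a comparatively stable estimate of peak performance.

```python
import timeit

def peak_time(stmt, setup="pass", repeat=5, number=100_000):
    """Return the best (minimum) time per loop, in seconds.

    timeit.repeat() runs the statement `repeat` times, `number`
    loops each; taking the minimum discards runs inflated by
    scheduler or GC noise, approximating peak performance.
    """
    runs = timeit.repeat(stmt, setup=setup, repeat=repeat, number=number)
    return min(runs) / number

# Example: time sorting a small, already-built list.
best = peak_time("sorted(data)", setup="data = list(range(100))")
print(f"best: {best * 1e9:.1f} ns/loop")
```

A framework matching the wish list would wrap something like this in a pytest plugin, persist the per-benchmark minima, and compare them across runs.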

From the above, you can infer my wish list for a performance testing framework for this and other Python projects.

Metadata


    Labels

    enhancement (New feature or request), help wanted (Extra attention is needed)
