In #294, I introduced pyperf, mainly for its ability to run timeit, record the results in a file, and then compare them against the results of another run. pyperf has some really nice features:
- Performance benchmarks build on timeit, so anything one can run in an interpreter is fair game for evaluation.
- Appends results to a file, allowing multiple independent tests to be run.
- Ability to give each benchmark a meaningful name.
- Reporting tool automatically excludes insignificant variance (and highlights significant variance).
Unfortunately, it also has some drawbacks:
- In psf/pyperf#106 ("Unreliable results for identical code"), I describe an issue where the measurements are jittery. I haven't had the time to investigate the issue, but given that the raw (minimum) timeit values were an effective measurement of peak performance, I'd like something that provides similar stability.
- pyperf still requires orchestration (such as the two tox environments that need to be run in order). Ideally, one would be able to declare the tests in a list and some tooling would orchestrate the setup, execution, comparison, and reporting.
- pyperf has no pytest integration. Ideally, the tests could be run through a pytest plugin and thus gain the benefits of selection or exclusion (`-k perf` or `-k 'not perf'`) and other advantages of integration.
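The stability point above can be illustrated with the standard library's timeit alone: taking the minimum across independent repetitions filters out scheduler and GC noise, which is why the raw minimum tracks peak performance more steadily than a mean (the statement and loop counts here are arbitrary):

```python
import timeit

# Five independent repetitions of the same statement; each result is the
# total time for 10_000 executions.
runs = timeit.repeat("sum(range(100))", repeat=5, number=10_000)
per_loop = [t / 10_000 for t in runs]

# The minimum is the least-interfered-with run, so it approximates peak
# performance; the mean is inflated by whatever noise hit the other runs.
best = min(per_loop)
print(f"best per-loop time: {best:.3e} s")
```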
From the above, you can infer my wish list for a performance testing framework for this and other Python projects.