
Employ pyperf and separate environments to compare performance #294

Merged

jaraco merged 1 commit into main from feature/perf-diff on Mar 28, 2021

Conversation

@jaraco
Member

@jaraco jaraco commented Mar 28, 2021

Ref #292.

@jaraco
Member Author

jaraco commented Mar 28, 2021

Annoyingly, even with no code changes, pyperf is reporting a statistically significant performance difference:

perf-ref run-test: commands[5] | python -m pyperf compare_to --verbose main.json local.json --table
+-----------------------+---------+-----------------------+
| Benchmark             | main    | local                 |
+=======================+=========+=======================+
| entry_points()        | 7.13 ms | 23.6 ms: 3.30x slower |
+-----------------------+---------+-----------------------+
| uncached distribution | 806 us  | 1.33 ms: 1.65x slower |
+-----------------------+---------+-----------------------+
| Geometric mean        | (ref)   | 1.53x slower          |
+-----------------------+---------+-----------------------+

Benchmark hidden because not significant (2): discovery, cached distribution

I noticed the same thing on my local machine:

perf-ref run-test: commands[5] | python -m pyperf compare_to --verbose main.json local.json --table
+-----------------------+---------+-----------------------+
| Benchmark             | main    | local                 |
+=======================+=========+=======================+
| entry_points()        | 6.11 ms | 19.1 ms: 3.13x slower |
+-----------------------+---------+-----------------------+
| cached distribution   | 101 us  | 99.6 us: 1.02x faster |
+-----------------------+---------+-----------------------+
| uncached distribution | 891 us  | 1.29 ms: 1.45x slower |
+-----------------------+---------+-----------------------+
| Geometric mean        | (ref)   | 1.45x slower          |
+-----------------------+---------+-----------------------+

Benchmark hidden because not significant (1): discovery

And since the two environments are essentially identical, I'm really surprised.

Perhaps there's some way to tune the system.

As it is, this technique is barely better than the current system, and in some ways worse, because it presents untrustworthy summaries.
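For context on what "significant" means here: pyperf's verdict is based on a two-sample Student's t-test over the benchmark samples. The sketch below is a rough stdlib approximation of that kind of check, not pyperf's actual implementation; the sample values and the ~2.0 critical threshold are illustrative assumptions.

```python
import math
import statistics

def is_significant(a, b, t_threshold=2.0):
    """Rough two-sample t-test: True when the difference in means between
    samples a and b is unlikely to be noise. The threshold ~2.0 approximates
    the 95% critical value for reasonably sized samples (an assumption here;
    pyperf's real check looks up exact critical values)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    # Standard error of the difference in means
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    if se == 0:
        return mean_a != mean_b
    t = abs(mean_a - mean_b) / se
    return t > t_threshold

# Hypothetical timing samples (ms): a baseline run vs. a much slower run,
# and the same baseline vs. itself with a tiny constant shift.
baseline = [7.10, 7.15, 7.12, 7.18, 7.11, 7.14]
local = [23.5, 23.7, 23.6, 23.8, 23.4, 23.6]

print(is_significant(baseline, local))                       # → True
print(is_significant(baseline, [x + 0.001 for x in baseline]))  # → False
```

The takeaway: with very low variance within each run, even a small systematic offset between the two environments (CPU frequency scaling, cache state, ASLR) can cross the significance threshold, which may explain flagged differences between nominally identical environments.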
