Allow larger margin of error for GPU perf test runtime#6044

Merged
ezyang merged 1 commit into pytorch:master from yf225:gpu_perf_test
Mar 27, 2018
Conversation

Contributor

@yf225 yf225 commented Mar 27, 2018

The current margin of error for the GPU perf test runtime is too small (only ±2%), which has caused some of the tests to fail, e.g. https://ci.pytorch.org/jenkins/job/pytorch-builds/job/short-perf-test-gpu/1492/console. This PR doubles the margin of error to ±4%.
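The check this PR loosens can be sketched as follows. This is a hypothetical illustration of the tolerance logic, not the actual CI script; the function name and baseline numbers are made up.

```python
# Hypothetical sketch of a perf-test tolerance check.
# Names and baseline values are illustrative only.

def runtime_within_margin(measured_secs, baseline_secs, margin=0.04):
    """Return True if the measured runtime stays within `margin`
    (as a fraction) of the recorded baseline, e.g. 0.04 for ±4%."""
    lower = baseline_secs * (1.0 - margin)
    upper = baseline_secs * (1.0 + margin)
    return lower <= measured_secs <= upper

# A 3% slowdown fails under the old ±2% band but passes under ±4%.
assert not runtime_within_margin(10.3, 10.0, margin=0.02)
assert runtime_within_margin(10.3, 10.0, margin=0.04)
```

A symmetric band like this flags both slowdowns and suspicious speedups (which can indicate a broken benchmark), at the cost of false failures when run-to-run noise exceeds the margin.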

Contributor

ezyang commented Mar 27, 2018

I'm a little confused about what the long-term plan is. Aren't we recording perf results and planning to use those to track regressions, instead of having a kill switch on the test itself?

Contributor Author

yf225 commented Mar 27, 2018

I think we might want to catch big regressions before a PR gets merged into master, while for small regressions that accumulate within the margin of error, we will have a graph to track them and can act if they get significantly out of hand. The question is really whether we should fix a regression before or after the merge, and I think the former might make more sense.

However, I think it's currently hard to pinpoint where a regression actually happens in a perf test. The per-operator perf test might help with this once it launches.

@ezyang ezyang merged commit 34f2f48 into pytorch:master Mar 27, 2018
