Creates device generic cuDNN decorators #26791
Closed
Conversation
added 6 commits on September 26, 2019 17:51
facebook-github-bot pushed a commit that referenced this pull request on Sep 30, 2019

Summary: Make cudnn rnn respect current stream. After this lands, non-default test stream can be reenabled in #26791.
Pull Request resolved: #27026
Test Plan: default stream functionality is tested in existing tests; stream safety tests will be added in #26791.
Differential Revision: D17656967
Pulled By: ngimel
fbshipit-source-id: 8b051aedd1df089b21f666ec553a5acefffdac88
zdevito pushed a commit to zdevito/ATen that referenced this pull request on Sep 30, 2019 (same commit message as above).
ngimel approved these changes on Sep 30, 2019
Collaborator (Author): @pytorchbot rebase this please.
facebook-github-bot (Contributor) left a comment: @mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Force-pushed from 96d3781 to b52e32c
Collaborator (Author): @pytorchbot rebase this please.
facebook-github-bot (Contributor) left a comment: @mruberry is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
pdlive215 (Contributor) pushed a commit to pdlive215/pytorch that referenced this pull request on Nov 27, 2019 (same commit message as above).
pdlive215 pushed a commit to pdlive215/pytorch that referenced this pull request on Nov 27, 2019

Summary:
- Creates skipCUDAIfNoCudnn, skipCUDAIfCudnnVersionLessThan decorators
- Makes several test_nn.py tests generic

Pull Request resolved: pytorch#26791
Differential Revision: D17678325
Pulled By: mruberry
fbshipit-source-id: 1794735ede9bc9f36856e72b3804b136ad3e0de2
thiagocrepaldi pushed a commit to thiagocrepaldi/pytorch that referenced this pull request on Feb 4, 2020 (same commit message as above).
Many tests in test_nn.py exercise cuDNN. These tests are guarded by various conditionals using TEST_CUDNN and TEST_CUDNN_VERSION, imported from common_cuda.py, with custom skip messages like 'CUDNN not available' and 'needs cudnn.'
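To illustrate the old guard style this paragraph describes, here is a minimal stdlib-only sketch. The TEST_CUDNN flag is a hypothetical stand-in for the value imported from common_cuda.py (in the real suite it is computed by probing torch.backends.cudnn, which is what forces a CUDA context at import time):

```python
import unittest

# Hypothetical stand-in for the flag imported from common_cuda.py; in the
# real suite, computing it probes the CUDA/cuDNN runtime at import time.
TEST_CUDNN = False

class TestNNOldStyle(unittest.TestCase):
    # The old guard style: a module-level flag plus a custom skip message.
    @unittest.skipIf(not TEST_CUDNN, 'CUDNN not available')
    def test_needs_cudnn(self):
        pass

# Running the suite shows the test skipped with the custom message.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNNOldStyle)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The skip decision here is baked in at import time, which is exactly the property the PR wants to avoid.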
This PR suggests using the CUDA base test class, instead of common_cuda.py, to test cuDNN's availability, at least in generic tests. The CUDA base test class is preferable because it only creates a CUDA context if its tests are actually run, whereas importing from common_cuda.py always creates a CUDA context. Using the CUDA base test class is also consistent with how other generic tests are guarded and produces consistent skip messages.
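A minimal sketch of the proposed decorator pattern, using only the stdlib. The base classes and has_cudnn probe below are hypothetical stand-ins for PyTorch's device-generic test bases; the point is that cuDNN availability is consulted at test run time, and only when the instance is actually a CUDA test:

```python
import functools
import unittest

def skip_cuda_if_no_cudnn(fn):
    # Device-generic skip: consult cuDNN availability only at run time,
    # and only when this instance is running on CUDA, so merely importing
    # the test module never touches the CUDA context.
    @functools.wraps(fn)
    def wrapper(self, *args, **kwargs):
        if self.device_type == 'cuda' and not self.has_cudnn():
            raise unittest.SkipTest('cuDNN not available')
        return fn(self, *args, **kwargs)
    return wrapper

# Hypothetical stand-ins for the framework's per-device base test classes.
class CPUTestBase(unittest.TestCase):
    device_type = 'cpu'
    def has_cudnn(self):
        return False

class FakeCUDATestBase(unittest.TestCase):
    device_type = 'cuda'
    def has_cudnn(self):
        return False  # pretend cuDNN is missing on this machine

class GenericTests:
    @skip_cuda_if_no_cudnn
    def test_rnn(self):
        pass  # device-generic test body

class TestOnCPU(GenericTests, CPUTestBase):
    pass

class TestOnCUDA(GenericTests, FakeCUDATestBase):
    pass

cpu_result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestOnCPU))
cuda_result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestOnCUDA))
```

The same generic test runs normally on CPU and is skipped with a consistent message on a CUDA instance without cuDNN.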
One quirk of this approach is that it uses the self argument of the test functions to check for cuDNN availability during a test; see test_rnn_retain_variables. The self argument could also be used to check the device type, instead of the more verbose torch.device(device).type == 'cuda'.
An alternative approach to making test_nn.py generic would be to keep the common_cuda.py imports, try to keep their skip messages consistent, and accept the unnecessary CUDA contexts. However, this would preclude writing generic tests that run on CUDA only when cuDNN is available, so tests like "_test_RNN_cpu_vs_cudnn" would require additional changes to become device generic precision tests like "_test_RNN_cpu_vs_xla."
For consistency, simplicity, and ease of use, I recommend we adopt the proposed decorators and make use of the self argument when productive.
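The version-gated variant named in the PR title can be sketched the same way. This is a hypothetical stdlib-only illustration, not the actual PyTorch implementation: cudnn_version() stands in for torch.backends.cudnn.version(), and the base class stands in for the CUDA device-generic test base:

```python
import functools
import unittest

def skip_cuda_if_cudnn_version_less_than(min_version):
    # Decorator factory: skip only when running on CUDA with a cuDNN
    # older than min_version (or with no cuDNN at all).
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            if self.device_type == 'cuda':
                version = self.cudnn_version()
                if version is None or version < min_version:
                    raise unittest.SkipTest(
                        'cuDNN version %s is less than %s'
                        % (version, min_version))
            return fn(self, *args, **kwargs)
        return wrapper
    return decorator

# Hypothetical CUDA test base reporting an installed cuDNN version.
class FakeCUDABase(unittest.TestCase):
    device_type = 'cuda'
    def cudnn_version(self):
        return 7000  # pretend cuDNN 7.0 is installed

class TestVersionGate(FakeCUDABase):
    @skip_cuda_if_cudnn_version_less_than(7600)
    def test_needs_newer_cudnn(self):
        pass  # skipped: 7000 < 7600

    @skip_cuda_if_cudnn_version_less_than(6000)
    def test_needs_older_cudnn(self):
        pass  # runs: 7000 >= 6000

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestVersionGate))
```

Because the check lives in the wrapper, the version probe runs only when a CUDA test actually executes, matching the context-creation argument above.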