[ROCm] Enable MIOpen properly #13048
Closed
iotamudelta wants to merge 2 commits into pytorch:master
Conversation
* Disable MIOpen convolution on double tensors
* MIOpen: set group count in convolution descriptor
* MIOpen: Honor Max Dim (ROCm 222)
* MIOpen: Batchnorm - Allow half/half and half/float, disallow double
* Limit MIOpen batchnorm to same-precision
* Fix maxdim check. (ROCm 246)
* Fix reversed logic in DISABLE_MIOPEN (ROCm 253)
* Export LANG/LC_ALL also for the test step.
* Make tensors contiguous before calling MIOpen batch norm
* Actually pass dilation to MIOpen.
* Do not use MIOpen if there is dilation and the group size is > 1. This is officially not supported currently.
* Fixes for MIOpen forward bias call
* Modified init conv descriptor param values and used same value for dilation
* MIOpen: disable transposed convolutions
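Several of the bullets above describe eligibility checks that decide whether a convolution can be dispatched to MIOpen at all (no double tensors, a maximum dimension count, no transposed convolutions, no dilated grouped convolutions). A minimal sketch of that kind of guard follows; the function and type names here are illustrative, not the actual ATen implementation:

```cpp
#include <vector>
#include <algorithm>

// Hypothetical sketch of the dispatch guard described in the PR summary.
// MIOpen is skipped for double tensors, for tensors above the supported
// dimension count, for transposed convolutions, and for dilated grouped
// convolutions (officially unsupported at the time of this PR).
constexpr int kMiopenDimMax = 4;

enum class ScalarType { Float, Half, Double };

bool use_miopen(ScalarType dtype, int ndim,
                const std::vector<int>& dilation, int groups,
                bool transposed) {
  if (dtype == ScalarType::Double) return false;  // double not supported
  if (ndim > kMiopenDimMax) return false;         // honor max dim
  if (transposed) return false;                   // transposed convs disabled
  const bool dilated = std::any_of(dilation.begin(), dilation.end(),
                                   [](int d) { return d > 1; });
  if (dilated && groups > 1) return false;        // dilation + groups unsupported
  return true;                                    // eligible for MIOpen
}
```

Note that dilation alone is fine and grouping alone is fine; only the combination is rejected, matching the "dilation and the group size is > 1" bullet.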
ezyang (Contributor) reviewed on Oct 24, 2018:
```cpp
#include "ATen/Config.h"

static const int MIOPEN_DIM_MAX = 4;
static const bool MIOPEN_ENABLED = getenv("DISABLE_MIOPEN") == NULL;
```
ezyang (Contributor) requested changes on Oct 24, 2018:

> Calling `getenv` in a static initializer is a big no-no.
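The objection is to running `getenv` during static initialization: initialization order across translation units is unspecified, so the lookup happens at an ill-defined point before `main()`. A common fix, sketched below with a hypothetical helper name (not the actual PyTorch code), is to defer the lookup to first use via a function-local static, which is initialized lazily and thread-safely since C++11:

```cpp
#include <cstdlib>

// Sketch only: instead of evaluating getenv() in a namespace-scope
// static initializer, wrap it in a function. The function-local static
// is initialized on the first call, at a well-defined point, and the
// result is cached for subsequent calls.
static bool miopen_enabled() {
  static const bool enabled = std::getenv("DISABLE_MIOPEN") == nullptr;
  return enabled;
}
```

Because the result is cached on first call, changing the environment variable afterward has no effect, which matches the semantics the original static intended.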
iotamudelta (Contributor, Author):

> @ezyang removed debug option
facebook-github-bot (Contributor) left a comment:

> bddppq has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
ezyang approved these changes on Oct 25, 2018.
zdevito pushed a commit to zdevito/ATen that referenced this pull request on Oct 25, 2018:
Summary: same change list as the PR description above.

For attention: bddppq ezyang
Pull Request resolved: pytorch/pytorch#13048
Differential Revision: D10785250
Pulled By: bddppq
fbshipit-source-id: f9d9797de644652280d59308e5ea5cc07d177fd4
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request on Apr 24, 2026:
Summary: same change list as the PR description above.

For attention: bddppq ezyang
Pull Request resolved: pytorch#13048
Differential Revision: D10785250
Pulled By: bddppq
fbshipit-source-id: f9d9797de644652280d59308e5ea5cc07d177fd4
For attention: @bddppq @ezyang