[ROCm] add exact_dtype=False to bfloat16 test#38381

Closed
jeffdaily wants to merge 1 commit intopytorch:masterfrom
ROCm:fix_bf16_test
Conversation

@jeffdaily
Collaborator

CC @rohithkrn @ezyang @xw285cornell

Fixes

  • TestNNDeviceTypeCUDA.test_activations_bfloat16_cuda
  • TestNNDeviceTypeCUDA.test_pooling_bfloat16_cuda
  • TestNNDeviceTypeCUDA.test_softmax_bfloat16_cuda
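
For context, `exact_dtype=False` lets the test harness compare a bfloat16 result against a float32 reference by value (within tolerance) instead of requiring the dtypes to match exactly. A rough standalone sketch of that kind of tolerant comparison; the `to_bfloat16` helper and the tolerances below are illustrative assumptions, not PyTorch internals:

```python
import struct

def to_bfloat16(x: float) -> float:
    """Simulate bfloat16 precision: keep the top 16 bits of the
    float32 bit pattern, rounding to nearest even."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits += 0x7FFF + ((bits >> 16) & 1)  # round-to-nearest-even
    bits &= 0xFFFF0000                   # truncate low mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

def assert_close(actual, expected, rtol=1e-2, atol=1e-2):
    """Value-based comparison, analogous to comparing with
    exact_dtype=False: dtypes may differ, values must be close."""
    assert abs(actual - expected) <= atol + rtol * abs(expected), \
        f"{actual} not close to {expected}"

# float32 "reference" vs. its bfloat16 counterpart
ref = 0.123456
assert_close(to_bfloat16(ref), ref)
```

Because bfloat16 keeps only 8 mantissa bits, a loose tolerance like the one above is what makes a float32-reference comparison pass.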

@jeffdaily jeffdaily changed the title add exact_dtype=False to bfloat16 test [ROCm] add exact_dtype=False to bfloat16 test May 13, 2020
Contributor

@ezyang left a comment

Please condition these on being ROCm. And you should fix these bugs eventually!

@ezyang
Contributor

ezyang commented May 13, 2020

Actually since this unbreaks CI I'm going to land it. Please do the follow up though.

Contributor

@facebook-github-bot left a comment

@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ezyang ezyang added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label May 13, 2020
@facebook-github-bot
Contributor

@ezyang merged this pull request in 138769b.

@jeffdaily
Collaborator Author

jeffdaily commented May 13, 2020

@ezyang After discussion with @rohithkrn, we need some clarification. We don't know whether this is truly a bug: the test does its computation in float32 and uses that as the reference to compare against the bfloat16 computation. How is this handled in other tests, if any? Also, these tests are marked @onlyCUDA @skipCUDAIfNotRocm, so they are already conditional on being ROCm-only.
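The skip pattern mentioned above can be sketched in plain `unittest` terms. The `TEST_WITH_ROCM` flag and the decorator body below are assumptions that mirror the PyTorch test-suite convention, not the actual implementation:

```python
import unittest

# In PyTorch this flag is derived from the build/runtime environment
# (e.g. whether torch was built with HIP); hard-coded here as an assumption.
TEST_WITH_ROCM = False

def skipCUDAIfNotRocm(fn):
    """Run the test only when the 'CUDA' device is actually a ROCm/HIP
    device; otherwise report it as skipped."""
    return unittest.skipUnless(TEST_WITH_ROCM, "test requires ROCm")(fn)

class ExampleBF16Test(unittest.TestCase):
    @skipCUDAIfNotRocm
    def test_activations_bfloat16(self):
        pass  # the real test body would run the bfloat16 comparison

# Running the case without ROCm records a skip rather than a pass/fail.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(ExampleBF16Test).run(result)
```

With `TEST_WITH_ROCM = False`, the test lands in `result.skipped`, which is why these tests never execute on non-ROCm CUDA builds.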

@ezyang
Contributor

ezyang commented May 14, 2020

Oh, if the tests are ROCm only, then non-exact is fine. No follow up needed!

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
CC rohithkrn ezyang xw285cornell

Fixes
- TestNNDeviceTypeCUDA.test_activations_bfloat16_cuda
- TestNNDeviceTypeCUDA.test_pooling_bfloat16_cuda
- TestNNDeviceTypeCUDA.test_softmax_bfloat16_cuda
Pull Request resolved: pytorch#38381

Differential Revision: D21549636

Pulled By: ezyang

fbshipit-source-id: acb290c57eff4077b040a696267ecde613f0a433