Fix casting logic for 0d CPU tensors in CUDA ops #11808

colesbury wants to merge 3 commits into pytorch:master
Conversation
Previously, we didn't cast any 0-dim tensors used in CUDA operations. The cast can only be safely skipped for 0-dim CPU tensors used in CUDA operations; 0-dim CUDA tensors of a different type still need to be cast. Fixes pytorch#11795
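For context, the behavior being preserved here, as a minimal sketch (the values are illustrative, not taken from the PR's tests):

```python
import torch

if torch.cuda.is_available():
    x = torch.randn(3, device='cuda')   # CUDA operand
    y = torch.tensor(2.0)               # 0-dim CPU tensor
    # 0-dim CPU tensors may participate in CUDA ops without an explicit
    # copy to the device; this PR keeps that exemption while restoring
    # casts for 0-dim CUDA tensors.
    print(x * y)
```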
facebook-github-bot left a comment
colesbury has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
    -   op.needs_cast = true;
    - }
    +   op.needs_cast = needs_cast(*op.tensor, type);
    +   if (op.needs_cast && op.tensor->dim() == 0) {
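A hedged sketch of the case this hunk appears to fix: a 0-dim CUDA tensor whose dtype differs from the operation's common type now goes through the same cast check as any other operand (dtypes below are illustrative):

```python
import torch

if torch.cuda.is_available():
    a = torch.randn(3, device='cuda')                           # float32
    s = torch.tensor(2.0, device='cuda', dtype=torch.float16)   # 0-dim CUDA half
    # Before this fix, s skipped the cast simply because it was 0-dim,
    # which could yield wrong results (pytorch#11795); now only 0-dim
    # *CPU* tensors are exempt.
    print(a * s)
```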
@pytorchbot retest this please
facebook-github-bot left a comment
colesbury has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
    if (!tensor.defined() || dst_type == tensor.type()) {
      return false;
    }
    if (dst_type.backend() == Backend::CUDA &&
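Read as a whole, the helper appears to implement the predicate below. This is a hedged Python paraphrase, not the real ATen code: `tensor_backend` and `needs_cast_sketch` are hypothetical names, and the condition truncated at `&&` above is assumed to go on to check for a 0-dim CPU tensor, per the PR description:

```python
import torch

def tensor_backend(t):
    # Simplified stand-in for ATen's backend notion.
    return 'cuda' if t.is_cuda else 'cpu'

def needs_cast_sketch(tensor, dst_backend, dst_dtype):
    # First check above: undefined tensors and exact type matches never
    # need a cast.
    if tensor is None or (tensor_backend(tensor) == dst_backend
                          and tensor.dtype == dst_dtype):
        return False
    # Assumed continuation of the truncated condition: a 0-dim CPU
    # tensor used in a CUDA op is read on the fly, so it is exempt
    # from casting (per the PR description).
    if (dst_backend == 'cuda' and tensor_backend(tensor) == 'cpu'
            and tensor.dim() == 0):
        return False
    return True

# 0-dim CPU float tensor feeding a CUDA float op: no cast needed.
print(needs_cast_sketch(torch.tensor(2.0), 'cuda', torch.float32))  # False
```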
    x = torch.tensor(1.5, device='cuda', dtype=torch.float16)
    self.assertEqual(x * y, 4.5)  # y is presumably 3, defined earlier in the test (1.5 * 3 == 4.5)
    # half * int currently promotes to double
facebook-github-bot left a comment
colesbury has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@pytorchbot retest this please
Summary: Previously, we didn't cast any 0-dim tensors used in CUDA operations. We can only avoid the casts for 0-dim CPU tensors used in CUDA operations. Fixes #11795

Pull Request resolved: pytorch/pytorch#11808
Differential Revision: D9922406
Pulled By: colesbury
fbshipit-source-id: 940b8a8534770aa5cd70d5d09b96be0f0f8146ff
Summary: Changes the result type of half type and any integer type to return half type (instead of float or double). This is based on top of #11808. The first new commit is "Make promoteType(half, integer) -> half". I'll rebase on top of master once that PR lands.

Pull Request resolved: #11941
Differential Revision: D10014122
Pulled By: colesbury
fbshipit-source-id: 16a5eb3406a5712069201d872d8736d0599e9411
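To make the promotion change concrete, an illustrative check (the result dtype shown assumes the post-#11941 rule described in the summary above):

```python
import torch

x = torch.tensor(1.5, dtype=torch.float16)
# Before pytorch#11941, half * int promoted to a wider float type (the
# test above notes double); after it, the result stays half.
print((x * 3).dtype)  # torch.float16 under the new rule
```

This keeps mixed half/integer arithmetic in half precision rather than silently widening, which matters for memory and speed on CUDA half workloads.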