Support n-dimensional empty tensors in CUDA non-reduction dimension functions #9658
gchanan wants to merge 1 commit into pytorch:master
Conversation
…functions. This also unifies the error checking between scatter/scatterAdd on CUDA.
I wasn't consistent about putting things in if blocks vs. returning early; I generally prefer if blocks (even if it causes history churn) because it's less likely to introduce bugs going forward.
facebook-github-bot
left a comment
@gchanan has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@pytorchbot retest this please

@pytorchbot retest this please.
ptrdiff_t numSlices = inElements / sliceSize;
if (inElements > 0) {
int64_t sliceSize = THCudaLongTensor_size(state, t, dim);
ptrdiff_t numSlices = inElements == 0 ? 0 : inElements / sliceSize;
"Input tensor must have same size as output tensor apart from the specified dimension");
}
int64_t indexSizeD = THCudaLongTensor_size(state, index, d);
if (d != dim) {
}
THArgCheck(indexSizeD <= THCTensor_(size)(state, src, d), 3,
  "Index tensor must not have larger size than input tensor, but got index %s input %s",
  THCudaLongTensor_sizeDesc(state, index).str, THCTensor_(sizeDesc)(state, src).str);
if (THCTensor_(_nDimension)(state, key) == 0) {
if (inElements == 0) {
// Zero-dim tensor; do nothing
Nits will be fixed in an upcoming PR.
(#9658) Summary: …functions. This also unifies the error checking between scatter/scatterAdd on CUDA. Pull Request resolved: pytorch/pytorch#9658 Differential Revision: D8941527 Pulled By: gchanan fbshipit-source-id: 750bbac568f607985088211887c4167b67be11ea
(#9722) Summary: …m ops. Continuation of pytorch/pytorch#9658. Pull Request resolved: pytorch/pytorch#9722 Differential Revision: D8956321 Pulled By: gchanan fbshipit-source-id: 116fcaa1be5b1373f03217911556a28125cc860d
…functions.
This also unifies the error checking between scatter/scatterAdd on CUDA.