
Support n-dimensional empty tensors in CUDA non-reduction dimension f…#9658

Closed
gchanan wants to merge 1 commit intopytorch:masterfrom
gchanan:empty_dim_cuda2

Conversation


@gchanan gchanan commented Jul 20, 2018

…unctions.

This also unifies the error checking between scatter/scatterAdd on CUDA.


gchanan commented Jul 20, 2018

I wasn't consistent about putting things in if blocks vs. returning early; I generally prefer if blocks (even if it causes history churn) because they're less likely to harbor bugs going forward.


@facebook-github-bot facebook-github-bot left a comment


@gchanan has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


soumith commented Jul 21, 2018

@pytorchbot retest this please


gchanan commented Jul 22, 2018

@pytorchbot retest this please.

ptrdiff_t numSlices = inElements / sliceSize;
if (inElements > 0) {
int64_t sliceSize = THCudaLongTensor_size(state, t, dim);
ptrdiff_t numSlices = inElements == 0 ? 0 : inElements / sliceSize;


"Input tensor must have same size as output tensor apart from the specified dimension");
}
int64_t indexSizeD = THCudaLongTensor_size(state, index, d);
if (d != dim) {


}
THArgCheck(indexSizeD <= THCTensor_(size)(state, src, d), 3,
"Index tensor must not have larger size than input tensor, but got index %s input %s",
THCudaLongTensor_sizeDesc(state, index).str, THCTensor_(sizeDesc)(state, src).str);



if (THCTensor_(_nDimension)(state, key) == 0) {
if (inElements == 0) {
// Zero-dim tensor; do nothing



@ezyang ezyang left a comment


Looks good after nits.


gchanan commented Jul 23, 2018

Nits will be fixed in an upcoming PR.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Jul 23, 2018
…… (#9658)

Summary:
…unctions.

This also unifies the error checking between scatter/scatterAdd on CUDA.
Pull Request resolved: pytorch/pytorch#9658

Differential Revision: D8941527

Pulled By: gchanan

fbshipit-source-id: 750bbac568f607985088211887c4167b67be11ea
gchanan added a commit to gchanan/pytorch that referenced this pull request Jul 23, 2018
facebook-github-bot pushed a commit that referenced this pull request Jul 24, 2018
#9722)

Summary:
…m ops.

Continuation of #9658.
Pull Request resolved: #9722

Differential Revision: D8956321

Pulled By: gchanan

fbshipit-source-id: 116fcaa1be5b1373f03217911556a28125cc860d
zdevito pushed a commit to zdevito/ATen that referenced this pull request Jul 24, 2018
jramseyer pushed a commit to jramseyer/pytorch that referenced this pull request Jul 30, 2018
jramseyer pushed a commit to jramseyer/pytorch that referenced this pull request Jul 30, 2018
goodlux pushed a commit to goodlux/pytorch that referenced this pull request Aug 15, 2018
goodlux pushed a commit to goodlux/pytorch that referenced this pull request Aug 15, 2018
@ezyang ezyang added the merged label Jun 26, 2019

4 participants