Fix (non-reduction) ops over a dimension for n-dimensional empty tens… #9482

Closed

gchanan wants to merge 2 commits into pytorch:master from gchanan:empty_ndim_shape4
Conversation

@gchanan (Contributor) commented Jul 17, 2018

Fix (non-reduction) ops over a dimension for n-dimensional empty tensors (CPU).

This includes mainly CPU fixes; the CUDA fixes are more involved because a kernel can't be launched with an empty grid.
It also includes a fix for index_copy, which checked that self.size(dim) == src.size(0); that isn't correct: the same dimension should be compared on both tensors.
Finally, it includes a fix for CUDA flip (although it's not tested yet) to compute the strides using multiplication rather than division, avoiding division by zero when a dimension has size 0.
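The corrected index_copy shape check described above can be sketched in pure Python (the helper name and error messages are hypothetical, not PyTorch's actual internals):

```python
def check_index_copy_shapes(self_sizes, src_sizes, dim, num_indices):
    # The source must provide one slice per index along `dim`; the old
    # check compared self.size(dim) against src.size(0), which mixes up
    # unrelated dimensions whenever dim != 0.
    if src_sizes[dim] != num_indices:
        raise ValueError("src must have num_indices entries along dim")
    # Every dimension other than `dim` must match between self and src.
    for d in range(len(self_sizes)):
        if d != dim and self_sizes[d] != src_sizes[d]:
            raise ValueError("self and src sizes must match outside dim")
```

Note that this check passes for n-dimensional empty tensors (e.g. a size-0 dimension on both sides) without special-casing them.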

@facebook-github-bot (Contributor) commented:
@gchanan has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

self.assertEqual(0, x.size(3))
self.assertEqual(2, x.size(2))
self.assertEqual(2, x.stride(0))
self.assertEqual(1, x.stride(2))


# cross
y = torch.randn((0, 1, 3, 0), device=device)
self.assertEqual(y.shape, torch.cross(y, y).shape)

# renorm
self.assertEqual(shape, torch.renorm(x, 1, 2, 5).shape)

# sort
self.assertEqual([shape, shape], [z.shape for z in torch.sort(x, dim=0)])

auto sourceSlicedSizes = std::vector<int64_t>(source.sizes());
if (sourceSlicedSizes.size() > 0) {
  // Previously this erased sourceSlicedSizes.begin(), i.e. always
  // dimension 0; the fix removes the same dimension that is removed
  // from self's sizes, so matching dimensions get compared.
  sourceSlicedSizes.erase(sourceSlicedSizes.begin() + dim);
}


// The old code derived each stride by repeated division, which divides
// by zero as soon as any dimension has size 0:
//   for (int64_t i = 0; i < total_dims; i++) {
//     tmp = tmp / shape[i];
//     stride_contiguous_d[i] = tmp;
//   }
// The fix walks the dimensions from the right and multiplies instead:
for (int64_t i = total_dims - 1; i >= 0; i--) {
  ...
}

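The multiplication-based stride computation can be sketched in pure Python (the function name is hypothetical; clamping each size with max(size, 1) reflects the convention that a 0-sized dimension contributes a factor of 1 to the strides of the dimensions to its left):

```python
def contiguous_strides(shape):
    # Walk the dimensions right to left, accumulating a product of
    # (clamped) sizes. The division-based approach (numel // shape[i])
    # breaks down because numel is 0 whenever any dimension is 0.
    strides = [1] * len(shape)
    acc = 1
    for i in range(len(shape) - 1, -1, -1):
        strides[i] = acc
        acc *= max(shape[i], 1)
    return strides
```

With this, an empty shape such as (2, 2, 0) still gets well-defined strides instead of triggering a division by zero.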
@ezyang (Contributor) reviewed:
I didn't review closely enough to be confident that I caught every bug, but I think this is good enough to go in for now.

zdevito pushed a commit to zdevito/ATen that referenced this pull request on Jul 17, 2018 (#9482)

Summary: as in the PR description above.

Pull Request resolved: pytorch/pytorch#9482

Reviewed By: ezyang

Differential Revision: D8873047

Pulled By: gchanan

fbshipit-source-id: 86523afd3d50277834f654cd559dfbc7875cdffe
jramseyer pushed a commit to jramseyer/pytorch that referenced this pull request on Jul 30, 2018 (pytorch#9482), with the same summary and metadata.
goodlux pushed a commit to goodlux/pytorch that referenced this pull request on Aug 15, 2018 (pytorch#9482), with the same summary and metadata.
@ezyang ezyang added the merged label Jun 26, 2019