Fix (non-reduction) ops over a dimension for n-dimensional empty tensors (CPU) #9482
gchanan wants to merge 2 commits into pytorch:master from
Conversation
This fixes (non-reduction) ops over a dimension for n-dimensional empty tensors (CPU). The changes are mainly CPU fixes; the CUDA fixes are more involved because you can't launch an empty grid. It also fixes index_copy, which checked that self.size(dim) == src.size(0); that isn't correct, since the same dimension of self and src should be compared. Finally, it fixes CUDA flip (not yet tested) to compute the stride using multiplication rather than division, avoiding a divide-by-zero for empty tensors.
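To make the index_copy shape rule concrete, here is a minimal plain-Python sketch of the corrected check. The helper name and error messages are hypothetical (this is not the actual ATen code); the point is that src is compared to self along the *same* dimensions, not src.size(0) against self.size(dim).

```python
def check_index_copy_shapes(self_sizes, dim, index_len, src_sizes):
    """Hypothetical sketch of the corrected index_copy_ shape check.

    The old check compared self.size(dim) with src.size(0). The corrected
    rule: index must match src along `dim`, and every other dimension of
    src must equal the corresponding dimension of self.
    """
    if len(self_sizes) != len(src_sizes):
        raise ValueError("self and src must have the same number of dims")
    if index_len != src_sizes[dim]:
        raise ValueError("index length must equal src.size(dim)")
    for d, (a, b) in enumerate(zip(self_sizes, src_sizes)):
        if d != dim and a != b:
            raise ValueError("non-indexed dimensions of self and src must match")

# Works for an n-dimensional empty tensor: copying 0 slices along dim=1.
check_index_copy_shapes([2, 0, 3], dim=1, index_len=0, src_sizes=[2, 0, 3])
```

Note that under the old (incorrect) rule, a shape like self=[2, 0, 3], src=[0, 2, 3] with dim=1 would have passed the size check while being semantically wrong.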
facebook-github-bot
left a comment
@gchanan has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
self.assertEqual(0, x.size(3))
self.assertEqual(2, x.size(2))
self.assertEqual(2, x.stride(0))
self.assertEqual(1, x.stride(2))
# cross
y = torch.randn((0, 1, 3, 0), device=device)
self.assertEqual(y.shape, torch.cross(y, y).shape)
self.assertEqual(y.shape, torch.cross(y, y).shape)
# renorm
self.assertEqual(shape, torch.renorm(x, 1, 0, 5).shape)
self.assertEqual(shape, torch.renorm(x, 1, 2, 5).shape)
# sort
self.assertEqual([shape, shape], [z.shape for z in torch.sort(x, dim=0)])
  auto sourceSlicedSizes = std::vector<int64_t>(source.sizes());
  if (sourceSlicedSizes.size() > 0) {
-   sourceSlicedSizes.erase(sourceSlicedSizes.begin());
+   sourceSlicedSizes.erase(sourceSlicedSizes.begin() + dim);
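The hunk above changes which dimension is dropped when computing the sliced source sizes: the erased entry is now at position `dim`, not always the first one. A plain-Python analogue (the function name is illustrative, not from the codebase):

```python
def sliced_sizes(sizes, dim):
    # Drop the indexed dimension at position `dim`, mirroring
    # sourceSlicedSizes.erase(sourceSlicedSizes.begin() + dim) above.
    out = list(sizes)
    if out:
        del out[dim]
    return out

# Erasing at `dim` keeps the correct remaining shape; erasing at 0
# would produce [0, 3] here instead of [2, 3].
print(sliced_sizes([2, 0, 3], dim=1))  # [2, 3]
```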
- for (int64_t i = 0; i < total_dims; i++) {
-   tmp = tmp / shape[i];
-   stride_contiguous_d[i] = tmp;
+ for (int64_t i = total_dims - 1; i >= 0; i--) {
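This is the CUDA flip stride fix from the description: iterating from the innermost dimension outward and multiplying sizes up computes the same contiguous strides as dividing the total element count down, but never divides, so a zero-size dimension can't trigger a divide-by-zero. A self-contained sketch of both approaches (hypothetical helper names):

```python
def contiguous_strides_mul(shape):
    # New approach: walk dims from innermost to outermost, accumulating
    # the product of sizes seen so far. No division anywhere.
    strides = [0] * len(shape)
    acc = 1
    for i in range(len(shape) - 1, -1, -1):
        strides[i] = acc
        acc *= shape[i]
    return strides

def contiguous_strides_div(shape):
    # Old approach: start from the total element count and divide it down.
    # With an empty tensor the total is 0 and dividing by a zero-size
    # dimension raises ZeroDivisionError.
    total = 1
    for s in shape:
        total *= s
    strides = []
    for s in shape:
        total //= s
        strides.append(total)
    return strides

print(contiguous_strides_mul([2, 0, 3]))  # [0, 3, 1]
```

For non-empty shapes the two agree (e.g. [2, 3, 4] gives [12, 4, 1] either way); only the multiplication form survives a 0-size dimension.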
ezyang
left a comment
I didn't review closely enough to be confident I caught every bug, but I think this is good enough to go in for now.
Fix (non-reduction) ops over a dimension for n-dimensional empty tensors (CPU) (#9482)

Summary: This fixes (non-reduction) ops over a dimension for n-dimensional empty tensors (CPU). The changes are mainly CPU fixes; the CUDA fixes are more involved because you can't launch an empty grid. It also fixes index_copy, which checked that self.size(dim) == src.size(0), which isn't correct (the same dimension should be compared). Finally, it fixes CUDA flip (not yet tested) to compute the stride using multiplication rather than division, avoiding a divide-by-zero.

Pull Request resolved: pytorch/pytorch#9482
Reviewed By: ezyang
Differential Revision: D8873047
Pulled By: gchanan
fbshipit-source-id: 86523afd3d50277834f654cd559dfbc7875cdffe