Fix restriding logic for structured kernels #53759
ezyang wants to merge 7 commits into gh/ezyang/941/base from
Conversation
Fixes #53587, see issue for in-depth explanation of the bug. Signed-off-by: Edward Z. Yang <ezyang@fb.com> [ghstack-poisoned]
```
// Only restride if a resize occurred; otherwise we ignore the (advisory)
// strides from the meta function and directly use the output tensor's
// preexisting strides
if (resized) {{
```
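To make the guard concrete, here is a minimal Python sketch of the rule this comment describes (the real code is the generated C++ above, and `maybe_restride` is a made-up name for illustration): the strides suggested by the meta function are applied only when the out tensor had to be resized; otherwise the caller's preexisting strides are kept.

```python
import torch

# Hypothetical illustration of the restriding rule, not PyTorch's actual helper.
def maybe_restride(out: torch.Tensor, resized: bool, suggested_strides):
    if resized:
        # The output was freshly (re)allocated, so the advisory strides from
        # the meta function are safe to apply.
        out.as_strided_(out.size(), suggested_strides)
    # Otherwise the user supplied a correctly-sized tensor; leave its strides alone.
    return out
```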
So, the behavior now is basically "Only use the input's memory format to determine strides when we have to resize the output and aren't given explicit strides", right? Posting a snippet below to confirm what I'm talking about.
```python
import torch

input = torch.ones((2, 2, 2, 2)).to(memory_format=torch.channels_last)

output = torch.ones((2, 2, 2, 2))
# output doesn't get resized, so we don't clobber the output's strides/memory_format.
torch._C._nn.upsample_nearest2d(input, (2, 2), out=output).is_contiguous(memory_format=torch.channels_last)
# returns False

output = torch.ones(0)
# output gets resized, so we DO restride according to the input's memory format.
torch._C._nn.upsample_nearest2d(input, (2, 2), out=output).is_contiguous(memory_format=torch.channels_last)
# returns True
```
That all seems reasonable, since:
- I can imagine if a user explicitly specifies an output with the correct size, they probably don't want it to get re-strided?
- We also don't allow you to pass in an output tensor with non-zero size if it doesn't match the expected output size. (Well, the behavior is deprecated anyway). So really your only options are to pass in a zero-sized tensor, indicating that you want whatever properties the input tensor has, or you specify your own output tensor with the right size.
Just calling it out explicitly, since I think the behavior of the upsample kernels before porting them to structured was that they unconditionally copied the suggested memory format of the input (here).
Expected out= behavior is described here: https://github.com/pytorch/pytorch/wiki/Developer-FAQ#how-does-out-work-in-pytorch; it is what you describe. Thanks for calling it out explicitly, and you are correct that the previous behavior of upsample didn't conform to those rules.
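For completeness, here is a hedged sketch of the remaining case mentioned above (a non-empty out tensor of the wrong size): as of the PyTorch version this PR targets, that path is deprecated, so the output is resized with a warning rather than restrided silently. This is an illustration of the documented out= contract, not code from this PR.

```python
import torch

input = torch.ones((2, 2, 2, 2)).to(memory_format=torch.channels_last)
output = torch.ones((3, 3))  # non-empty and the wrong size for the result

# Deprecated path: the mismatched output is resized (with a warning about
# resizing a non-empty out tensor) rather than being restrided silently.
torch._C._nn.upsample_nearest2d(input, (2, 2), out=output)
assert output.shape == (2, 2, 2, 2)
```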
```python
# TODO: this is not a sustainable way of testing meta functions,
# but I want some quick scaffolding first before a more
# integrated testing strategy
# TODO: this test should be triggered by test_nn.py but right
```
Probably these tests belong in test_nn, but no need to change that in this PR.
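As a rough illustration of what such a test could check once it lives in test_nn (a hedged sketch, not the PR's actual test code; the function name is made up), the restriding behavior discussed above is easy to exercise through the out= overload:

```python
import torch

def check_upsample_out_restriding():
    # Channels-last input whose suggested memory format should only win when
    # the out tensor has to be resized.
    inp = torch.ones((2, 2, 2, 2)).to(memory_format=torch.channels_last)

    # Zero-sized out: a resize occurs, so the output is restrided to channels_last.
    out = torch.empty(0)
    torch._C._nn.upsample_nearest2d(inp, (2, 2), out=out)
    assert out.is_contiguous(memory_format=torch.channels_last)

    # Correctly-sized contiguous out: no resize, so its original strides survive.
    out = torch.empty((2, 2, 2, 2))
    torch._C._nn.upsample_nearest2d(inp, (2, 2), out=out)
    assert out.is_contiguous()

check_upsample_out_restriding()
```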
Fixes #53587, see issue for in-depth explanation of the bug. Signed-off-by: Edward Z. Yang <ezyang@fb.com> Differential Revision: [D26971342](https://our.internmc.facebook.com/intern/diff/D26971342) [ghstack-poisoned]
Summary: Pull Request resolved: pytorch#53759 Fixes pytorch#53587, see issue for in-depth explanation of the bug. Signed-off-by: Edward Z. Yang <ezyang@fb.com> Test Plan: Imported from OSS Reviewed By: albanD Differential Revision: D26971342 Pulled By: ezyang fbshipit-source-id: 805983fed2658e27fb033f36a71fd30950a29328
Stack from ghstack:
Fixes #53587, see issue for in-depth explanation of the bug.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D26971342