
Fix restriding logic for structured kernels#53759

Closed
ezyang wants to merge 7 commits into gh/ezyang/941/base from gh/ezyang/941/head

Conversation

@ezyang
Contributor

@ezyang ezyang commented Mar 10, 2021

Stack from ghstack:

Fixes #53587, see issue for in-depth explanation of the bug.

Signed-off-by: Edward Z. Yang ezyang@fb.com

Differential Revision: D26971342

Fixes #53587, see issue for in-depth explanation of the bug.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

[ghstack-poisoned]
@facebook-github-bot
Contributor

facebook-github-bot commented Mar 10, 2021

💊 CI failures summary and remediations

As of commit 0ef8d99 (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-scanned failure(s)

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

ezyang added a commit that referenced this pull request Mar 10, 2021
Fixes #53587, see issue for in-depth explanation of the bug.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

ghstack-source-id: 7d5c873
Pull Request resolved: #53759
@ezyang ezyang requested review from bdhirsh, bhosmer and ngimel March 10, 2021 22:31
// Only restride if a resize occurred; otherwise we ignore the (advisory)
// strides from the meta function and directly use the output tensor's
// preexisting strides
if (resized) {{
Collaborator
@bdhirsh bdhirsh Mar 10, 2021

So, the behavior now is basically "Only use the input's memory format to determine strides when we have to resize the output and aren't given explicit strides", right? Posting a snippet below to confirm what I'm talking about.

input = torch.ones((2, 2, 2, 2), memory_format=torch.channels_last)
output = torch.ones((2, 2, 2, 2))
# output doesn't get resized, so we don't clobber the output's strides/memory_format.
torch._C._nn.upsample_nearest2d(input, (2,2), out=output).is_contiguous(memory_format=torch.channels_last)
# returns False

output = torch.ones(0)
# output gets resized, so we DO restride according to the input's memory format
torch._C._nn.upsample_nearest2d(input, (2,2), out=output).is_contiguous(memory_format=torch.channels_last)
# returns True

That all seems reasonable, since:

  • I can imagine if a user explicitly specifies an output with the correct size, they probably don't want it to get re-strided?
  • We also don't allow you to pass in an output tensor with non-zero size if it doesn't match the expected output size. (Well, the behavior is deprecated anyway). So really your only options are to pass in a zero-sized tensor, indicating that you want whatever properties the input tensor has, or you specify your own output tensor with the right size.

Just calling it out explicitly, since I think the behavior of the upsample kernels before porting them to structured was that they unconditionally copied the suggested memory format of the input (here).
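The contract described above can be sketched in plain Python. This is a hypothetical illustration, not the actual PyTorch codegen: `FakeTensor` and `resize_output` are stand-in names invented for this example. The point it demonstrates is the rule from the diff — the output is restrided from the (advisory) meta strides only when a resize actually occurred; an output passed in with the correct size keeps its preexisting strides.

```python
def contiguous_strides(sizes):
    # Row-major (contiguous) strides for the given sizes.
    strides = [1] * len(sizes)
    for i in range(len(sizes) - 2, -1, -1):
        strides[i] = strides[i + 1] * sizes[i + 1]
    return strides

class FakeTensor:
    # Minimal stand-in for a tensor: just sizes and strides.
    def __init__(self, sizes, strides=None):
        self.sizes = list(sizes)
        self.strides = list(strides) if strides else contiguous_strides(sizes)

def resize_output(out, sizes, suggested_strides):
    # Resize only when the sizes don't already match.
    resized = out.sizes != sizes
    if resized:
        out.sizes = list(sizes)
        # Only restride if a resize occurred; otherwise the suggested
        # (meta) strides are advisory and the output's strides are kept.
        out.strides = list(suggested_strides)
    return out

# Zero-sized out: resized, so it takes the suggested (channels-last-style) strides.
empty = FakeTensor([0])
resize_output(empty, [2, 2, 2, 2], [8, 1, 4, 2])

# Correctly-sized contiguous out: not resized, so its strides are untouched.
full = FakeTensor([2, 2, 2, 2])
resize_output(full, [2, 2, 2, 2], [8, 1, 4, 2])
```

This mirrors the two cases in the snippet above: the zero-sized output ends up with the suggested strides, while the pre-sized output keeps its contiguous strides.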

Collaborator

Expected out= behavior is described here: https://github.com/pytorch/pytorch/wiki/Developer-FAQ#how-does-out-work-in-pytorch; it is what you describe. Thanks for calling it out explicitly, and you are correct that the previous behavior of upsample didn't conform to those rules.

Comment thread: test/test_torch.py
# TODO: this is not a sustainable way of testing meta functions,
# but I want some quick scaffolding first before a more
# integrated testing strategy
# TODO: this test should be triggered by test_nn.py but right
Collaborator

Probably these tests belong in test_nn, but no need to change this PR.

Fixes #53587, see issue for in-depth explanation of the bug.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

[ghstack-poisoned]
ezyang added a commit that referenced this pull request Mar 11, 2021
Fixes #53587, see issue for in-depth explanation of the bug.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

ghstack-source-id: e43be99
Pull Request resolved: #53759
Fixes #53587, see issue for in-depth explanation of the bug.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: [D26971342](https://our.internmc.facebook.com/intern/diff/D26971342)

[ghstack-poisoned]
ezyang added 2 commits March 14, 2021 08:30
Fixes #53587, see issue for in-depth explanation of the bug.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: [D26971342](https://our.internmc.facebook.com/intern/diff/D26971342)

[ghstack-poisoned]
@facebook-github-bot
Contributor

@ezyang merged this pull request in 547f435.

@facebook-github-bot facebook-github-bot deleted the gh/ezyang/941/head branch March 18, 2021 14:16
xsacha pushed a commit to xsacha/pytorch that referenced this pull request Mar 31, 2021
Summary:
Pull Request resolved: pytorch#53759

Fixes pytorch#53587, see issue for in-depth explanation of the bug.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D26971342

Pulled By: ezyang

fbshipit-source-id: 805983fed2658e27fb033f36a71fd30950a29328
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026

5 participants