
update upsample tests in test_nn.py to test for memory_format#53665

Closed
bdhirsh wants to merge 3 commits into gh/bdhirsh/87/base from gh/bdhirsh/87/head

Conversation

@bdhirsh
Collaborator

@bdhirsh bdhirsh commented Mar 9, 2021

@ngimel pointed out to me where we already test the behavior of the Upsample ops in test_nn.py. This PR deletes my bespoke tests in test_torch.py and updates the tests in test_nn.py to check memory format properly.

There were two reasons the original test didn't pick up on a memory format regression:

  • They didn't test the memory format of the output tensor explicitly, i.e. output.is_contiguous(memory_format=...)
  • Even with that change, the test tensors were too simple to fail the tests. From some trial and error, it looks like one of the first two dimensions in the inputs needs to be > 1 in order for the channels_last memory format to actually re-order the strides.
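The second bullet is easy to see directly. A minimal sketch (shapes chosen for illustration, not taken from the test suite): a tensor whose channel dimension is 1 satisfies the channels_last stride check trivially, so (1, 1, H, W) inputs can never expose a layout regression.

```python
import torch

# With C == 1, the channels_last stride check skips the size-1 channel dim,
# so a plain contiguous tensor already "looks" channels_last-contiguous.
a = torch.ones(1, 1, 4, 4)
assert a.is_contiguous(memory_format=torch.channels_last)  # ambiguous layout

# With C > 1 the two layouts genuinely differ...
b = torch.ones(1, 2, 4, 4)
assert not b.is_contiguous(memory_format=torch.channels_last)

# ...and converting actually re-orders the strides (NCHW sizes, NHWC order).
c = b.contiguous(memory_format=torch.channels_last)
assert c.stride() == (32, 1, 8, 2)
```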

Stack from ghstack:

Differential Revision: D26929683

@facebook-github-bot
Contributor

facebook-github-bot commented Mar 9, 2021

💊 CI failures summary and remediations

As of commit 981e490 (more details on the Dr. CI page):


  • 2/2 failures possibly* introduced in this PR
    • 2/2 non-scanned failure(s)

ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

Comment thread: test/test_nn.py
with warnings.catch_warnings(record=True) as w:
out_t = m(in_t)
self.assertEqual(torch.ones(1, 1, out_size, out_size), out_t.data)
for memory_format in [torch.contiguous_format, torch.channels_last]:
Collaborator Author


GitHub displays this with a lot of red and green, but all I really did was add a for loop here to loop over the different memory formats. Then further down I assert that the output is contiguous w.r.t the expected memory format.
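The change described here can be sketched roughly as follows — a simplified stand-in for the real test, with a made-up input shape; the actual assertions live in test_nn.py:

```python
import torch
import torch.nn as nn

# Loop over memory formats; for each, check both the output values and that
# the output is contiguous w.r.t. the same memory format as the input.
m = nn.Upsample(scale_factor=2, mode='nearest')
for memory_format in [torch.contiguous_format, torch.channels_last]:
    in_t = torch.ones(1, 2, 2, 2).contiguous(memory_format=memory_format)
    out_t = m(in_t)
    assert torch.equal(out_t, torch.ones(1, 2, 4, 4))        # values
    assert out_t.is_contiguous(memory_format=memory_format)  # layout preserved
```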

…mat"

bdhirsh added a commit that referenced this pull request Mar 9, 2021
@bdhirsh bdhirsh requested a review from ngimel March 9, 2021 23:45
Collaborator

@ngimel ngimel left a comment


Thanks! Please get rid of .data attributes, otherwise looks good.

Comment thread: test/test_nn.py (outdated)
out_uint8_t = m(in_uint8_t)
self.assertEqual(torch.ones(1, 1, 4, 4).contiguous(memory_format=memory_format), out_t.data)
self.assertEqual(torch.ones(1, 1, 4, 4, dtype=torch.uint8).contiguous(memory_format=memory_format), out_uint8_t.data)
self.assertEqual(torch.ones(1, 2, 4, 4).contiguous(memory_format=memory_format), out_t.data)
Collaborator


nit: .data is a deprecated API, and you don't need to call .contiguous() on the expected tensor, so

self.assertEqual(torch.ones(...), out_t)
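The suggestion works because tensor equality compares values, not strides, so the expected tensor never needs a .contiguous(memory_format=...) call. A small illustration (shape chosen arbitrarily):

```python
import torch

expected = torch.ones(1, 2, 4, 4)  # default NCHW-contiguous strides
out_t = expected.contiguous(memory_format=torch.channels_last)

assert expected.stride() != out_t.stride()  # the layouts differ...
assert torch.equal(expected, out_t)         # ...but value comparison passes
```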

Comment thread: test/test_nn.py (outdated)
self.assertEqual(torch.ones(1, 1, 4, 4).contiguous(memory_format=memory_format), out_t.data)
self.assertEqual(torch.ones(1, 1, 4, 4, dtype=torch.uint8).contiguous(memory_format=memory_format), out_uint8_t.data)
self.assertEqual(torch.ones(1, 2, 4, 4).contiguous(memory_format=memory_format), out_t.data)
self.assertEqual(torch.ones(1, 2, 4, 4, dtype=torch.uint8).contiguous(memory_format=memory_format), out_uint8_t.data)
Collaborator


same comment here

Comment thread: test/test_nn.py
self.assertEqual(torch.ones(1, 2, 4, 4).contiguous(memory_format=memory_format), out_t.data)
self.assertEqual(torch.ones(1, 2, 4, 4, dtype=torch.uint8).contiguous(memory_format=memory_format), out_uint8_t.data)
# Assert that memory format is carried through to the output
self.assertTrue(out_t.is_contiguous(memory_format=memory_format))
Collaborator


That's good!

Comment thread: test/test_nn.py (outdated)
with warnings.catch_warnings(record=True) as w:
out_t = m(in_t)
self.assertEqual(torch.ones(1, 1, 4, 2).contiguous(memory_format=memory_format), out_t.data)
self.assertEqual(torch.ones(1, 2, 4, 2).contiguous(memory_format=memory_format), out_t.data)
Collaborator


probably makes sense to test memory_format here too?

bdhirsh added a commit that referenced this pull request Mar 10, 2021
@ezyang
Contributor

ezyang commented Mar 10, 2021

Thanks a lot for following up on this!

@codecov

codecov Bot commented Mar 10, 2021

Codecov Report

Merging #53665 (981e490) into gh/bdhirsh/87/base (370bcc8) will increase coverage by 0.00%.
The diff coverage is n/a.

@@                 Coverage Diff                 @@
##           gh/bdhirsh/87/base   #53665   +/-   ##
===================================================
  Coverage               77.64%   77.65%           
===================================================
  Files                    1869     1869           
  Lines                  182291   182291           
===================================================
+ Hits                   141548   141549    +1     
+ Misses                  40743    40742    -1     

@facebook-github-bot
Contributor

@bdhirsh merged this pull request in c68cc24.

@facebook-github-bot facebook-github-bot deleted the gh/bdhirsh/87/head branch March 14, 2021 14:14
xsacha pushed a commit to xsacha/pytorch that referenced this pull request Mar 31, 2021
…h#53665)

Summary:
Pull Request resolved: pytorch#53665

ngimel pointed out to me where we already test the behavior of the `Upsample` ops in `test_nn.py`. This PR deletes my bespoke tests in `test_torch.py` and updates the tests in `test_nn.py` to check memory format properly.

There were two reasons the original test didn't pick up on a memory format regression:
- They didn't test the memory format of the output tensor explicitly, i.e. `output.is_contiguous(memory_format=...)`
- Even with that change, the test tensors were too simple to fail the tests. From some trial and error, it looks like one of the first two dimensions in the inputs needs to be > 1 in order for the `channels_last` memory format to actually re-order the strides.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D26929683

Pulled By: bdhirsh

fbshipit-source-id: d17bc660ff031e9b3e2c93c60a9e9308e56ea612
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
4 participants