update upsample tests in test_nn.py to test for memory_format #53665

bdhirsh wants to merge 3 commits into gh/bdhirsh/87/base from
Conversation
[ghstack-poisoned]
💊 CI failures summary and remediations — as of commit 981e490: ci.pytorch.org: 1 failed. (This comment was automatically generated by Dr. CI.)
```python
with warnings.catch_warnings(record=True) as w:
    out_t = m(in_t)
    self.assertEqual(torch.ones(1, 1, out_size, out_size), out_t.data)
for memory_format in [torch.contiguous_format, torch.channels_last]:
```
GitHub displays this with a lot of red and green, but all I really did was add a for loop here to loop over the different memory formats. Further down I assert that the output is contiguous w.r.t. the expected memory format.
@ngimel pointed out to me where we already test the behavior of the `Upsample` ops in `test_nn.py`. This PR deletes my bespoke tests in `test_torch.py` and updates those in `test_nn.py` to test memory format properly. There were two reasons the original tests didn't pick up on a memory format regression:

- They didn't test the memory format of the output tensor explicitly, i.e. `output.is_contiguous(memory_format=...)`
- Even with that change, the test tensors were too simple to fail the tests. From some trial and error, it looks like one of the first two dimensions in the inputs needs to be > 1 in order for the `channels_last` memory format to actually re-order the strides.

Differential Revision: [D26929683](https://our.internmc.facebook.com/intern/diff/D26929683)
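To make the "too simple" point concrete, here is a small torch-free sketch (the helper `strides_for` is hypothetical, not part of the PR or of PyTorch) that computes dense strides for the NCHW vs. NHWC dimension orders. For a `(1, 1, 4, 4)` input the two layouts only disagree on strides of size-1 dimensions, which never affect addressing, so the old test tensors could not expose a layout regression; with `C = 2` the layouts genuinely differ.

```python
def strides_for(shape, order):
    """Compute dense element strides for `shape` laid out with the given
    dimension order (a permutation of dimension indices, innermost last)."""
    strides = [0] * len(shape)
    step = 1
    for dim in reversed(order):
        strides[dim] = step
        step *= shape[dim]
    return tuple(strides)

# contiguous_format lays NCHW out with W innermost; channels_last uses NHWC.
NCHW = (0, 1, 2, 3)
NHWC = (0, 2, 3, 1)

# For the original (1, 1, 4, 4) inputs, the layouts only differ on size-1
# dimensions, whose strides are irrelevant to addressing:
print(strides_for((1, 1, 4, 4), NCHW))  # (16, 16, 4, 1)
print(strides_for((1, 1, 4, 4), NHWC))  # (16, 1, 4, 1)

# With C = 2 the layouts genuinely re-order the strides:
print(strides_for((1, 2, 4, 4), NCHW))  # (32, 16, 4, 1)
print(strides_for((1, 2, 4, 4), NHWC))  # (32, 1, 8, 2)
```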
ngimel left a comment:

Thanks! Please get rid of `.data` attributes, otherwise looks good.
```python
out_uint8_t = m(in_uint8_t)
self.assertEqual(torch.ones(1, 1, 4, 4).contiguous(memory_format=memory_format), out_t.data)
self.assertEqual(torch.ones(1, 1, 4, 4, dtype=torch.uint8).contiguous(memory_format=memory_format), out_uint8_t.data)
self.assertEqual(torch.ones(1, 2, 4, 4).contiguous(memory_format=memory_format), out_t.data)
```
nit: `.data` is deprecated API, and you don't need to call `.contiguous()` on the expected tensor, so:

`self.assertEqual(torch.ones(...), out_t)`
```python
self.assertEqual(torch.ones(1, 1, 4, 4).contiguous(memory_format=memory_format), out_t.data)
self.assertEqual(torch.ones(1, 1, 4, 4, dtype=torch.uint8).contiguous(memory_format=memory_format), out_uint8_t.data)
self.assertEqual(torch.ones(1, 2, 4, 4).contiguous(memory_format=memory_format), out_t.data)
self.assertEqual(torch.ones(1, 2, 4, 4, dtype=torch.uint8).contiguous(memory_format=memory_format), out_uint8_t.data)
# Assert that memory format is carried through to the output
self.assertTrue(out_t.is_contiguous(memory_format=memory_format))
```
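The `is_contiguous(memory_format=...)` check above skips size-1 dimensions when validating strides, which is exactly why the original `(1, 1, H, W)` tensors passed in both formats. The sketch below is a rough, torch-free approximation of that rule (the helper `is_contiguous_in` is hypothetical and only illustrates the idea, not PyTorch's actual implementation):

```python
def is_contiguous_in(shape, strides, order):
    """Roughly mimic torch.Tensor.is_contiguous(memory_format=...):
    walk dimensions from innermost to outermost in the given order,
    skipping size-1 dimensions (they place no constraint on strides),
    and check that each remaining stride is the expected dense stride."""
    expected = 1
    for dim in reversed(order):
        if shape[dim] == 1:
            continue  # size-1 dims can carry any stride
        if strides[dim] != expected:
            return False
        expected *= shape[dim]
    return True

NCHW, NHWC = (0, 1, 2, 3), (0, 2, 3, 1)

# A (1, 1, 4, 4) tensor with plain NCHW strides counts as contiguous in
# BOTH formats, so the old tests could never see a regression:
print(is_contiguous_in((1, 1, 4, 4), (16, 16, 4, 1), NCHW))  # True
print(is_contiguous_in((1, 1, 4, 4), (16, 16, 4, 1), NHWC))  # True

# Once C > 1, NCHW strides are no longer channels_last-contiguous:
print(is_contiguous_in((1, 2, 4, 4), (32, 16, 4, 1), NHWC))  # False
```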
```python
with warnings.catch_warnings(record=True) as w:
    out_t = m(in_t)
    self.assertEqual(torch.ones(1, 1, 4, 2).contiguous(memory_format=memory_format), out_t.data)
    self.assertEqual(torch.ones(1, 2, 4, 2).contiguous(memory_format=memory_format), out_t.data)
```
probably makes sense to test memory_format here too?
Thanks a lot for following up on this!
Codecov Report

```
@@            Coverage Diff             @@
##   gh/bdhirsh/87/base   #53665   +/- ##
=========================================
  Coverage       77.64%   77.65%
=========================================
  Files            1869     1869
  Lines          182291   182291
=========================================
+ Hits           141548   141549     +1
+ Misses          40743    40742     -1
```
Summary: Pull Request resolved: pytorch#53665

ngimel pointed out to me where we already test the behavior of the `Upsample` ops in `test_nn.py`. This PR deletes my bespoke tests in `test_torch.py` and updates those in `test_nn.py` to test memory format properly. There were two reasons the original tests didn't pick up on a memory format regression:

- They didn't test the memory format of the output tensor explicitly, i.e. `output.is_contiguous(memory_format=...)`
- Even with that change, the test tensors were too simple to fail the tests. From some trial and error, it looks like one of the first two dimensions in the inputs needs to be > 1 in order for the `channels_last` memory format to actually re-order the strides.

Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D26929683
Pulled By: bdhirsh
fbshipit-source-id: d17bc660ff031e9b3e2c93c60a9e9308e56ea612