Move std and var tests to OpInfos #50901
peterbell10 wants to merge 9 commits into gh/peterbell10/45/base from
Conversation
[ghstack-poisoned]
💊 CI failures summary and remediations — as of commit 3f36143 (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚 (This comment was automatically generated by Dr. CI.)
```python
test_backward = (
    (dtype.is_complex and op.test_complex_grad) or
    (dtype.is_floating_point and (not op.skip_bfloat16_grad or dtype != torch.bfloat16)))
samples = op.sample_inputs(device, dtype, requires_grad=test_backward)
```
std and var aren't whitelisted for complex autograd, so when test_complex_grad=False we need to never call backward() at all. Why wasn't this here already? It looks like sigmoid is the only other user of test_complex_grad, and it explicitly skips these two JIT tests:

pytorch/torch/testing/_internal/common_methods_invocations.py, lines 1608 to 1612 in 7fdc6a2
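The dtype gating discussed above can be modeled as a torch-free predicate. This is a minimal sketch; the function name and boolean parameters are illustrative stand-ins for the `dtype`/`op` attributes in test_ops.py, not code from the PR:

```python
def should_test_backward(is_complex, is_floating_point, is_bfloat16,
                         test_complex_grad, skip_bfloat16_grad):
    # Mirrors the test_backward expression: complex dtypes only get
    # backward() when the op opts in via test_complex_grad, and bfloat16
    # is excluded when skip_bfloat16_grad is set.
    return ((is_complex and test_complex_grad) or
            (is_floating_point and (not skip_bfloat16_grad or not is_bfloat16)))

# std/var set test_complex_grad=False, so complex inputs never request grads:
print(should_test_backward(True, False, False, False, False))   # False
# float32 still tests backward even when bfloat16 grads are skipped:
print(should_test_backward(False, True, False, False, True))    # True
```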
These changes to the JIT testing look good to me; it was a bad oversight. Thanks for catching it!
Codecov Report

```
@@            Coverage Diff             @@
##   gh/peterbell10/45/base   #50901   +/- ##
=======================================================
  Coverage    80.91%    80.92%
=======================================================
  Files         1924      1926      +2
  Lines       210008    210019     +11
=======================================================
+ Hits        169923    169950     +27
+ Misses       40085     40069     -16
```
```python
samples = op.sample_inputs(device, dtype, requires_grad=True)
test_backward = (
    (dtype.is_complex and op.test_complex_grad) or
    (dtype.is_floating_point and (not op.skip_bfloat16_grad or dtype != torch.bfloat16)))
```
```python
supports_tensor_out=False,
test_complex_grad=False,
test_inplace_grad=False,
skips=(SkipInfo('TestGradients', 'test_fn_gradgrad'),
```
Would you elaborate on these two skips?
I removed the first skip; after reviewing it I realised the tests pass as-is (probably fixed by the test_complex_grad changes here). As for the second skip: the CPU versions of std and var have only partial support for half and complex dtypes, because they sometimes call into the old TH implementation for all-reductions:

pytorch/aten/src/ATen/native/ReduceOps.cpp, lines 1151 to 1156 in ac0a3cc

So for the OpInfo, Half, ComplexFloat and ComplexDouble are declared as unsupported on CPU, even though not all sample inputs will raise.
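The "declared unsupported, but not every sample raises" situation can be sketched with a torch-free model of an unsupported-dtype check. The helper names and error text below are assumptions for illustration, not PyTorch API:

```python
def unsupported_dtype_is_flagged(sample_fns):
    # Model of an unsupported-dtype test: the declaration is justified if at
    # least one sample input raises, even when other samples happen to work.
    raised = False
    for fn in sample_fns:
        try:
            fn()
        except RuntimeError:
            raised = True
    return raised

def half_all_reduce():
    # Stand-in for the TH all-reduction path that lacks half/complex support
    raise RuntimeError('"std" not implemented for \'Half\'')

# Dim-reductions succeed while the all-reduction raises, so the dtype still
# counts as unsupported overall:
samples = [lambda: 0.5, half_all_reduce]
print(unsupported_dtype_is_flagged(samples))  # True
```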
That makes sense. Would you file an issue and add a comment citing it?
```python
def sample_inputs_std_var(op_info, device, dtype, requires_grad):
    tensor_nd = torch.rand(S, S, S, device=device, dtype=dtype, requires_grad=requires_grad)
```
Why rand and not make_tensor?
Changed to make_tensor now.
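For context on the make_tensor suggestion: unlike torch.rand, which only draws values from [0, 1), make_tensor lets sample-input helpers control the value range uniformly across dtypes. A torch-free stand-in sketches the idea (the function name and defaults are illustrative, not the torch.testing API):

```python
import random

def make_values(shape, low=-9.0, high=9.0):
    # Torch-free stand-in for make_tensor: builds a nested list of the given
    # shape with values drawn from [low, high), instead of rand's fixed [0, 1).
    if not shape:
        return random.uniform(low, high)
    return [make_values(shape[1:], low, high) for _ in range(shape[0])]

t = make_values((2, 3))
print(len(t), len(t[0]))  # 2 3
```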
mruberry left a comment
Hey @peterbell10!
The cleanup in test_ops.py looks great. I have a suggestion and a couple of questions about the OpInfos, though. They should be simple to resolve.
@mruberry PTAL.
ghstack-source-id: 8a65fb5
Pull Request resolved: pytorch#50901

Summary: Pull Request resolved: pytorch#50901
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D26083289
Pulled By: mruberry
fbshipit-source-id: 7e14ff37bba46dd456e0bc0aa9c4e0a632d0734c
Stack from ghstack:
Differential Revision: D26083289