Adds reference vs. noncontiguous OpInfo test #67434
Conversation
CI status: as of commit 52b5109, Dr. CI reports no failures.
This will still require a follow-up issue to evaluate whether noncontiguous sample inputs should be used in more contexts (like when testing autograd) and whether they can be removed from existing sample input functions.
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
                        requires_grad=requires_grad),
            args=(make_tensor(second_shape, device, dtype,
                              requires_grad=requires_grad),)))
SampleInput(make_arg(first_shape).requires_grad_(requires_grad),
nit: make_arg(first_shape, requires_grad=requires_grad); partial accepts keyword arguments.
Nice catch -- fixed.
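The fix in this exchange relies on `functools.partial` forwarding extra keyword arguments to the callable it wraps. A minimal stdlib sketch, using a hypothetical `make_tensor_stub` in place of the real `make_tensor` (which this example does not depend on):

```python
from functools import partial

def make_tensor_stub(shape, device, dtype, requires_grad=False):
    # hypothetical stand-in for make_tensor, returning a plain record
    return {"shape": shape, "device": device, "dtype": dtype,
            "requires_grad": requires_grad}

# bind the arguments shared by every sample once...
make_arg = partial(make_tensor_stub, device="cpu", dtype="float32")

# ...then pass per-sample keywords directly; partial forwards them
sample = make_arg((3, 4), requires_grad=True)
```

This is why `make_arg(first_shape, requires_grad=requires_grad)` works without chaining `.requires_grad_(...)` afterward.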
for tensor, idx, source, a in product([t, t_nonctg], [idx, idx_nonctg], [s, s_nonctg], [-1, 0, 2]))

samples = [SampleInput(t.detach().clone().requires_grad_(requires_grad),
                       args=(1, idx.detach().clone(), s.detach().clone()))]
did you lose s's requires_grad here? Would autograd tests normally check all inputs that require grad?
Great catch -- yes, I dropped it on the s (source) tensor.
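The pattern under discussion can be sketched as follows. This is an illustration only (it assumes `torch` is importable; the tensor shapes are made up), showing why `.detach().clone().requires_grad_(...)` yields an independent leaf for each sample, including the source tensor `s`:

```python
import torch

t = torch.randn(3, 3, requires_grad=True)
s = torch.randn(3, 2, requires_grad=True)

# detach() drops autograd history, clone() breaks storage sharing,
# and requires_grad_(True) re-marks the copy as a differentiable leaf
t_copy = t.detach().clone().requires_grad_(True)
s_copy = s.detach().clone().requires_grad_(True)  # keep s's requires_grad
```

Without the final `requires_grad_(True)` on `s_copy`, autograd tests would silently skip checking gradients for the source tensor, which is the bug the comment above caught.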
…h into noncontiguous_tests
Summary: Fixes pytorch#63341.

This PR adds a new test, `test_noncontigous_samples`, that runs ops forward and backward and compares their outputs and grads between "normal" contiguous SampleInputs and noncontiguous SampleInputs. This test should preclude the need for noncontiguous SampleInputs going forward.

The test was added by generalizing the `.numpy()` transform on SampleInputs to support a new `.noncontiguous()` transform and copying forward/backward patterns from other tests in test_ops.py. It also discovered that many SampleInputs were incorrectly reusing tensors, so those have been revised. SampleInputs creating noncontiguous tensors for testing have also been altered to no longer do so.

In addition, this test discovered the following high priority silent correctness issues:

- pytorch#67432
- pytorch#67517
- pytorch#67513
- pytorch#67512
- pytorch#67470

It also identified the following issue:

- pytorch#67539

The pow OpInfo also incorrectly specified that pow supported the bool datatype; this has been fixed. Its SampleInputs were written in a way that made requests for boolean SampleInputs return type-promoting inputs that never actually tried to compute pow in bool.

This PR suggests we should add the following guidance for writing SampleInputs:

- ensure that all SampleInputs are independent of each other (don't reuse tensors)
- ensure that all SampleInput tensors have no grad or backward functions (no autograd history); they should be leaves
- prefer keeping sample inputs simple where possible; a good set of handwritten samples that test interesting cases may be better than an exhaustive but hard-to-read-and-maintain programmatic enumeration
- keep code readable by using functools.partial and writing simple inline helpers; break up large statements into a more readable series of smaller statements; especially don't write complicated generator expressions with a `for` at the end!
fyi @kshitij12345 @krshrimali @pmeier @anjali411 @saketh-are @zou3519 @dagitses

Pull Request resolved: pytorch#67434
Reviewed By: ngimel
Differential Revision: D32014557
Pulled By: mruberry
fbshipit-source-id: b17e19adc1d41e24441f0765af13d381fef5e3c1
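The PR's `.noncontiguous()` transform is not reproduced here, but the underlying idea can be sketched. As a hedged illustration (assuming `torch` is importable; the helper name `noncontiguous_copy` is hypothetical, not the PR's actual implementation), one way to build a value-equal but noncontiguous copy of a tensor is:

```python
import torch

def noncontiguous_copy(t):
    # duplicate each element along the last dimension, then view every
    # other element: same values, but the last-dim stride becomes 2,
    # so the result reports is_contiguous() == False
    out = torch.repeat_interleave(t.detach(), 2, dim=-1)[..., ::2]
    return out.requires_grad_(t.requires_grad)

x = torch.arange(6.0).reshape(2, 3)
y = noncontiguous_copy(x)
```

Running an op on both `x` and `y` and comparing outputs (and grads) is the comparison the new `test_noncontigous_samples` test performs for contiguous vs. noncontiguous SampleInputs.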