Add complex autograd support and OpInfo based test for torch.addr #50667
anjali411 wants to merge 6 commits into gh/anjali411/85/base
Conversation
Update "Add complex autograd support and OpInfo based test for torch.addr" [ghstack-poisoned]
Two builds are failing because of test_variant_consistency_jit_addr_cuda_bfloat16. @zasdfgbnm @ngimel, we have a skip to recommend for this, right? Maybe we should think about giving that skip a clearer name so it's easier to read and discover. Update: actually, never mind. We should fix this in that test and not bother with a more precise skip. @anjali411, would you just add this dtype to the skip list?
```python
OpInfo('addr',
       dtypes=all_types_and_complex_and(torch.bool, torch.bfloat16, torch.float16),
       skips=(
           SkipInfo('TestCommon', 'test_variant_consistency_jit',
```
The bfloat16 CUDA skip just needs to be added.
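For concreteness, a minimal sketch of what that skip entry could look like once the bfloat16 CUDA case is added (the exact `device_type`/`dtypes` arguments are an assumption for illustration, not necessarily the merged code):

```python
OpInfo('addr',
       dtypes=all_types_and_complex_and(torch.bool, torch.bfloat16, torch.float16),
       skips=(
           # Assumed shape of the fix discussed above: skip the JIT
           # variant-consistency test for bfloat16 on CUDA, where the
           # builds are currently failing.
           SkipInfo('TestCommon', 'test_variant_consistency_jit',
                    device_type='cuda', dtypes=[torch.bfloat16]),
       )),
```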
Are you suggesting adding the sample inputs that test broadcasting? I didn't delete the existing ones.
Yes.
That makes sense, but I think it's OK to move and skip in this case. We don't need to update addmm here.
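For context, here is a sketch of how broadcasting could be exercised in a sample-inputs helper for addr (the helper name, shapes, and exact `SampleInput` construction are illustrative assumptions, not the code merged in this PR):

```python
import torch
from torch.testing._internal.common_methods_invocations import SampleInput

def sample_inputs_addr(op_info, device, dtype, requires_grad):
    # Illustrative sketch (floating/complex dtypes): addr computes
    # beta * input + alpha * outer(vec1, vec2), so `input` must
    # broadcast against the (len(vec1), len(vec2)) result shape.
    def make(shape):
        return torch.randn(shape, device=device, dtype=dtype,
                           requires_grad=requires_grad)

    return [
        # input already has the full (3, 2) outer-product shape
        SampleInput(make((3, 2)), args=(make((3,)), make((2,)))),
        # broadcasting: a scalar input expands to (3, 2)
        SampleInput(make(()), args=(make((3,)), make((2,)))),
        # broadcasting: a (1, 2) row input expands to (3, 2)
        SampleInput(make((1, 2)), args=(make((3,)), make((2,)))),
    ]
```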
Update "Add complex autograd support and OpInfo based test for torch.addr" Differential Revision: [D25957584](https://our.internmc.facebook.com/intern/diff/D25957584) [ghstack-poisoned]
@anjali411 merged this pull request in 1cc8f8a.
Add complex autograd support and OpInfo based test for torch.addr (pytorch#50667)

Summary: Pull Request resolved: pytorch#50667
Test Plan: Imported from OSS
Reviewed By: pbelevich
Differential Revision: D25957584
Pulled By: anjali411
fbshipit-source-id: a6b2880971027389721f4e051009b7d9694f979b
Stack from ghstack:
Differential Revision: D25957584
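As a usage note on what this change enables, complex inputs to torch.addr can now be verified with autograd's numerical checker (a sketch assuming standard torch.autograd.gradcheck usage; not taken from the PR's test plan):

```python
import torch

# gradcheck wants double precision, so use complex128 inputs.
inp = torch.randn(3, 2, dtype=torch.complex128, requires_grad=True)
vec1 = torch.randn(3, dtype=torch.complex128, requires_grad=True)
vec2 = torch.randn(2, dtype=torch.complex128, requires_grad=True)

# addr(inp, vec1, vec2) = inp + outer(vec1, vec2); this check
# passes once complex autograd support for addr is in place.
torch.autograd.gradcheck(torch.addr, (inp, vec1, vec2))
```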