Add scalar.conj() and update backward formulas for add and sub #46596

anjali411 wants to merge 9 commits into gh/anjali411/66/base from
Conversation
[ghstack-poisoned]
… sub" 1. Added `conj` method for scalar similar to numpy. 2. Updates backward formulas for add and sub to work correctly for R -> C cases and for the case when alpha is complex. 3. Enabled complex backward for nonzero (no formula update needed). [ghstack-poisoned]
… sub" 1. Added `conj` method for scalar similar to numpy. 2. Updates backward formulas for add and sub to work correctly for R -> C cases and for the case when alpha is complex. 3. Enabled complex backward for nonzero (no formula update needed). [ghstack-poisoned]
💊 CI failures summary and remediations

As of commit 5b6d3a7 (more details on the Dr. CI page):

✅ None of the CI failures appear to be your fault 💚
❄️ 1 failure tentatively classified as flaky, but reruns have not yet been triggered to confirm.
… sub" 1. Added `conj` method for scalar similar to numpy. 2. Updates backward formulas for add and sub to work correctly for R -> C cases and for the case when alpha is complex. 3. Enabled complex backward for nonzero (no formula update needed). [ghstack-poisoned]
```cpp
if (isComplex()) {
  return Scalar(std::conj(v.z));
} else if (isFloatingPoint()) {
  return Scalar(v.d);
```
Can't you just return `*this` here?
ezyang left a comment
At some point, when you're not in a rush, it would be good to use @robieta's instruction-count tools to assess what the non-complex overheads of the new complex idioms and conjugations are. It will be a little tricky to set up since you have to go through autograd, but we might be able to set up a microbenchmark where we just call the backward function directly. That will help us understand how urgent extra optimizing here is (I know @ngimel is nervous about the conj additions here).
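(For anyone picking this up later: a minimal sketch of what such an instruction-count microbenchmark could look like with `torch.utils.benchmark`; the op, tensor sizes, and iteration count below are illustrative assumptions, not something prescribed in this PR.)

```python
import torch
from torch.utils.benchmark import Timer

# Instruction counts for a real-valued add + backward, to gauge the
# non-complex overhead of the new conj/handle_r_to_c code paths.
# Sizes and the measured statement are arbitrary choices for the sketch.
timer = Timer(
    stmt="(a + b).sum().backward()",
    setup="""
a = torch.randn(100, requires_grad=True)
b = torch.randn(100, requires_grad=True)
""",
)

# collect_callgrind runs the statement under Valgrind and reports
# deterministic instruction counts (valgrind must be installed).
stats = timer.collect_callgrind(number=100)
print(stats)
```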
mruberry left a comment
Test changes look good. The test failure doesn't appear to be related.
```diff
 - name: sub.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor
-  self: grad
   other: -grad * alpha
+  self: handle_r_to_c(self.scalar_type(), grad)
```
Why do we need this? This seems pretty similar to pytorch/torch/csrc/autograd/engine.cpp (line 611 in e02a3e1). Doesn't `.to` work for C->R?
`.to` works for C->R. Currently, `handle_r_to_c` returns a (real) view of the final gradient value in case the gradient is complex and the input is not.
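For illustration, a minimal sketch of the R -> C case being discussed (assuming a build that includes this PR's changes; the shapes and the alpha value are arbitrary):

```python
import torch

# Real input combined with a complex tensor via a complex alpha:
# the output is complex, but the gradient w.r.t. the real input
# has to be real, which is what handle_r_to_c takes care of.
x = torch.randn(3, dtype=torch.double, requires_grad=True)
y = torch.randn(3, dtype=torch.cdouble)

out = torch.add(y, x, alpha=1 + 2j)   # complex output
out.backward(torch.ones_like(out))    # explicit complex grad_output

print(x.grad.dtype)  # torch.float64 -- real, not complex
```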
I think that's a great idea! Is there a doc to help with the setup?
… sub" 1. Added `conj` method for scalar similar to numpy. 2. Updates backward formulas for add and sub to work correctly for R -> C cases and for the case when alpha is complex. 3. Enabled complex backward for nonzero (no formula update needed). [ghstack-poisoned]
Hopefully Taylor's talk today will help? He's also been working on written docs.
… sub" 1. Added `conj` method for scalar similar to numpy. 2. Updates backward formulas for add and sub to work correctly for R -> C cases and for the case when alpha is complex. 3. Enabled complex backward for nonzero (no formula update needed). Differential Revision: [D24529227](https://our.internmc.facebook.com/intern/diff/D24529227) [ghstack-poisoned]
… sub" 1. Added `conj` method for scalar similar to numpy. 2. Updates backward formulas for add and sub to work correctly for R -> C cases and for the case when alpha is complex. 3. Enabled complex backward for nonzero (no formula update needed). Differential Revision: [D24529227](https://our.internmc.facebook.com/intern/diff/D24529227) [ghstack-poisoned]
… sub" 1. Added `conj` method for scalar similar to numpy. 2. Updates backward formulas for add and sub to work correctly for R -> C cases and for the case when alpha is complex. 3. Enabled complex backward for nonzero (no formula update needed). Differential Revision: [D24529227](https://our.internmc.facebook.com/intern/diff/D24529227) [ghstack-poisoned]
… sub" 1. Added `conj` method for scalar similar to numpy. 2. Updates backward formulas for add and sub to work correctly for R -> C cases and for the case when alpha is complex. 3. Enabled complex backward for nonzero (no formula update needed). Differential Revision: [D24529227](https://our.internmc.facebook.com/intern/diff/D24529227) [ghstack-poisoned]
@anjali411 merged this pull request in cedeee2.
```python
('add', (), ((S, S, S),), 'scalar_broadcast_lhs', (True,)),
('add', (S, S, S), (3.14,), 'constant', (True,)),
('add', (), (3.14,), 'scalar_constant', (True,)),
('add', (S, S, S), (3.14j,), 'complex_scalar_constant', (True,)),
```
Does this actually call into the scalar overload? I think the Python binding is converting these to Tensors all the time.
You can double-check this by doing:

```python
import torch

a = torch.rand(10, requires_grad=True)
print((a + a).grad_fn)
print((a + 1.).grad_fn)
```

That will print

```
<AddBackward0 object at 0x7fc1c1446150>
<AddBackward0 object at 0x7fc1c2f14f50>
```

The important part is the `0` here: it means that both are actually using the 0th overload (the one for Tensor inputs).
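Relatedly, a small sketch of what the new `complex_scalar_constant` case exercises end to end (again assuming a build with this PR's changes; the values are illustrative):

```python
import torch

# Real tensor plus a complex Python scalar: an R -> C case.
a = torch.randn(3, dtype=torch.double, requires_grad=True)
out = a + 3.14j      # promoted to a complex tensor
print(out.grad_fn)   # AddBackward0: the Tensor overload, per the note above

out.backward(torch.ones_like(out))
print(a.grad.dtype)  # torch.float64: the gradient stays real
```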
…ch#46596)

Summary: Pull Request resolved: pytorch#46596

1. Added `conj` method for scalar, similar to numpy.
2. Updated backward formulas for add and sub to work correctly for R -> C cases and for the case when alpha is complex.
3. Enabled complex backward for nonzero (no formula update needed).

Test Plan: Imported from OSS

Reviewed By: glaringlee

Differential Revision: D24529227

Pulled By: anjali411

fbshipit-source-id: da871309a6decf5a4ab5c561d5ab35fc66b5273d
Added `conj` method for scalar, similar to numpy.

Stack from ghstack:
Differential Revision: D24529227