Add scalar.conj() and update backward formulas for add and sub #46596

Closed
anjali411 wants to merge 9 commits into gh/anjali411/66/base from gh/anjali411/66/head
Conversation

@anjali411
Contributor

@anjali411 anjali411 commented Oct 20, 2020

  1. Added a `conj` method for Scalar, similar to NumPy's.
  2. Updated the backward formulas for add and sub to work correctly for R -> C cases and for the case when alpha is complex.
  3. Enabled complex backward for nonzero (no formula update needed).
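Item 2 can be illustrated with plain Python complex arithmetic. This is a sketch of the convention, not PyTorch's implementation: under PyTorch's complex autograd convention the stored gradient is dL/d(conj(z)) for a real-valued loss L, so for out = self + alpha * other the gradient flowing back to other picks up conj(alpha), not alpha.

```python
# Sketch (not PyTorch code): VJP for out = self + alpha * other under the
# convention that gradients are dL/d(conj(z)) for a real-valued loss L.
def add_backward(grad_out, alpha):
    grad_self = grad_out                       # d(out)/d(self) = 1
    grad_other = grad_out * alpha.conjugate()  # conj(alpha), not alpha
    return grad_self, grad_other

# Numeric check with L(w) = |s + alpha*w|**2, for which grad_out = out.
alpha, s, w = 2 - 1j, 0.5 + 0.25j, 1 + 3j
out = s + alpha * w
_, grad_w = add_backward(out, alpha)

eps = 1e-6
for dw in (eps, eps * 1j):  # probe the real and imaginary directions
    dL = abs(s + alpha * (w + dw)) ** 2 - abs(out) ** 2
    # First-order model: dL ~= 2 * Re(conj(grad) * dw)
    assert abs(dL - 2 * (grad_w.conjugate() * dw).real) < 1e-4
```

The finite-difference loop checks that the conjugated formula predicts the change of the loss in both the real and imaginary directions; with `alpha` in place of `conj(alpha)` the imaginary-direction check would fail.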

Stack from ghstack:

Differential Revision: D24529227

@anjali411 anjali411 requested review from albanD and gchanan October 20, 2020 17:17
… sub"

1. Added a `conj` method for Scalar, similar to NumPy's.
2. Updated the backward formulas for add and sub to work correctly for R -> C cases and for the case when alpha is complex.
3. Enabled complex backward for nonzero (no formula update needed).

[ghstack-poisoned]
@anjali411 anjali411 added the complex_autograd and module: complex (Related to complex number support in PyTorch) labels Oct 20, 2020
@anjali411 anjali411 requested a review from ezyang October 20, 2020 17:22
@dr-ci

dr-ci Bot commented Oct 20, 2020

💊 CI failures summary and remediations

As of commit 5b6d3a7 (more details on the Dr. CI page):


None of the CI failures appear to be your fault 💚



❄️ 1 failure tentatively classified as flaky

but reruns have not yet been triggered to confirm:

See CircleCI build pytorch_linux_xenial_py3_clang5_android_ndk_r19c_vulkan_x86_32_build (1/1)

Step: "Build" (full log | diagnosis details | 🔁 rerun) ❄️

Error response from daemon: received unexpected HTTP status: 500 Internal Server Error
Parallel backend flags:  
DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-android-ndk-r19c:4d22616f4e3faf830d069137acd2208e7a023625 
Error response from daemon: received unexpected HTTP status: 500 Internal Server Error 


anjali411 added a commit that referenced this pull request Oct 20, 2020
@anjali411 anjali411 requested review from mruberry and zou3519 October 21, 2020 00:45
Comment thread: c10/core/Scalar.cpp (Outdated)

if (isComplex()) {
  return Scalar(std::conj(v.z));
} else if (isFloatingPoint()) {
  return Scalar(v.d);
Contributor


Can't you just return *this here?
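The suggestion can be mimicked in a Python sketch (a hypothetical toy class, not the actual c10::Scalar): for real and integral scalars the conjugate is the value itself, so returning the object unchanged (the Python analog of `return *this`) avoids constructing a copy.

```python
class Scalar:
    """Toy stand-in for c10::Scalar, just to illustrate the review point."""
    def __init__(self, v):
        self.v = v

    def conj(self):
        if isinstance(self.v, complex):
            return Scalar(self.v.conjugate())
        # Real/integral case: conj is the identity, so "return *this".
        return self

print(Scalar(3 + 4j).conj().v)  # -> (3-4j)
```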

Contributor

@ezyang ezyang left a comment


At some point when you're not in a rush, it would be good to use @robieta's tools for instruction counts to assess the non-complex overhead of the new complex idioms and conjugations. It will be a little tricky to set up since you have to go through autograd, but we might be able to set up a microbenchmark that calls the operation directly. That would help us understand how urgent further optimization here is (I know @ngimel is nervous about the conj additions here).

Collaborator

@mruberry mruberry left a comment


Test changes look good. Test failure doesn't appear to be related.

Comment thread: aten/src/ATen/test/scalar_test.cpp (Outdated)
Comment thread: tools/autograd/gen_variable_type.py (Outdated)
- name: sub.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor
  self: grad
  other: -grad * alpha
  self: handle_r_to_c(self.scalar_type(), grad)
Contributor


Why do we need this? It seems pretty similar to

grad = grad.to(c10::typeMetaToScalarType(metadata.options().dtype()));

Is it because `to` doesn't work C -> R?

Contributor Author


`.to` works for C -> R. Currently, handle_r_to_c returns a (real) view of the final gradient value when the gradient is complex and the input is not.
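The behavior described above can be sketched at the value level in plain Python. This is a simplification: the real handle_r_to_c operates on tensors and returns a view rather than a scalar, and the signature here only loosely mirrors the C++.

```python
def handle_r_to_c(input_is_complex, grad):
    """Value-level sketch: if the input was real but the incoming
    gradient is complex (the R -> C case), only the real part can
    flow back; a real input has no imaginary degree of freedom."""
    if not input_is_complex and isinstance(grad, complex):
        return grad.real
    return grad

handle_r_to_c(False, 1 + 2j)  # -> 1.0 (real part only)
handle_r_to_c(True, 1 + 2j)   # unchanged
```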

@anjali411
Contributor Author

anjali411 commented Oct 22, 2020

At some point when you're not in a rush, it would be good to use @robieta's tools for instruction counts to assess the non-complex overhead of the new complex idioms and conjugations. It will be a little tricky to set up since you have to go through autograd, but we might be able to set up a microbenchmark that calls the operation directly. That would help us understand how urgent further optimization here is (I know @ngimel is nervous about the conj additions here).

I think that's a great idea! Is there a doc to help with the setup?

anjali411 added a commit that referenced this pull request Oct 22, 2020
@ezyang
Contributor

ezyang commented Oct 23, 2020

Hopefully Taylor's talk today will help. He's also been working on written docs.

anjali411 added a commit that referenced this pull request Oct 26, 2020
anjali411 added a commit that referenced this pull request Oct 27, 2020
anjali411 added a commit that referenced this pull request Oct 28, 2020
anjali411 added a commit that referenced this pull request Nov 2, 2020
@facebook-github-bot
Contributor

@anjali411 merged this pull request in cedeee2.

@facebook-github-bot facebook-github-bot deleted the gh/anjali411/66/head branch November 6, 2020 15:19
('add', (), ((S, S, S),), 'scalar_broadcast_lhs', (True,)),
('add', (S, S, S), (3.14,), 'constant', (True,)),
('add', (), (3.14,), 'scalar_constant', (True,)),
('add', (S, S, S), (3.14j,), 'complex_scalar_constant', (True,)),
Collaborator


Does this actually call into the scalar overload? I think the Python binding converts these to Tensors every time.

You can double check this by doing:

import torch

a = torch.rand(10, requires_grad=True)
print((a + a).grad_fn)
print((a + 1.).grad_fn)

That will print

<AddBackward0 object at 0x7fc1c1446150>
<AddBackward0 object at 0x7fc1c2f14f50>

The important part is the 0 here: it means that both calls are actually using the 0th overload (the one for Tensor inputs).

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
…ch#46596)

Summary:
Pull Request resolved: pytorch#46596

1. Added `conj` method for scalar similar to numpy.
2. Updates backward formulas for add and sub to work correctly for R -> C cases and for the case when alpha is complex.
3. Enabled complex backward for nonzero (no formula update needed).

Test Plan: Imported from OSS

Reviewed By: glaringlee

Differential Revision: D24529227

Pulled By: anjali411

fbshipit-source-id: da871309a6decf5a4ab5c561d5ab35fc66b5273d

Labels

cla signed · complex_autograd · Merged · module: complex (Related to complex number support in PyTorch)


6 participants