
[BC-breaking] C++ API parity: at::Tensor::grad#26150

Closed
pbelevich wants to merge 7 commits into gh/pbelevich/2/base from gh/pbelevich/2/head

Conversation

@pbelevich
Contributor

@pbelevich pbelevich commented Sep 13, 2019

Stack from ghstack:

Pull Request resolved: #26150

Differential Revision: D17427579

@pytorchbot pytorchbot added the module: cpp Related to C++ API label Sep 13, 2019
pbelevich added a commit that referenced this pull request Sep 13, 2019
ghstack-source-id: 23a56aa
Pull Request resolved: #26150
pbelevich added a commit that referenced this pull request Sep 13, 2019
ghstack-source-id: 8f7dcb5
Pull Request resolved: #26150
@yf225
Contributor

yf225 commented Sep 13, 2019

@pbelevich Thanks for the tests! Out of curiosity, did you resolve why there is a difference in behavior between Python and C++ for the following code?

# Python

In [1]: import torch

In [2]: x = torch.ones(1, dtype=float, requires_grad=True)

In [3]: x.backward()

In [4]: x.grad
Out[4]: tensor([1.], dtype=torch.float64)
// C++
auto tensor = torch::ones(1, at::TensorOptions().dtype(at::kFloat).requires_grad(true));
tensor.backward();
std::cout << tensor.grad() << std::endl;
// [ Tensor (undefined) ]

From our earlier discussion, the Python tensor.backward() is defined in torch/tensor.py and has a different implementation than Variable::backward(); it would be ideal to let the C++ implementation also call torch::autograd::backward to have feature parity with Python. Do you see any roadblocks to making that change?
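For reference, a minimal Python reproduction of the expected behavior (a sketch, assuming a current PyTorch build; the float64 dtype mirrors the Python snippet above, which differs from the at::kFloat used in the C++ snippet):

```python
import torch

# backward() on a 1-element tensor implicitly creates a grad_output of
# ones, so x.grad is populated; this is the Python behavior that the
# C++ Tensor::backward() is expected to match after this PR.
x = torch.ones(1, dtype=torch.float64, requires_grad=True)
x.backward()
print(x.grad)  # tensor([1.], dtype=torch.float64)
```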

pbelevich added a commit that referenced this pull request Sep 15, 2019
@pbelevich pbelevich requested a review from apaszke as a code owner September 15, 2019 01:20
@pytorchbot pytorchbot added the module: autograd Related to torch.autograd, and the autograd engine in general label Sep 15, 2019
pbelevich added a commit that referenced this pull request Sep 15, 2019
ghstack-source-id: 41b67ba
Pull Request resolved: #26150
@pbelevich pbelevich removed the request for review from apaszke September 15, 2019 01:30
This was referenced Sep 15, 2019
pbelevich added a commit that referenced this pull request Sep 15, 2019
pbelevich added a commit that referenced this pull request Sep 16, 2019
pbelevich added a commit that referenced this pull request Sep 16, 2019
pbelevich added a commit that referenced this pull request Sep 16, 2019
@yf225 yf225 added the module: bc-breaking Related to a BC-breaking change label Sep 17, 2019
@yf225 yf225 changed the title C++ API parity: at::Tensor::grad [BC-breaking] C++ API parity: at::Tensor::grad Sep 17, 2019
Contributor

@yf225 yf225 left a comment


Thanks a lot for fixing Tensor::backward()! I believe we will need a BC-breaking note in the PR description to explain what previously working C++ API use cases would break after this PR.

Contributor

@yf225 yf225 left a comment


We can land this PR after the BC-breaking note is added.

@suo
Member

suo commented Sep 18, 2019

This appears to have broken the XLA build cc @ailzhang

@ailzhang
Contributor

Thanks @suo! pytorch/xla#1065 will be merged soon to fix it! :D

@facebook-github-bot
Contributor

@pbelevich merged this pull request in 98ccae0.

pbelevich added a commit to pbelevich/pytorch that referenced this pull request Sep 20, 2019
ghstack-source-id: 0006728
Pull Request resolved: pytorch#26150
@gchanan
Contributor

gchanan commented Sep 24, 2019

Where is the actual note saying what is BC-breaking?

@gchanan
Contributor

gchanan commented Oct 3, 2019

OK, it looks like the BC issue is that, for non-1-element tensors, backward() no longer creates the grad_output to pass through the backward pass.
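In Python terms, the behavior the C++ API converges on can be sketched as follows (a sketch, assuming a current PyTorch build; the tensor shapes are illustrative):

```python
import torch

# For a multi-element tensor, backward() cannot implicitly create a
# grad_output; an explicit gradient argument is required.
x = torch.ones(2, requires_grad=True)
try:
    x.backward()  # grad can be implicitly created only for scalar outputs
except RuntimeError as err:
    print("explicit gradient required:", err)

# Passing the grad_output explicitly works.
x.backward(gradient=torch.ones(2))
print(x.grad)  # tensor([1., 1.])
```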

@facebook-github-bot facebook-github-bot deleted the gh/pbelevich/2/head branch October 28, 2019 22:17
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary: Pull Request resolved: pytorch#26150

Test Plan: Imported from OSS

Differential Revision: D17427579

Pulled By: pbelevich

fbshipit-source-id: 68d012076aa86dee9f23fad71a2d265d75f56d22
