
test, fix deepcopy of tensor with grad#50663

Closed
mattip wants to merge 5 commits intopytorch:masterfrom
mattip:deepcopy

Conversation

@mattip
Contributor

@mattip mattip commented Jan 17, 2021

Fixes #3307

Previously, `self.grad` was not deepcopied to the returned tensor in `deepcopy`. Added a test and an implementation.
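The fix can be sketched with a stand-in class rather than the actual `torch.Tensor` implementation (`FakeTensor` below is hypothetical, purely for illustration); the point is that `__deepcopy__` must also recurse into the `grad` attribute, passing along the `memo` dict so shared objects stay shared:

```python
import copy

class FakeTensor:
    """Stand-in for torch.Tensor, to illustrate the shape of the fix."""
    def __init__(self, data, grad=None):
        self.data = list(data)
        self.grad = grad  # another FakeTensor, or None

    def __deepcopy__(self, memo):
        new = FakeTensor(copy.deepcopy(self.data, memo))
        # Register the copy in memo before recursing, so cyclic
        # references resolve to the same copy.
        memo[id(self)] = new
        # The fix: deepcopy self.grad into the new tensor as well.
        # Before the fix, the copy's grad was simply left as None.
        if self.grad is not None:
            new.grad = copy.deepcopy(self.grad, memo)
        return new

t = FakeTensor([1.0, 2.0], grad=FakeTensor([0.1, 0.2]))
c = copy.deepcopy(t)
assert c.grad is not None          # grad travels with the copy...
assert c.grad is not t.grad        # ...but is an independent object
assert c.grad.data == [0.1, 0.2]
```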

@ngimel ngimel requested a review from albanD January 17, 2021 23:56
@ngimel ngimel added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Jan 17, 2021
Collaborator

@albanD albanD left a comment

Thanks for the PR!

I added comments inline below.

Comment thread torch/tensor.py Outdated
Comment thread test/test_torch.py Outdated
Comment thread test/test_torch.py Outdated
@mattip
Contributor Author

mattip commented Jan 21, 2021

@albanD updated

Collaborator

@albanD albanD left a comment

Thanks for the update. Just one small addition to the test and this is good to land!

Comment thread test/test_torch.py Outdated
@mattip
Contributor Author

mattip commented Jan 21, 2021

Not sure what is up with CI, but I don't think this PR is causing the failures.

@albanD albanD added the module: bc-breaking Related to a BC-breaking change label Jan 21, 2021
@albanD
Collaborator

albanD commented Jan 21, 2021

Thanks for the update!

I think the issue is that this test here now copies the grad of the input which is non-zero. So we should zero it out before calling the backward just like we do for input.
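The underlying point is that autograd accumulates into `.grad` rather than overwriting it, so a test input that starts with a non-zero `.grad` (now preserved by the deepcopy fix) sees stale values added into the result. A minimal stdlib sketch of the accumulate-then-zero behavior (`Param` and its methods are hypothetical stand-ins, not the real test or torch API):

```python
class Param:
    """Stand-in mimicking how autograd accumulates gradients."""
    def __init__(self):
        self.grad = None

    def zero_grad(self):
        self.grad = 0.0

    def accumulate(self, g):
        # backward() adds into .grad instead of replacing it
        self.grad = g if self.grad is None else self.grad + g

p = Param()
p.accumulate(1.5)      # stale grad left over from an earlier backward
p.zero_grad()          # zero it out before calling backward again...
p.accumulate(2.0)
assert p.grad == 2.0   # ...so only the fresh gradient remains
```

Without the `zero_grad()` call, the final value would be 3.5, which is the kind of spurious test failure described above.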

@mattip
Contributor Author

mattip commented Jan 24, 2021

So we should zero it out before calling the backward just like we do for input.

Like in 9ec84d0 ?

Collaborator

@albanD albanD left a comment

Like in 9ec84d0 ?

Yes that seems to have fixed the test.

Some tests now fail because there are merge conflicts with master; you can rebase on top of master and re-push to resolve these.
Also, the check for sharing seems to fail. See my comment below on why I think that is expected.

Comment thread torch/tensor.py Outdated
@codecov

codecov Bot commented Jan 26, 2021

Codecov Report

Merging #50663 (8a31dd6) into master (186c3da) will decrease coverage by 0.00%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master   #50663      +/-   ##
==========================================
- Coverage   80.91%   80.90%   -0.01%     
==========================================
  Files        1924     1924              
  Lines      210008   210013       +5     
==========================================
- Hits       169923   169919       -4     
- Misses      40085    40094       +9     

Collaborator

@albanD albanD left a comment

All green! Thanks!

Contributor

@facebook-github-bot facebook-github-bot left a comment

@albanD has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@albanD merged this pull request in 345844d.

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Fixes pytorch#3307

Previously, `self.grad` was not ~cloned~ deepcopied to the returned tensor in `deepcopy`. Added a test and an implementation.

Pull Request resolved: pytorch#50663

Reviewed By: heitorschueroff

Differential Revision: D26074811

Pulled By: albanD

fbshipit-source-id: 536dad36415f1d03714b4ce57453f406ad802b8c

Labels

cla signed Merged module: bc-breaking Related to a BC-breaking change open source triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Projects

None yet

Development

Successfully merging this pull request may close these issues.

copy.deepcopy does not copy gradient buffers of torch.autograd.Variable instance

5 participants