test, fix deepcopy of tensor with grad #50663
Conversation
albanD left a comment:
Thanks for the PR!
I added comments inline below.
@albanD updated
albanD left a comment:
Thanks for the update. Just one small addition to the test and this is good to land!
Not sure what is up with CI, but I don't think this PR is causing the failures.
Thanks for the update! I think the issue is that this test now copies the grad of the input, which is non-zero. So we should zero it out before calling the backward, just like we do for
Like in 9ec84d0?
albanD left a comment:
> Like in 9ec84d0?

Yes, that seems to have fixed the test.
Some tests now fail because there are merge conflicts with master; you can rebase on top of master and re-push to resolve these.
Also, the check for sharing seems to fail. See my comment below on why I think that is expected.
Codecov Report
@@ Coverage Diff @@
## master #50663 +/- ##
==========================================
- Coverage 80.91% 80.90% -0.01%
==========================================
Files 1924 1924
Lines 210008 210013 +5
==========================================
- Hits 169923 169919 -4
- Misses 40085 40094 +9
facebook-github-bot left a comment:
@albanD has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Fixes pytorch#3307 Previously, `self.grad` was not ~cloned~ deepcopied to the returned tensor in `deepcopy`. Added a test and an implementation. Pull Request resolved: pytorch#50663 Reviewed By: heitorschueroff Differential Revision: D26074811 Pulled By: albanD fbshipit-source-id: 536dad36415f1d03714b4ce57453f406ad802b8c
Fixes #3307
Previously, `self.grad` was not ~~cloned~~ deepcopied to the returned tensor in `deepcopy`. Added a test and an implementation.
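The behavior this PR fixes can be sketched with plain Python. Note this is a minimal stdlib stand-in (a hypothetical `Tensor` class, not PyTorch's actual `__deepcopy__` implementation): the point is that `__deepcopy__` must also deepcopy the `.grad` attribute rather than drop it or share it with the original.

```python
import copy

class Tensor:
    """Hypothetical stand-in for a tensor that carries a gradient."""

    def __init__(self, data, grad=None):
        self.data = data
        self.grad = grad  # another Tensor, or None

    def __deepcopy__(self, memo):
        new = Tensor(copy.deepcopy(self.data, memo))
        # The fix: carry the gradient over as a deep copy as well,
        # instead of leaving new.grad as None.
        if self.grad is not None:
            new.grad = copy.deepcopy(self.grad, memo)
        return new

t = Tensor([1.0, 2.0], grad=Tensor([0.1, 0.2]))
c = copy.deepcopy(t)

assert c.grad is not None            # grad survives the deepcopy
assert c.grad is not t.grad          # grad is copied, not shared
assert c.grad.data == t.grad.data    # values are preserved
```

Passing `memo` through the nested `copy.deepcopy` calls is what keeps shared sub-objects consistent across the copied structure.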