Fix TestAutograd.test_pinverse not actually testing #9192
ssnl wants to merge 3 commits into pytorch:master
Conversation
Oops, I am so sorry about this.
vishwakftw left a comment:
ONNX changes have come with this PR.
@vishwakftw nah, it's mostly my fault. I've worked with gradcheck for 8 months and I still forget about this.
Thanks for the reminder! Fixed onnx.
facebook-github-bot left a comment:
@ssnl has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
cc @fritzo @fehiepsi @apaszke. In this patch, I added an error if none of the input args is a tensor that requires grad.
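The guard described above can be sketched roughly as follows. This is a minimal sketch, not the patch itself: `check_inputs_require_grad` is a hypothetical helper name (the real check lives inside `torch.autograd.gradcheck`), but it shows why the error matters — without it, gradcheck on inputs that don't require grad has nothing to differentiate and can pass vacuously, which is exactly how `test_pinverse` was not actually testing anything.

```python
import torch

def check_inputs_require_grad(inputs):
    # Hypothetical helper mirroring the check this patch adds to gradcheck:
    # if no input tensor requires grad, there is no analytical Jacobian to
    # verify, so fail fast instead of passing vacuously.
    if not any(isinstance(t, torch.Tensor) and t.requires_grad for t in inputs):
        raise ValueError(
            "gradcheck expects at least one input tensor to require grad"
        )

x = torch.randn(3, dtype=torch.float64)  # requires_grad defaults to False
try:
    check_inputs_require_grad((x,))
except ValueError as e:
    print("caught:", e)

check_inputs_require_grad((x.requires_grad_(),))  # fine: no exception
```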
@pytorchbot retest this please
fritzo left a comment:
distributions changes LGTM
cc @t-vi
facebook-github-bot left a comment:
@ssnl is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@ssnl Ha, thanks for catching this! I think this is as simple as adding `double()`: it passes all three cases that were failing before.
@t-vi I remember that they fail with a 2x Jacobian difference, but I'll have to try again.
The following gets me True 3x, while without the double I get False 3x:
I just tried again. I believe that the random inputs are already in double precision (the numpy allclose check fails otherwise). gradcheck still fails with a 2x difference for me:

# i,i->
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[ 0.6672],
        [ 2.9573],
        [-1.9153],
        [ 1.4615],
        [-1.3014]], dtype=torch.float64)
analytical:tensor([[ 0.3336],
        [ 1.4787],
        [-0.9577],
        [ 0.7307],
        [-0.6507]], dtype=torch.float64)
Oh okay, I know why. It's not a bug: these tests use multiples of the same tensor, so a numerical perturbation changes not one, but two tensors at the same time!
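The exact 2x factor can be reproduced with a hand-rolled central difference, outside of gradcheck. This is an illustrative sketch, not the failing test: here a toy `f(a, b) = (a * b).sum()` stands in for the `i,i->` einsum, and the aliased call `f(xp, xp)` models passing the same tensor as both operands, so the perturbation hits both arguments at once.

```python
import torch

def f(a, b):
    return (a * b).sum()  # stand-in for the i,i-> einsum under test

x = torch.randn(5, dtype=torch.float64)

# Analytical gradient of f w.r.t. entry 0 of its *first* argument alone is b[0],
# i.e. x[0] here.
ana = x[0]

# Central difference on entry 0, but with the same tensor passed for both
# arguments: the perturbation changes both operands simultaneously.
eps = 1e-6
xp, xm = x.clone(), x.clone()
xp[0] += eps
xm[0] -= eps
num = (f(xp, xp) - f(xm, xm)) / (2 * eps)

print(float(num), 2 * float(ana))  # num is ~2x the per-argument gradient
```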
I'll fix that.
Ha. Does the detach() solve that, too?
No.. even if

Ah, but the `double()` created a separate instance for me...
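That matches the aliasing semantics of PyTorch tensors, sketched below as assertions: `detach()` returns a new Tensor object that shares the same storage (so an in-place numerical perturbation still hits both aliases), while `double()` copies only when a dtype conversion actually happens — which would explain why it de-aliased float32 inputs but does nothing for inputs that are already double.

```python
import torch

# detach() shares storage: new Tensor object, same memory, so an in-place
# perturbation of one still changes the "other" input.
x = torch.randn(5, dtype=torch.float64)
d = x.detach()
assert d is not x and d.data_ptr() == x.data_ptr()

# double() copies only if a dtype conversion happens:
x32 = torch.randn(5)                              # float32
assert x32.double().data_ptr() != x32.data_ptr()  # real conversion: new storage

x64 = torch.randn(5, dtype=torch.float64)
assert x64.double() is x64                        # already double: returned as-is

# Cloning reliably de-aliases regardless of dtype:
y = x.detach().clone().requires_grad_()
assert y.data_ptr() != x.data_ptr()
print("ok")
```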
Summary:
cc @vishwakftw
Also added a check if none of the input tensors in `gradcheck` have `requires_grad=True`.
Closes pytorch#9192
Differential Revision: D8739401
Pulled By: SsnL
fbshipit-source-id: 81bb3aa0b5c04eb209b137a4bd978e040e76cbcd