Implement torch.pinverse : Pseudo-inverse #9052
vishwakftw wants to merge 14 commits into pytorch:master
Conversation
1. Used SVD to compute.
2. Tests in test_cuda and test_torch
3. Doc strings in _torch_docs.py and _tensor_docs.py

Closes pytorch#6187
    }

    Tensor pinverse(const Tensor& self) {
      if (!at::isFloatingType(self.type().scalarType()) ||
    }
    Tensor U, S, V;
    std::tie(U, S, V) = self.svd();
    Tensor S_pseudoinv = at::where(S != 0.0, S.reciprocal(), at::zeros({}, self.type()));
1. Use AT_CHECK
2. Use at::zeros({}, self.options());
Not sure why the CUDA tests are failing. @ssnl
| "of floating types"); | ||
| Tensor U, S, V; | ||
| std::tie(U, S, V) = self.svd(); | ||
| Tensor S_pseudoinv = at::where(S != 0.0, S.reciprocal(), at::zeros({}, self.options())); |
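The ATen snippet above computes the pseudo-inverse via the SVD identity A⁺ = V · diag(S⁺) · Uᵀ, where S⁺ takes the reciprocal of every nonzero singular value and leaves zeros in place. A minimal NumPy sketch of that logic (the function name `pinverse` and the NumPy translation are illustrative, not the PR's actual kernel):

```python
import numpy as np

def pinverse(a):
    # Sketch of the SVD-based pseudo-inverse used in this PR:
    # invert only the nonzero singular values, keep zeros as zeros
    # (the where=... mask avoids a divide-by-zero, mirroring at::where).
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    s_pinv = np.divide(1.0, s, out=np.zeros_like(s), where=(s != 0.0))
    return vt.T @ np.diag(s_pinv) @ u.T
```

For a full-rank rectangular matrix this agrees with `np.linalg.pinv`, which applies an additional `rcond` cutoff for near-zero singular values instead of the exact `s != 0` comparison used here.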
A few comments: @ssnl

Very valid reasons. Thanks for your explanation!

Let me know what you think!

I think,
@ssnl Sounds good to me. I have added those tests. However, the build is still failing. Could it be a precision problem with GPUs?

It could be. Let's wait until CI finishes.
test/test_autograd.py (outdated)

    s = torch.arange(1., l + 1).mul_(1.0 / (l + 1))
    return u.mm(torch.diag(s)).mm(v.t())

    def random_matrix_large_singular_value(m, n):
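The test helper above builds a matrix with prescribed, well-separated singular values k/(l+1), so that the SVD (and hence the pinverse gradient) is well-defined for gradcheck. A NumPy sketch of that construction (the helper name and the QR-based sampling of orthogonal factors are assumptions for illustration):

```python
import numpy as np

def random_fullrank_matrix_distinct_singular_values(l, seed=0):
    # Draw two random orthogonal matrices via QR, then compose them
    # with a fixed diagonal of distinct singular values k/(l+1),
    # k = 1..l, so no two singular values coincide or vanish.
    rng = np.random.default_rng(seed)
    u, _ = np.linalg.qr(rng.standard_normal((l, l)))
    v, _ = np.linalg.qr(rng.standard_normal((l, l)))
    s = np.arange(1., l + 1) / (l + 1)
    return u @ np.diag(s) @ v.T
```

Distinct singular values matter because the SVD derivative has terms proportional to 1/(sᵢ² − sⱼ²), which blow up when singular values collide.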
How big is the difference in the CUDA failure?

About 5.5 to 6.

The macOS build failure seems unrelated.

@pytorchbot retest this please
    return M.pinverse()

    gradcheck(func, [torch.rand(m) + 1])
    gradcheck(func, [torch.rand(m) + 10])
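The `torch.rand(m) + 1` and `+ 10` shifts in the gradcheck inputs above keep singular values bounded away from zero. A small NumPy demo of why that matters (the helper `pinv_spectral_norm` is hypothetical, for illustration only): the pseudo-inverse blows up as the smallest singular value approaches zero, which would make finite-difference gradient checks numerically unstable.

```python
import numpy as np

def pinv_spectral_norm(s_min):
    # Spectral norm of the pseudo-inverse of diag(1, s_min):
    # equals 1 / s_min when s_min is nonzero (and above pinv's cutoff),
    # showing how near-zero singular values amplify the pinverse.
    a = np.diag([1.0, s_min])
    return np.linalg.norm(np.linalg.pinv(a), 2)
```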
facebook-github-bot left a comment:
@ssnl has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@pytorchbot retest this please

@pytorchbot retest this please

@pytorchbot retest this please
    run_test((10,), 0)

    def test_pinverse(self):
        m, n = 5, 10
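For a rectangular matrix (here 5 × 10), the natural correctness check is the four Moore–Penrose conditions, which uniquely characterize the pseudo-inverse. A NumPy sketch of such a check (the helper name is an assumption; it is not the PR's actual test, which uses torch tensors):

```python
import numpy as np

def check_moore_penrose(a, atol=1e-8):
    # Verify the four Moore-Penrose conditions for a (possibly
    # rectangular) matrix a with pseudo-inverse p:
    #   A P A = A,  P A P = P,  (A P)^T = A P,  (P A)^T = P A
    p = np.linalg.pinv(a)
    return (
        np.allclose(a @ p @ a, a, atol=atol)
        and np.allclose(p @ a @ p, p, atol=atol)
        and np.allclose((a @ p).T, a @ p, atol=atol)
        and np.allclose((p @ a).T, p @ a, atol=atol)
    )
```

These conditions hold for any matrix, square or not, full-rank or not, which makes them a robust oracle for both the m < n and m > n cases.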
facebook-github-bot left a comment:
@ssnl is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@vishwakftw The PR is great! Thanks for doing this :). There are just some internal problems that I'm trying to fix before I can merge this.
Summary:
1. Used SVD to compute.
2. Tests in test_autograd, test_cuda and test_torch
3. Doc strings in _torch_docs.py and _tensor_docs.py

Closes #6187
Closes pytorch/pytorch#9052

Reviewed By: soumith
Differential Revision: D8714628
Pulled By: SsnL
fbshipit-source-id: 7e006c9d138b9f49e703bd0ffdabe6253be78dd9
Summary: Fixes pytorch#9079. There is room for speed-up for both functions (see pytorch#9083), but let's get this in to unblock pytorch#9052. Closes pytorch#9082. Reviewed By: ezyang. Differential Revision: D8711687. Pulled By: SsnL. fbshipit-source-id: f043a9bf55cb6aec5126c3331d35761f7aa3f8e3
It could be useful to adapt torch.pinverse to work with batches of tensors, as was done for torch.inverse in v1.0.0.

Hi @mscipio, batching for pinverse is blocked by the unavailability of a batched SVD, which is used internally to compute the pinverse.
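Until a batched SVD is available, one common workaround is to loop over the batch dimension in Python and stack the per-matrix results. A NumPy sketch (the helper name `batched_pinverse` is an assumption; this is a slow fallback, not a vectorized kernel):

```python
import numpy as np

def batched_pinverse(batch):
    # Fallback while batched SVD is unavailable: compute each
    # matrix's pseudo-inverse independently and stack along the
    # leading batch dimension. O(batch) sequential SVD calls.
    return np.stack([np.linalg.pinv(m) for m in batch])
```

Note that the pseudo-inverse of an (m, n) matrix is (n, m), so a batch of shape (b, m, n) maps to (b, n, m).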
Closes #6187