Added OpInfo-based testing of some linalg functions #51107
IvanYashchuk wants to merge 33 commits into pytorch:master from
Conversation
💊 CI failures summary and remediations
As of commit 558d744 (more details on the Dr. CI page):
🚧 2 fixed upstream failures: These were probably caused by upstream breakages that were already fixed. Please rebase on the
Tests for QR decomposition are quite slow.

The failing tests should be fixed after merging #51109.
Codecov Report

```
@@            Coverage Diff             @@
##           master   #51107      +/-   ##
==========================================
+ Coverage   77.29%   77.30%   +0.01%
==========================================
  Files        1888     1888
  Lines      183512   183613     +101
==========================================
+ Hits       141852   141951      +99
- Misses      41660    41662       +2
```
```python
op=torch.solve,
dtypes=floating_and_complex_types(),
test_inplace_grad=False,
# TODO: TypeError: empty_like(): argument 'input' (position 1) must be Tensor, not torch.return_types.solve
```
I appreciate this TODO but this is more a failure of our out= testing (which we should be updating soonish) and not an issue with torch.solve().
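The TypeError in the TODO can be reproduced in spirit without PyTorch: generic out=/inplace test helpers pass an op's whole return value to `empty_like()`, but ops like `torch.solve` return a named tuple (`torch.return_types.solve`), not a Tensor. A minimal pure-Python sketch of the mismatch (the `empty_like` and `Solve` here are stand-ins for illustration, not the real PyTorch objects):

```python
from collections import namedtuple

# Stand-in for torch.return_types.solve: a named tuple, NOT a Tensor.
Solve = namedtuple("solve", ["solution", "LU"])

def empty_like(x):
    # Stand-in for torch.empty_like: accepts only "tensor-like" inputs
    # (modelled here as plain lists) and rejects everything else.
    if not isinstance(x, list):
        raise TypeError(
            f"empty_like(): argument 'input' (position 1) must be Tensor, "
            f"not {type(x).__name__}"
        )
    return [0.0] * len(x)

result = Solve(solution=[1.0, 2.0], LU=[[1.0, 0.0], [0.5, 1.0]])

# Generic test code that passes the whole return value fails...
try:
    empty_like(result)
    failed_on_namedtuple = False
except TypeError:
    failed_on_namedtuple = True

# ...while extracting the tensor field works fine.
fresh = empty_like(result.solution)
```

This is why the failure points at the generic out= testing machinery rather than at `torch.solve()` itself: the test template, not the op, assumes a single-Tensor return.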
mruberry left a comment
Awesome! Nice work, @IvanYashchuk. I really appreciate how thorough and consistent you were with documentation. It makes the code much more readable and easier to maintain.
This needs a rebase. Just ping me when it's ready to merge.
@mruberry I updated and rebased this PR. It's ready to merge.
The ROCm test failures on this PR are interesting and linalg-related. Would you take a look at them before we merge this?
It was my fault for not being careful enough when resolving the merge conflict; I accidentally removed
ROCm fails with `RuntimeError: magma: The value of work_size(-9223372036854775808) is too large to fit into a magma_int_t (4 bytes)`
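For context, the reported value is exactly INT64_MIN, which suggests MAGMA's 64-bit work-size query returned a garbage or overflowed value that then cannot be narrowed to the 4-byte `magma_int_t`. A quick range check in plain Python (just illustrating the type mismatch, not the MAGMA internals):

```python
# The value from the ROCm error message.
work_size = -9223372036854775808

# It is exactly INT64_MIN, i.e. the minimum of a signed 64-bit integer.
is_int64_min = work_size == -2**63

# magma_int_t is a 4-byte signed integer; check whether the value fits.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1
fits_in_magma_int_t = INT32_MIN <= work_size <= INT32_MAX
```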
@mruberry I updated this pull request with the recent changes for "out" testing. I also added all relevant ROCm skips, which will be fixed sometime later. Could you take a look once again and hopefully merge this?
facebook-github-bot left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
I'll try to get this landed ASAP. I was concerned about the ROCm failure because it's a timeout, but that test build doesn't appear to be running these tests. The overall test time seems similar to current CI timings.
Summary: Added OpInfo-based testing of the following linear algebra functions:

* cholesky, linalg.cholesky
* linalg.eigh
* inverse, linalg.inv
* qr, linalg.qr
* solve

The output of `torch.linalg.pinv` for empty inputs was not differentiable; this is now fixed. In some cases, batched grad checks are disabled because they do not work well with 0x0 matrices (see pytorch#50743 (comment)).

Ref. pytorch#50006

Pull Request resolved: pytorch#51107
Reviewed By: albanD
Differential Revision: D27006115
Pulled By: mruberry
fbshipit-source-id: 3c1d00e3d506948da25d612fb114e6d4a478c5b1