Add torch.eig complex forward (CPU, CUDA)#49168
antocuni wants to merge 14 commits into pytorch:master from
Conversation
💊 CI failures summary and remediations. As of commit e1c1ebe (more details on the Dr. CI page):
🕵️ 1 new failure recognized by patterns. The following CI failures do not appear to be due to upstream breakages:
… wi separately, we pass only w and compute the two sub-arrays later, and add a (so far unused) rwork argument
…around until the code compiles again. Add empty implementations for lapackEig on complex types
…to cgeev_; test_eig_basic still fails though
…the one used by lapackEig
…better name for real_dtype to make it clearer
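The first commit message above describes passing a single `w` buffer to LAPACK and deriving the `wr`/`wi` sub-arrays from it later. A minimal NumPy sketch of that idea (the buffer layout here is an assumption for illustration, not the actual PyTorch code):

```python
import numpy as np

# Instead of allocating separate wr / wi arrays for LAPACK's ?geev on real
# inputs, allocate one buffer w of length 2*n and derive the two sub-arrays
# as views into its halves (assumed layout, for illustration only).
n = 4
w = np.empty(2 * n)
wr, wi = w[:n], w[n:]   # the two sub-arrays computed later from w
wr[:] = 1.0             # writes through the views land in w
wi[:] = 0.0
assert wr.base is w and wi.base is w
```

Because `wr` and `wi` are views, no copy is needed when the caller wants the packed buffer back.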
It seems that all the failing tests also fail upstream, so I think that this PR is ready for review.
Test failures are unrelated. + @anjali411 to review, too. This looks correct to me.
Is there any progress on this? Thinking about rewriting the …
Thanks for the ping, @nikitaved, looks like this got lost over the holidays. I'll take a note to schedule it.
// the API is slightly different for the complex vs real case: if the input
// is complex, eigenvals will be a vector of complex. If the input is real,
// eigenvals will be a (n, 2) matrix containing the real and imaginary parts
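To make the two return conventions described in this comment concrete, here is a small NumPy sketch (`pairs_to_complex` is a hypothetical helper for illustration, not part of PyTorch):

```python
import numpy as np

def pairs_to_complex(w_pairs):
    """Convert an (n, 2) array of (real, imag) rows -- the layout the real
    path uses for eigenvalues -- into the 1-D complex vector that the
    complex path (and numpy.linalg.eig) returns."""
    return w_pairs[:, 0] + 1j * w_pairs[:, 1]

x = np.array([[1.0, 2.0], [-2.0, 1.0]])
w = np.linalg.eig(x)[0]                       # complex vector: 1+2j, 1-2j
w_pairs = np.stack([w.real, w.imag], axis=1)  # the (n, 2) real layout
assert np.allclose(pairs_to_complex(w_pairs), w)
```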
As discussed in #43081, I think we should always return a complex tensor. This will be a bc breaking change, so we should only update this behavior for torch.linalg.eig
I thought that the point of torch.linalg.* was to be as compatible with NumPy as possible. This would be an unnecessary breakage for people porting their code from NumPy to PyTorch.
What about adding a flag such as returns_complex=False (or =True if we want the correct-but-numpy-incompatible behavior by default) to let the user choose?
Hey @antocuni, looks like NumPy is already doing it:
In [18]: x = numpy.array([[1, 2], [-2, 1]], numpy.double)
In [19]: x
Out[19]:
array([[ 1., 2.],
[-2., 1.]])
In [20]: numpy.linalg.eig(x)
Out[20]:
(array([1.+2.j, 1.-2.j]),
array([[0. -0.70710678j, 0. +0.70710678j],
[0.70710678+0.j , 0.70710678-0.j ]]))
ouch, my fault, I overlooked it. Ok, I'll implement the "proper" behavior then, thanks for pointing it out
@anjali411 @nikitaved I just realized that torch.linalg.eig doesn't exist yet!
So I think that the current behavior of my branch is correct. I agree that torch.linalg.eig should always return complex, but I don't think there is anything we can do in this branch
I think the branch is ready for a final round of review and hopefully merging.
mruberry
left a comment
Cool! Would you just add a test that this throws a runtime error when trying to do complex backward through it, @antocuni? Then just ping me and we'll get this merged.
Adding these functions makes a lot of sense since I imagine we'll reuse them for torch.linalg.eig, but maybe we should implement torch.linalg.eig next and not bother implementing complex backward for torch.eig?
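The requested test could follow the usual assertRaisesRegex pattern. A self-contained sketch of that pattern (with a stand-in function rather than a real `torch.eig` call, since complex backward was simply not implemented at this point; the error message is assumed for illustration):

```python
import unittest

def eig_backward_stub():
    # Stand-in for calling .backward() through torch.eig on a complex
    # input, which at the time of this PR raised instead of differentiating.
    raise RuntimeError("eig does not support automatic differentiation "
                       "for outputs with complex dtype")

class TestEigComplexBackward(unittest.TestCase):
    def test_backward_raises(self):
        with self.assertRaisesRegex(RuntimeError, "complex dtype"):
            eig_backward_stub()

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestEigComplexBackward))
```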
Codecov Report

@@            Coverage Diff             @@
##           master   #49168      +/-   ##
==========================================
- Coverage   80.64%   80.64%   -0.01%
==========================================
  Files        1913     1913
  Lines      208061   208077      +16
==========================================
+ Hits       167797   167807      +10
- Misses      40264    40270       +6
@mruberry done by commit 01c168d
It's very likely that we will be able to implement … I think that @nikitaved planned to work on complex backward for …
That's probably true. Whatever @nikitaved thinks best.
facebook-github-bot
left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
I agree, the order does not matter. The backward for real inputs with complex eigenvalues is not implemented anyway, so we will just need to insert a check on the dtype or shape (i.e. whether the tensor representing the eigenvalues is real but holds real and imaginary parts).
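Such a check could key off either the dtype or the shape. A NumPy sketch of the idea (`is_legacy_eig_format` is a hypothetical helper, not the code that landed):

```python
import numpy as np

def is_legacy_eig_format(w):
    # Legacy torch.eig on real input returns an (n, 2) real array whose
    # columns are the real and imaginary parts; the complex path returns
    # a 1-D complex vector. Distinguish the two by dtype, then shape.
    if np.iscomplexobj(w):
        return False
    return w.ndim == 2 and w.shape[1] == 2

assert is_legacy_eig_format(np.array([[1.0, 2.0], [1.0, -2.0]]))
assert not is_legacy_eig_format(np.array([1 + 2j, 1 - 2j]))
```

The backward pass could then raise for the legacy layout, since differentiating through the (real, imag) pair representation is not supported.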
Unfortunately it looks like this is hitting some internal errors, like this: … cc @IvanYashchuk and @ngimel. I have to run but can take a closer look later, too.
From the filename, my guess is that CLAPACK is used somewhere and it is missing …
Good guess: …
Update: @malfet has identified the issue. Some builds were excluding c_sqrt because of issues compiling it on Windows. He's testing a fix now.
The attribution above is erroneous: @malfet fixed and merged this PR by fixing the Windows builds that use CLAPACK so they could build c_sqrt properly, then fixing the build rules for those builds so they included it again.
Summary: Related to issue pytorch#42666
Pull Request resolved: pytorch#49168
Reviewed By: mrshenli
Differential Revision: D25954027
Pulled By: mruberry
fbshipit-source-id: e429f9587efff5e638bfd0e4de864c06f41c63b1
Related to issue #42666