Port linalg_eigh and linalg_eigvalsh to structured #79072
lezcano wants to merge 5 commits into gh/Lezcano/88/base
Conversation
This follows the structure of linalg.svd. [ghstack-poisoned]
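For context on what the ported ops compute (not part of the PR itself): linalg_eigh returns the eigenvalues and eigenvectors of a Hermitian/symmetric matrix, while linalg_eigvalsh returns only the eigenvalues. A pure-Python sketch of the eigenvalue computation for the 2x2 symmetric case (illustrative only, not PyTorch code):

```python
import math

# What linalg_eigvalsh computes, done by hand for a 2x2 real symmetric
# matrix [[a, b], [b, c]] (pure-Python sketch, not PyTorch code).
def eigvalsh_2x2(a, b, c):
    # Eigenvalues of a symmetric 2x2 matrix via the quadratic formula:
    # lambda = (tr +/- sqrt(tr^2 - 4*det)) / 2, returned in ascending order
    # (matching the ascending-order convention of eigvalsh-style APIs).
    tr = a + c
    det = a * c - b * b
    disc = math.sqrt(tr * tr - 4 * det)
    return [(tr - disc) / 2, (tr + disc) / 2]

# [[2, 1], [1, 2]] has eigenvalues 1 and 3.
assert eigvalsh_2x2(2, 1, 2) == [1.0, 3.0]
```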
Dr. CI: ✅ No failures (0 pending) as of commit 735a532. This comment was automatically generated by Dr. CI; please report bugs/suggestions to the (internal) Dr. CI Users group.
# Could not run 'aten::linalg_eigh' with arguments from the 'AutogradMeta' backend.
DecorateInfo(unittest.expectedFailure, 'TestFakeTensorNonErroring', 'test_fake'),
No clue why I'm seeing this error in this test and in test_meta.py. Any idea @albanD?
There were moves to make Meta a backend bit
cc @bdhirsh do you know what this error is about?
it probably has to do with Meta being moved to the backend bit, but that error is pretty strange. Let me see if I can debug locally.
Ah, found it....
Diagnosis: we have a meta function for linalg_eigh written directly in Python that gets registered to the Meta dispatch key:
torch/_meta_registrations.py, line 116 (at bc82a5f)
I think we should leave the Python function around in case it's useful, but you can remove the @torch.library.impl(meta_lib, "linalg_eigh") line that registers it to the dispatcher (since this PR will give us meta tensor support through structured kernels instead).
Why did that break stuff? linalg_eigh is now a CompositeImplicitAutograd kernel, which basically means that we should get support for it on all backends (including meta) through its decomposition. Registering a meta kernel for it directly basically tells the dispatcher: "hey, now that I have a direct kernel for the meta backend, don't run the decomposition anymore when you're using the meta backend."
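The override behavior described here can be sketched with a toy dispatcher model (all names are hypothetical, not PyTorch's actual dispatcher internals): a directly registered backend kernel shadows the composite decomposition for that backend.

```python
# Toy model of the dispatch behavior: a direct backend kernel, once
# registered, takes precedence over the composite decomposition.
# All names here are illustrative, not real PyTorch internals.

registry = {}  # (op, backend) -> kernel

def decomposition_eigh(backend):
    # CompositeImplicitAutograd-style fallback: works for any backend
    # by decomposing into other ops.
    return f"decomposed eigh on {backend}"

def dispatch(op, backend):
    # A directly registered kernel wins; otherwise fall back to the
    # decomposition.
    if (op, backend) in registry:
        return registry[(op, backend)](backend)
    return decomposition_eigh(backend)

# Without a direct Meta kernel, the decomposition handles meta tensors:
assert dispatch("linalg_eigh", "Meta") == "decomposed eigh on Meta"

# Registering a direct Meta kernel (like the old Python meta function)
# shadows the decomposition for that backend only:
registry[("linalg_eigh", "Meta")] = lambda b: f"direct meta kernel on {b}"
assert dispatch("linalg_eigh", "Meta") == "direct meta kernel on Meta"
assert dispatch("linalg_eigh", "CPU") == "decomposed eigh on CPU"
```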
I'm also curious what you both think about the naming - we now have a private _linalg_eigh op that implements the behavior of the 4 public ops.
Consolidating the implementation into one op seems great, although backends will now need to support a private operator in order to support better linalg coverage (this is the case for a couple other linalg ops too).
I know there have been some conversations around "decomposing into private / unstable APIs" being kinda annoying for backends. What do you think about that w.r.t. the private linalg ops? Are they just more likely to change over time, so making them private is safer?
All these are very good points. I had a discussion about this with @mruberry the other day. What we thought is that, for now, the prim operations can simply be those from the public API. Going forward, we may want the prim to be a lower-level operation with something similar to a LAPACK API, and then have svd, eigh, and so on dispatch to these in Python.
In any case, following the current structure:
- Makes implementing operations for backends fairly easy. If they have the LAPACK-like operation, they can always write two helper functions that call into it in fewer than 10 LOC.
- Having these operations written as structured kernels makes prims easy to write, and will make moving to a LAPACK-style API later on rather easy.
IvanYashchuk
left a comment
Looks good!
I don't know about TestFakeTensorNonErroring and how to fix it.
albanD
left a comment
You most likely don't want the ideep submodule change in this PR?
Sorry for the submodule point. Fixed. Should we just merge this one and figure out the meta part later?
albanD
left a comment
I think it's fine to merge this for now while we wait on a resolution for the meta part.
Could you move the discussion to an issue though, and ping Brian there?
@pytorchbot merge -g
@pytorchbot successfully started a merge job. Check the current status here.
Summary: This follows the structure of linalg.svd.
Pull Request resolved: #79072
Approved by: https://github.com/IvanYashchuk, https://github.com/albanD
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/549a597c004f7e0254cd4dd8f1777c57e0f00c00
Reviewed By: malfet
Differential Revision: D37156838
fbshipit-source-id: 84607565591fb7ab97238a09dbcc40a1d39cb102
Following #79072 (comment) [ghstack-poisoned]
Summary: Following #79072 (comment)
Pull Request resolved: #79786
Approved by: https://github.com/ngimel, https://github.com/bdhirsh
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/648a6658ec7dbf17f815a7d7f4464f9e6f7cb08f
Reviewed By: malfet
Differential Revision: D37278842
fbshipit-source-id: 29197e92bdfd5f53e901367e4ca96645c6d6f9db