torch.linalg.eigh: very slow for batched inputs #174674
Closed
Labels
bot-triaged (label only to be used by the auto triage bot), module: cuda (related to torch.cuda and CUDA support in general), module: linear algebra (issues related to specialized linear algebra operations in PyTorch, including matrix multiply), topic: performance, triaged (this issue has been looked at by a team member and triaged into an appropriate module)
🐛 Describe the bug
First observed in #174601 (discovered by @alexshtf)
eigh is much slower in PyTorch than in CuPy. Comparison script (cmp.py):
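The original cmp.py is not reproduced here. A minimal sketch of such a comparison might look like the following; the batch size, matrix size, and dtype are assumptions, not values from the original script:

```python
import time

def bench(fn, iters=10, sync=lambda: None):
    """Average wall-clock seconds per call of fn(), after one warm-up run.

    sync should block until pending GPU work finishes, so that the timer
    measures actual execution rather than asynchronous kernel launches.
    """
    fn()      # warm-up (triggers lazy CUDA init / kernel compilation)
    sync()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    sync()
    return (time.perf_counter() - start) / iters

# Hypothetical usage on a batch of symmetric matrices (sizes are assumptions):
try:
    import torch
    import cupy as cp

    a = torch.randn(1024, 8, 8, device="cuda")
    a = a + a.mT                      # symmetrize each matrix in the batch
    b = cp.asarray(a.cpu().numpy())

    t_torch = bench(lambda: torch.linalg.eigh(a), sync=torch.cuda.synchronize)
    t_cupy = bench(lambda: cp.linalg.eigh(b), sync=cp.cuda.Stream.null.synchronize)
    print(f"torch.linalg.eigh: {t_torch:.6f}s  cupy.linalg.eigh: {t_cupy:.6f}s")
except Exception:
    pass  # torch / cupy / a CUDA device not available
```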
Running it shows that CuPy is much faster. It seems we should reconsider our heuristics for choosing a suitable cuSOLVER driver.
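As a possible user-side workaround while the driver-selection heuristic is revisited, PyTorch exposes `torch.backends.cuda.preferred_linalg_library` to switch the linear-algebra backend between cuSOLVER and MAGMA; whether this helps for a given batch shape is an assumption to be verified by benchmarking:

```python
# Sketch: force the linear-algebra backend and read back the current setting.
# Accepted values are "default", "cusolver", and "magma"; this only changes
# backend preference, it does not fix the underlying heuristic.
try:
    import torch
    torch.backends.cuda.preferred_linalg_library("magma")
    backend = str(torch.backends.cuda.preferred_linalg_library())
except Exception:
    backend = "torch not available"
print(backend)
```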
Versions
cc @ptrblck @msaroufim @eqy @jerryzh168 @tinglvv @nWEIdia @jianyuh @mruberry @walterddr @xwang233 @lezcano