[ATen][CPU][Sparse] Use Third-Party Eigen for sparse add and addmm #155357
Aidyn-A wants to merge 9 commits into pytorch:main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/155357
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit c77592f with merge base 01bcf9a. This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot rebase

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.

Successfully rebased 71d7617 to 8db7338.
Force-pushed 8db7338 to df77628.
Thanks for the PR, it looks good to me!
I have a number of nits and one question, see #155357 (comment)
Force-pushed df77628 to 36d5c43.
@pytorchbot rebase

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.

Successfully rebased 1afd750 to 2ea6b20.
pearu left a comment:
Here is another review iteration that removes duplicated code.
Question: there exist multiple paths for addmm depending on the layout combinations of the input and output tensors. Are all of these paths covered by existing unit tests?
Force-pushed 2ea6b20 to 2c69b71.
pearu left a comment:
This looks good to me; however, see my comment below about a suspected bug in the addmm_out_sparse_eigen logic that should be revealed by tests.
@pytorchbot revert -c ghfirst -m "This is causing buck builds to fail since we didn't add the definition of

❌ 🤖 pytorchbot command failed:

@pytorchbot revert -c ghfirst -m "This is causing buck builds to fail since we didn't add the definition of AT_USE_EIGEN_SPARSE in the buckbuild.bzl file, will follow-up and re-land this."

@pytorchbot successfully started a revert job. Check the current status here.
Revert "[ATen][CPU][Sparse] Use Third-Party Eigen for sparse add and addmm (#155357)"

This reverts commit ce048de. Reverted #155357 on behalf of https://github.com/seemethere due to: This is causing buck builds to fail since we didn't add the definition of AT_USE_EIGEN_SPARSE in the buckbuild.bzl file, will follow-up and re-land this. ([comment](#155357 (comment)))
@Aidyn-A your PR has been successfully reverted.

Signed-off-by: Eli Uriegas <eliuriegas@meta.com>

@seemethere has imported this pull request. If you are a Meta employee, you can view this in D80750986.
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

Merge failed. Reason: 1 job has failed, first few of them are: Meta Internal-Only Changes Check. Details for Dev Infra team: raised by workflow job.

@pytorchbot merge -f 'merged internally already'

Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
[ATen][CPU][Sparse] Use Third-Party Eigen for sparse add and addmm (pytorch#155357)

This pull request adds the following ops for sparse matrices using the Eigen library:

```python
add(a_csr, b_csr)
add(a_csc, b_csc)
addmm(c_csr, a_csr, b_csr)
addmm(c_csr, a_csr, b_csc)
addmm(c_csr, a_csc, b_csc)
addmm(c_csr, a_csc, b_csr)
addmm(c_csc, a_csr, b_csr)
addmm(c_csc, a_csr, b_csc)
addmm(c_csc, a_csc, b_csc)
addmm(c_csc, a_csc, b_csr)
```

Currently, these operations for sparse matrices on CPU are available through MKL only. Since MKL does not exist on `aarch64`, these ops are unavailable on machines with ARM-based CPUs, including Apple Silicon, AWS Graviton and NVIDIA Grace. This PR addresses the issue by using Eigen as a backend for the above ops.

This is a refactored version of my previous PR pytorch#101814. The main difference from the old one is that this version does not enable Eigen by default.

Pull Request resolved: pytorch#155357
Approved by: https://github.com/pearu, https://github.com/eqy
Co-authored-by: Eli Uriegas <eliuriegas@meta.com>
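To illustrate the semantics the `add(a_csr, b_csr)` kernel provides, here is a minimal pure-Python sketch of sparse-sparse addition in CSR form. The helper name `csr_add` and its signature are hypothetical for illustration only; this is not the actual ATen/Eigen implementation, which operates on Eigen sparse-matrix maps over the tensors' index and value buffers.

```python
def csr_add(shape, a_crow, a_col, a_val, b_crow, b_col, b_val):
    """Add two CSR matrices; returns (crow_indices, col_indices, values) of the sum."""
    rows, _ = shape
    crow, col, val = [0], [], []
    for i in range(rows):
        # Collect row i of both operands, summing entries that share a column.
        row = {}
        for j in range(a_crow[i], a_crow[i + 1]):
            row[a_col[j]] = row.get(a_col[j], 0.0) + a_val[j]
        for j in range(b_crow[i], b_crow[i + 1]):
            row[b_col[j]] = row.get(b_col[j], 0.0) + b_val[j]
        # Emit the merged row with sorted column indices, as CSR requires.
        for c in sorted(row):
            col.append(c)
            val.append(row[c])
        crow.append(len(col))
    return crow, col, val
```

For example, adding [[1, 0], [0, 2]] and [[0, 3], [4, 0]] in CSR form yields the dense sum [[1, 3], [4, 2]] with crow_indices [0, 2, 4]. The CSC variant is structurally identical with the roles of rows and columns swapped.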
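The addmm variants listed in the description compute beta*C + alpha*(A @ B) with all three operands sparse. As a rough pure-Python model of the all-CSR case (hypothetical name `csr_addmm`; not the actual Eigen-backed kernel, which dispatches per layout combination):

```python
def csr_addmm(m, n, c, a, b, alpha=1.0, beta=1.0):
    """addmm over CSR triples (crow, col, val): returns beta*C + alpha*(A @ B) as CSR.

    A is m x k, B is k x n, C is m x n; each operand is a (crow, col, val) triple.
    """
    c_crow, c_col, c_val = c
    a_crow, a_col, a_val = a
    b_crow, b_col, b_val = b
    crow, col, val = [0], [], []
    for i in range(m):
        acc = {}
        # beta * C, row i
        for j in range(c_crow[i], c_crow[i + 1]):
            acc[c_col[j]] = acc.get(c_col[j], 0.0) + beta * c_val[j]
        # alpha * (A @ B), row i: expand each A[i, k] against row k of B
        for j in range(a_crow[i], a_crow[i + 1]):
            k, a_ik = a_col[j], a_val[j]
            for t in range(b_crow[k], b_crow[k + 1]):
                acc[b_col[t]] = acc.get(b_col[t], 0.0) + alpha * a_ik * b_val[t]
        # Emit row i with sorted column indices
        for cc in sorted(acc):
            col.append(cc)
            val.append(acc[cc])
        crow.append(len(col))
    return crow, col, val
```

The CSC-input and CSC-output paths follow the same row-by-row (or column-by-column) merge, which is why the review above asks whether every layout combination of input and output tensors is exercised by the unit tests.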