Sparse CSR CPU: add torch.addmm #65606

IvanYashchuk wants to merge 49 commits into gh/ivanyashchuk/39/base

Conversation
This PR adds the `torch.addmm(c, a, b, alpha=1.0, beta=0.0, out=out)` variant with `a`, `b`, `c`, and `out` all being sparse CSR tensors on CPU.
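As a quick illustration of the new variant, assuming a CPU build of PyTorch with MKL sparse support (matrix sizes and values below are arbitrary, chosen only for the example):

```python
import torch

# Dense reference operands (values arbitrary, for illustration only).
A = torch.tensor([[1., 0.], [0., 2.]])
B = torch.tensor([[3., 0.], [0., 4.]])
C = torch.tensor([[1., 0.], [0., 1.]])

# Convert to sparse CSR on CPU.
a, b, c = A.to_sparse_csr(), B.to_sparse_csr(), C.to_sparse_csr()

# out = beta * c + alpha * (a @ b), with all operands sparse CSR.
out = torch.addmm(c, a, b, alpha=1.0, beta=0.5)

# The result matches the equivalent dense computation.
expected = torch.addmm(C, A, B, alpha=1.0, beta=0.5)
assert torch.allclose(out.to_dense(), expected)
```

The result tensor stays in the sparse CSR layout, so no dense intermediate is materialized.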
⚛️ CI Flow Status — Ruleset Version

You can add a comment to the PR and tag @pytorchbot with the following commands:

```
# ciflow rerun; "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun
# ciflow rerun with additional labels "-l <ciflow/label_name>", equivalent to
# adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow
```

For more information, please take a look at the CI Flow Wiki.
💊 CI failures summary: as of commit 26f14d4, there are no failures (see the Dr. CI page for more details). This comment was automatically generated by Dr. CI; please report bugs/suggestions to the (internal) Dr. CI Users group.
```cpp
 * `tensor` - 2D strided Tensor.
 * `row_major` - controls the memory layout.
 */
c10::MaybeOwned<Tensor> prepare_dense_matrix_for_mkl(
```
Review comment: Why do you need this overload and the one above? Is it for n-dim Tensor support?

Reply: The one above returns either a row- or column-major matrix, whichever is cheapest. This overload returns the matrix in a specified layout.
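For intuition, the layout-preparation idea can be sketched at the Python level. This is a hypothetical analogue, not the actual C++ helper: return the matrix contiguous in the requested memory layout, copying only when necessary.

```python
import torch

def prepare_dense_matrix(tensor, row_major=True):
    # Hypothetical Python analogue of prepare_dense_matrix_for_mkl:
    # return `tensor` contiguous in the requested layout, copying only if needed.
    if row_major:
        return tensor if tensor.is_contiguous() else tensor.contiguous()
    # Column-major means the transpose is row-major contiguous.
    if tensor.t().is_contiguous():
        return tensor
    return tensor.t().contiguous().t()

m = torch.rand(3, 4)
col = prepare_dense_matrix(m, row_major=False)
# Same values, but now laid out column-major in memory.
assert torch.equal(col, m) and col.t().is_contiguous()
```

Avoiding the copy when the layout already matches is the point of returning `c10::MaybeOwned<Tensor>` in the C++ helper: the borrowed case costs nothing.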
```cpp
#if !AT_MKL_ENABLED()
  if (mat2.is_sparse_csr() && result.is_sparse_csr()) {
    TORCH_CHECK(false, "Calling addmm on sparse CPU tensors requires compiling PyTorch with MKL. Please use PyTorch built with MKL.");
```
Review comment: nit: I know this is using AT_MKL_ENABLED instead of AT_USE_MKL_SPARSE, but for this particular condition, shouldn't the error message be the same (i.e. it needs MKL and Linux)?

Reply: Right, AT_USE_MKL_SPARSE should probably be used here instead of AT_MKL_ENABLED.
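At the Python level this guard surfaces as a RuntimeError on builds without MKL sparse support. A hypothetical caller-side fallback (the wrapper name and strategy are illustrative, not part of PyTorch) might look like:

```python
import torch

def csr_addmm_with_fallback(c, a, b, alpha=1.0, beta=1.0):
    # Hypothetical wrapper: try the sparse-CSR addmm path; if this build
    # was compiled without MKL sparse support, fall back to dense compute.
    try:
        return torch.addmm(c, a, b, alpha=alpha, beta=beta)
    except RuntimeError:
        dense = torch.addmm(c.to_dense(), a.to_dense(), b.to_dense(),
                            alpha=alpha, beta=beta)
        return dense.to_sparse_csr()

a = torch.eye(2).to_sparse_csr()
# beta * a + alpha * (a @ a) = I + 2 * I = 3 * I
out = csr_addmm_with_fallback(a, a, a, alpha=2.0, beta=1.0)
assert torch.allclose(out.to_dense(), 3.0 * torch.eye(2))
```

The dense fallback defeats the memory savings of CSR, so it only makes sense for small operands or as a stopgap on non-MKL builds.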
@cpuhrsch has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary:
Pull Request resolved: pytorch#65606

This PR adds the `torch.addmm(c, a, b, alpha=1.0, beta=0.0, out=out)` variant with `a`, `b`, `c`, and `out` all being sparse CSR tensors on CPU.

cc nikitaved pearu cpuhrsch IvanYashchuk

Test Plan: Imported from OSS
Reviewed By: mrshenli
Differential Revision: D32366236
Pulled By: cpuhrsch
fbshipit-source-id: e910bcc96eee99d624b80ee881df3887ab3ba5ac
Stack from ghstack:
- triangular_solve_out #62180
- torch.addmm #65606 (this PR)

This PR adds the `torch.addmm(c, a, b, alpha=1.0, beta=0.0, out=out)` variant with `a`, `b`, `c`, and `out` all being sparse CSR tensors on CPU.

cc @nikitaved @pearu @cpuhrsch @IvanYashchuk

Differential Revision: [D32366236](https://our.internmc.facebook.com/intern/diff/D32366236)