[READY] Preserve sparse tensor shape and dim invariants, and add scalar tensor support #9279
yf225 wants to merge 46 commits into pytorch:master
Conversation
should the invariant description be changed also? (pytorch/aten/src/ATen/SparseTensorImpl.h, lines 11 to 28 in 9ae77cc) Oh, you've changed it already, nvm :)
@gchanan @ezyang At a high level, I made the following changes:
After zero-dim size is enabled by default, we can search
@pytorchbot retest this please
should we have a no-arg (or at least no indices, values, shape) constructor, e.g.
@pytorchbot retest this please
facebook-github-bot left a comment:
@yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
I restarted the
@gchanan If we add a no-arg constructor to torch.sparse_coo_tensor(), isn't that a bit confusing, given that torch.tensor() doesn't support a no-arg call?
@li-roy that's a fair point. Perhaps just a size-based one is sufficient, then.
```python
x_dense.resize_as_(y_dense)
self.assertEqual(x.shape, y.shape)
self.assertEqual(x._sparseDims(), y._sparseDims())
self.assertEqual(x._denseDims(), y._denseDims())
```
Force-pushed from c1dc859 to 54a54df.
```python
self._test_resize_shape([1, 1], [1, 2, 3], [2, 2, 3],
                        [1, 1], [1, 2, 3], [1, 2, 3])

# 8. Shrink the size of some dense dimensions on a non-empty sparse tensor [Not Supported]
```
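The "[Not Supported]" case above can be made concrete with a small sketch. This is a hypothetical pure-Python predicate (not PyTorch API; `can_resize_sparse` and its rule are illustrative only) of one rule consistent with case 8: an empty sparse tensor can be resized freely, but on a non-empty one the dense dimensions may only grow, never shrink, since shrinking would truncate existing values.

```python
def can_resize_sparse(old_shape, new_shape, sparse_dims, nnz):
    """Hypothetical check: is resizing a sparse tensor from old_shape to
    new_shape allowed? (Illustrative sketch, not actual PyTorch logic.)"""
    if nnz == 0:
        # An empty sparse tensor carries no values, so any resize is fine.
        return True
    if len(new_shape) != len(old_shape):
        # Changing dimensionality of a non-empty tensor is not supported.
        return False
    old_dense = old_shape[sparse_dims:]
    new_dense = new_shape[sparse_dims:]
    # Growing dense dims is fine; shrinking any of them is not supported,
    # because stored values of shape (nnz, *dense_dims) would be truncated.
    return all(n >= o for n, o in zip(new_dense, old_dense))

# Growing a dense dimension on a non-empty tensor: allowed.
assert can_resize_sparse((2, 3), (2, 4), sparse_dims=1, nnz=5)
# Case 8 — shrinking a dense dimension on a non-empty tensor: not supported.
assert not can_resize_sparse((2, 3), (2, 2), sparse_dims=1, nnz=5)
# Any resize of an empty (nnz == 0) tensor: allowed.
assert can_resize_sparse((2, 3), (7,), sparse_dims=1, nnz=0)
print("ok")
```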
facebook-github-bot left a comment:
yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@pytorchbot retest this please
Preserve sparse tensor shape and dim invariants, and add scalar tensor support (#9279)

Summary: When 0-sized dimension support is added, we expect an empty sparse tensor to be a 1-dimensional tensor of size `[0]`, with `sparseDims == 1` and `denseDims == 0`. Also, we expect the following invariants to be preserved at all times:

```
_sparseDims + _denseDims == len(shape)
_indices.shape: dimensionality: 2, shape: (_sparseDims, nnz)
_values.shape: dimensionality: 1 + _denseDims, shape: (nnz, shape[_sparseDims:])
```

This PR fixes various places where the invariants are not strictly enforced when 0-sized dimension support is enabled. Tested: `test_sparse.py` passes locally on both CPU and CUDA with the `USE_TH_SIZE_ZERO_DIM` flag.

Pull Request resolved: pytorch/pytorch#9279
Differential Revision: D8936683
Pulled By: yf225
fbshipit-source-id: 12f5cd7f52233d3b26af6edc20b4cdee045bcb5e
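The three invariants in the summary can be checked mechanically. The sketch below is a pure-Python illustration (no PyTorch dependency; the function name `check_sparse_invariants` and its signature are hypothetical, not part of the SparseTensorImpl API) showing how the shapes of `_indices` and `_values` relate to `_sparseDims`, `_denseDims`, and `nnz`, including the empty-tensor case the PR defines.

```python
def check_sparse_invariants(shape, sparse_dims, dense_dims,
                            indices_shape, values_shape):
    """Assert the sparse tensor invariants from the PR summary
    (illustrative sketch, not actual PyTorch code)."""
    # _sparseDims + _denseDims == len(shape)
    assert sparse_dims + dense_dims == len(shape)
    # _indices is 2-dimensional with shape (_sparseDims, nnz)
    assert len(indices_shape) == 2 and indices_shape[0] == sparse_dims
    nnz = indices_shape[1]
    # _values has dimensionality 1 + _denseDims: shape (nnz, *shape[_sparseDims:])
    assert values_shape == (nnz, *shape[sparse_dims:])

# Empty sparse tensor under 0-sized-dim support:
# size [0], sparseDims == 1, denseDims == 0, nnz == 0.
check_sparse_invariants(shape=(0,), sparse_dims=1, dense_dims=0,
                        indices_shape=(1, 0), values_shape=(0,))

# A hybrid tensor of shape (2, 3, 2) with 2 sparse dims, 1 dense dim, nnz == 3.
check_sparse_invariants(shape=(2, 3, 2), sparse_dims=2, dense_dims=1,
                        indices_shape=(2, 3), values_shape=(3, 2))
print("invariants hold")
```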