Remove conversion operator from Type to TensorOptions #17603
Closed
li-roy wants to merge 19 commits into export-D14381774 from
Conversation
Differential Revision: D14276588 Differential Version: 73837916
This was referenced Mar 1, 2019
Differential Revision: D14276588 Differential Version: 73946908
Differential Revision: D14276588 Differential Version: 73955527
added 4 commits
March 5, 2019 15:03
Differential Revision: D14276588 Differential Version: 74349067
Differential Revision: D14276588 Differential Version: 74395304
Differential Revision: D14276588 Differential Version: 74488006
Differential Revision: D14276588 Differential Version: 74496474
added 2 commits
March 6, 2019 16:33
Differential Revision: D14276588 Differential Version: 74537075
Differential Revision: D14276588 Differential Version: 74694652
This was referenced Mar 8, 2019
added 3 commits
March 7, 2019 18:19
Differential Revision: D14276588 Differential Version: 74711137
Differential Revision: D14276588 Differential Version: 74714417
Differential Revision: D14276588 Differential Version: 74716823
Differential Revision: D14276588 Differential Version: 74726652
gchanan reviewed on Mar 8, 2019
```diff
 IntTensor _to_csr_int(const LongTensor& rowIndices, int64_t dim, int64_t nnz) {
-  IntTensor csr = at::empty({dim+1}, CUDA(kInt));
+  IntTensor csr = at::empty({dim+1}, TensorOptions(kCPU).dtype(kInt));
   IntTensor rowIndicesInt = at::empty({rowIndices.size(0)}, CUDA(kInt));
```
didn't you change the types of these?
```diff
 int64_t nnz = sparse._nnz();
-LongTensor indices = at::empty({1, nnz}, CUDA(kLong));
+LongTensor indices = at::empty({1, nnz}, TensorOptions(kCPU).dtype(kLong));
```
```diff
 AT_CHECK(cuda::getApplyGrid(valueSize, grid, curDevice), "mul: Argument #0: tensor too large or too many dimensions");
-LongTensor resultNnz = at::empty({1}, CUDA(kLong));
+LongTensor resultNnz = at::empty({1}, TensorOptions(kCPU).dtype(kLong));
```
```diff
-void TestZeroDim(Type& type) {
+void TestZeroDim(TensorOptions &options) {
   Tensor a = at::scalar_tensor(4, type.options()); // rand(type, {1});
```
I don't think you need references here all over this file -- the issue was you can't have a value-type Type.
```diff
 // can't expand empty tensor
-void TestEmptyTensor(Type& T) {
+void TestEmptyTensor(TensorOptions& options) {
   auto empty = randn({0}, T);
```
```diff
-test(CPU(kFloat), CPU(kDouble));
+auto options = device(kCPU).dtype(kFloat);
+test(CPU(kFloat), options, CPU(kDouble));
```
I don't think this makes sense.
```diff
 if (at::hasCUDA()) {
-  test(CUDA(kFloat), CUDA(kDouble));
+  auto options = device(kCUDA).dtype(kFloat);
+  test(CUDA(kFloat), options, CUDA(kDouble));
 }
```
```diff
-void test(Type &T) {
+void test(TensorOptions &options) {
```
```diff
 using namespace at;
-void TestSimpleCase(Type& T) {
+void TestSimpleCase(TensorOptions& options) {
   auto a = randn({2, 3, 4, 5}, T);
```
```diff
 REQUIRE_OPTIONS(kCPU, -1, kInt, kStrided);
-options = TensorOptions(getNonVariableType(Backend::SparseCPU, kFloat));
+options = TensorOptions(kCPU).dtype(kFloat).layout(kSparse);
```
nit: how come sometimes you do TensorOptions(kCPU) and other times device(kCPU) to start?
gchanan reviewed on Mar 8, 2019
```diff
 TypeAndSize(const Tensor & t)
   : sizes(t.sizes().vec())
-  , type(&t.type()) {}
+  , backend(t.type().backend())
```
added 6 commits
March 8, 2019 13:11
Differential Revision: D14276588 Differential Version: 74808628
Differential Revision: D14276588 Differential Version: 74811464
Differential Revision: D14276588 Differential Version: 74818456
Differential Revision: D14276588 Differential Version: 74831201
Differential Revision: D14276588 Differential Version: 74843008
Differential Revision: D14276588 Differential Version: 74983512
Stack:
:white_circle: #17530 Small clean up of aten_op 💚
:white_circle: #17601 Store ScalarType and Backend instead of Type in TensorIterator 💚
:white_circle: #17785 Remove Type::elementSizeInBytes 💚
:white_circle: #17723 Store python default type as PyTensorType instead of at::Type 💚
:white_circle: #17786 Pass ScalarType separately from Type in python constructors 💚
:white_circle: #17792 Remove Type::ScalarType() 💚
:black_circle: #17603 Remove conversion operator from Type to TensorOptions 💛
:white_circle: #17787 Add ScalarType arg to Type::options() 💛
Differential Revision: D14276588