Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault #11205
Closed
Conversation
ssnl reviewed Sep 3, 2018
bool is_sparse() const override {
  return backend() == Backend::SparseCPU || backend() == Backend::SparseCUDA;
}
bool is_distributed() const override {
Contributor (Author)
@pytorchbot retest this please
5 similar comments
zdevito pushed a commit to zdevito/ATen that referenced this pull request on Sep 5, 2018
Summary: Pull Request resolved: pytorch/pytorch#11205

Our short-term plan for supporting out-of-tree complex development requires an external library to add a custom subclass of Type without access to the code generation facilities in ATen. This commit reorganizes Type so as to minimize the amount of boilerplate you have to write when making a subclass of Type. In particular, it:

- Creates new CPUTypeDefault/CUDATypeDefault classes, which you are intended to inherit from, and which provide default CPU/CUDA implementations that are layout/dtype agnostic.
- Adds new getCPUAllocator() and getCUDAAllocator() functions, as a more public API for getting your hands on an Allocator.
- Adds allocator() and getDeviceFromPtr(), abstracting the device-specific parts of the storage() methods; these methods are now implemented in the base TypeDefault.
- Deletes the static typeString() method, which is now dead.
- Moves is_cuda/is_sparse/is_distributed to TypeDefault.

Reviewed By: SsnL
Differential Revision: D9631619
fbshipit-source-id: 40b600d99691230e36e03eb56434c351cbc2aa3a
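The "inherit from a default class" idea in the summary can be sketched as follows. This is a minimal, self-contained illustration of the pattern, not the real ATen API: `Backend`, `Allocator`, `TypeDefault`, `CPUTypeDefault`, and `MyComplexCPUType` here are simplified stand-ins with invented signatures.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Hypothetical stand-ins for ATen's Backend enum and Allocator interface.
enum class Backend { CPU, CUDA, SparseCPU, SparseCUDA };

struct Allocator {
  virtual ~Allocator() = default;
  virtual void* allocate(std::size_t n) = 0;
  virtual void deallocate(void* p) = 0;
};

struct MallocAllocator : Allocator {
  void* allocate(std::size_t n) override { return std::malloc(n); }
  void deallocate(void* p) override { std::free(p); }
};

// Layout/dtype-agnostic defaults live in the base, as in TypeDefault:
// is_sparse/is_cuda are written once in terms of backend().
struct TypeDefault {
  virtual ~TypeDefault() = default;
  virtual Backend backend() const = 0;
  virtual Allocator* allocator() const = 0;  // device-specific hook
  bool is_sparse() const {
    return backend() == Backend::SparseCPU || backend() == Backend::SparseCUDA;
  }
  bool is_cuda() const {
    return backend() == Backend::CUDA || backend() == Backend::SparseCUDA;
  }
};

// A CPUTypeDefault-style intermediate base: fixes the allocator once,
// so concrete CPU types need no per-dtype boilerplate.
struct CPUTypeDefault : TypeDefault {
  Allocator* allocator() const override {
    static MallocAllocator alloc;
    return &alloc;
  }
};

// What an out-of-tree backend would write: only the backend tag.
struct MyComplexCPUType : CPUTypeDefault {
  Backend backend() const override { return Backend::CPU; }
};
```

With this split, an external library subclasses the intermediate base and inherits working allocator and predicate methods for free, which is the boilerplate reduction the commit is after.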
petrex pushed a commit to petrex/pytorch that referenced this pull request on Sep 5, 2018
resolve conflict in data parallel model

* master: (201 commits)
  - Add cost inference to ConvGradient and WeightedSum operators (pytorch#10744)
  - Move collapse dims into a single place (pytorch#11272)
  - Fix some more warnings (pytorch#11257)
  - Fix the batchnorm onnx exporting when affine=False
  - Improve error message to include return types too (pytorch#11245)
  - Check doxygen output in travis (pytorch#11124)
  - Accept more numpy scalars as doubles (pytorch#9659)
  - Fixed log message (pytorch#10874)
  - Fix to distribution.__repr__ with lazy attributes (pytorch#11263)
  - Add import export step to end to end tests
  - Add complex hooks for out of tree complex implementation. (pytorch#11216)
  - Unify opt flag for cmake codegen (pytorch#11227)
  - nomnigraph - fix memory error in NN subgraph matchOp (pytorch#11127)
  - Port PackedSequences functions to C++ (pytorch#11224)
  - Treat numerical differences as warnings instead of errors when tracing (pytorch#11246)
  - add a Float16UniformFill (pytorch#11123)
  - Implement torch.tensordot (pytorch#10025)
  - keep net type info when generating model complete net (pytorch#11032)
  - Get rid of some uses of type() (pytorch#11215)
  - Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault (pytorch#11205)
  ...
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request on Sep 11, 2018
(commit summary identical to the zdevito/ATen commit above; Differential Revision: D9631619)
Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault

Our short-term plan for supporting out-of-tree complex development requires an external library to add a custom subclass of Type without access to the code generation facilities in ATen. This commit reorganizes Type so as to minimize the amount of boilerplate you have to write when making a subclass of Type.

In particular, it:

- Creates new CPUTypeDefault/CUDATypeDefault classes, which you are intended to inherit from, and which provide default CPU/CUDA implementations that are layout/dtype agnostic.
- Adds new getCPUAllocator() and getCUDAAllocator() functions, as a more public API for getting your hands on an Allocator.
- Adds allocator() and getDeviceFromPtr(), abstracting the device-specific parts of the storage() methods; these methods are now implemented in the base TypeDefault.
- Deletes the static typeString() method, which is now dead.
- Moves is_cuda/is_sparse/is_distributed to TypeDefault.

Differential Revision: D9631619
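The allocator()/getDeviceFromPtr() refactor described above can be illustrated with a small sketch: the storage-construction logic is written once in the base, and only the two device-specific hooks are virtual. All names and signatures here (`Device`, `Storage`, `raw_allocate`, `storageWithSize`) are simplified inventions for illustration, not the real ATen declarations.

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Toy device/storage types standing in for ATen's.
struct Device {
  enum class Kind { CPU, CUDA } kind;
  int index;
};

struct Storage {
  void* data;
  std::size_t size;
  Device device;
};

// The base class owns the generic storage() logic; subclasses only
// supply where memory comes from and what device a pointer lives on.
struct TypeDefault {
  virtual ~TypeDefault() = default;
  virtual void* raw_allocate(std::size_t n) = 0;          // device-specific
  virtual Device getDeviceFromPtr(void* p) const = 0;     // device-specific
  // Written once in the base, as the PR moves storage() into TypeDefault.
  Storage storageWithSize(std::size_t n) {
    void* data = raw_allocate(n);
    return Storage{data, n, getDeviceFromPtr(data)};
  }
};

// A CPU default base: the two hooks are trivial for host memory.
struct CPUTypeDefault : TypeDefault {
  void* raw_allocate(std::size_t n) override { return ::operator new(n); }
  Device getDeviceFromPtr(void*) const override {
    return Device{Device::Kind::CPU, -1};
  }
};
```

A CUDA counterpart would override the same two hooks with a device allocator and a pointer-attribute query, leaving `storageWithSize` untouched; that inversion is what lets the shared method live in the base class.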