Improve LegacyTypeDispatch to handle initialization correctly. #11331
Closed
ezyang wants to merge 9 commits into export-D9657449 from
Conversation
Differential Revision: D9666612 Differential Version: 57069499
Differential Revision: D9666612 Differential Version: 57070636
Differential Revision: D9666612 Differential Version: 57071083
Differential Revision: D9666612 Differential Version: 57071209
ezyang added a commit to ezyang/pytorch that referenced this pull request on Sep 6, 2018
…ch#11304) Summary: Pull Request resolved: pytorch#11304 Pull Request resolved: pytorch#11331 In the previous commit, we added a bare-bones LegacyTypeDispatch in ATen/core. This is not sufficient for the use cases we need: we not only need to be able to get a Type, but we also need to be able to *initialize* the Types if it's the first time we have retrieved a CPU/CUDA/Complex type. I hemmed and hawed about how to do this; the strategy this PR takes is to introduce a new "hooks" interface specifically for initializing CPU/CUDA/Complex (which still lives in Context). We then move all "user-friendly" functions to LegacyTypeDispatch. Here were some other options which I considered, but don't work: - Assume that Type is already initialized, because we only intend to call Type from Tensor methods, where we already have a Tensor. This does not work because Caffe2-created tensors will not have gone through the standard Type codepath, and will have skipped initialization. - Move CUDAHooks and ComplexHooks to ATen/core. Besides being sucky, this isn't even a complete fix, because I still need to initialize CPU hooks (so you *still* need another hooks interface). Differential Revision: D9666612 fbshipit-source-id: d27c0f824aaf885236fae70594d9687079babad3
This was referenced Sep 6, 2018
cpuhrsch reviewed on Sep 6, 2018
const LegacyTypeInitInterface& getLegacyTypeInit() {
  static std::unique_ptr<LegacyTypeInitInterface> legacy_type_init;
  static std::once_flag once;
  std::call_once(once, [] {
This comment was marked as off-topic.
cpuhrsch reviewed on Sep 6, 2018
  detail::getVariableHooks().registerVariableTypeFor(this, b, s);
}
private:
void initBackendIfNeeded(DeviceType p) {
This comment was marked as off-topic.
Differential Revision: D9666612 Differential Version: 57150322
Contributor
did you look into how difficult it is to get caffe2 to do the initialization?
Contributor Author
Caffe2 mobile can't do the initialization, so I concluded that having Caffe2 handle it was not possible (unless you start ifdef'ing).
Differential Revision: D9666612 Differential Version: 57185689
Differential Revision: D9666612 Differential Version: 57185830
gchanan reviewed on Sep 7, 2018
  detail::getVariableHooks().registerVariableTypeFor(this, b, s);
}
private:
void initBackendIfNeeded(DeviceType p) {
This comment was marked as off-topic.
gchanan approved these changes on Sep 7, 2018
Differential Revision: D9666612 Differential Version: 57246346
Differential Revision: D9666612 Differential Version: 57247570
zdevito pushed a commit to zdevito/ATen that referenced this pull request on Sep 8, 2018
Summary: Pull Request resolved: pytorch/pytorch#11331 (summary identical to the commit message above) Reviewed By: cpuhrsch Differential Revision: D9666612 fbshipit-source-id: ac7004b230044b67d13caa81fdfaf3c6ab915e3f
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request on Sep 11, 2018
…ch#11331) Summary: Pull Request resolved: pytorch#11331 (summary identical to the commit message above) Reviewed By: cpuhrsch Differential Revision: D9666612 fbshipit-source-id: ac7004b230044b67d13caa81fdfaf3c6ab915e3f
Stack:
:white_circle: #11274 Move the type registry out of Context, into LegacyTypeDispatch. 💚
:black_circle: #11331 Improve LegacyTypeDispatch to handle initialization correctly. 💚
In the previous commit, we added a bare-bones LegacyTypeDispatch in ATen/core.
This is not sufficient for the use cases we need: we not only need to be able to
get a Type, but we also need to be able to initialize the Types if it's the first time
we have retrieved a CPU/CUDA/Complex type. I hemmed and hawed about how
to do this; the strategy this PR takes is to introduce a new "hooks" interface
specifically for initializing CPU/CUDA/Complex (which still lives in Context). We then
move all "user-friendly" functions to LegacyTypeDispatch.
Here were some other options which I considered, but which don't work:
- Assume that Type is already initialized, because we only intend to call Type
from Tensor methods, where we already have a Tensor. This does not work
because Caffe2-created tensors will not have gone through the standard
Type codepath, and will have skipped initialization.
- Move CUDAHooks and ComplexHooks to ATen/core. Besides being sucky,
this isn't even a complete fix, because I still need to initialize CPU hooks
(so you still need another hooks interface).
Differential Revision: D9666612