
Improve LegacyTypeDispatch to handle initialization correctly.#11304

Closed
ezyang wants to merge 7 commits into pytorch:export-D9657449 from ezyang:export-D9666612

Conversation

Contributor

@ezyang ezyang commented Sep 5, 2018

Improve LegacyTypeDispatch to handle initialization correctly.

In the previous commit, we added a bare-bones LegacyTypeDispatch in ATen/core.
This is not sufficient for the use cases we need: we not only need to be able to
get a Type, but we also need to be able to initialize the Types if it's the first time
we have retrieved a CPU/CUDA/Complex type. I hemmed and hawed about how
to do this; the strategy this PR takes is to introduce a new "hooks" interface
specifically for initializing CPU/CUDA/Complex (which still lives in Context). We then
move all "user-friendly" functions to LegacyTypeDispatch.

Here are some other options I considered that don't work:

  • Assume that Type is already initialized, because we only intend to call Type
    from Tensor methods, where we already have a Tensor. This does not work
    because Caffe2-created tensors will not have gone through the standard
    Type codepath, and will have skipped initialization.
  • Move CUDAHooks and ComplexHooks to ATen/core. Besides being sucky,
    this isn't even a complete fix, because I still need to initialize CPU hooks
    (so you still need another hooks interface).
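The lazy-initialization strategy described above can be sketched roughly like this (class and method names here are illustrative, not the actual ATen API): the dispatch table consults an abstract hooks interface the first time a backend's Type is requested, so even a Caffe2-created tensor that skipped the usual construction path still triggers initialization exactly once.

```cpp
#include <cassert>
#include <memory>
#include <mutex>

// Stand-in for at::Type; the real class is a large virtual interface.
struct Type {
  virtual ~Type() = default;
};

// Hypothetical hooks interface: the dispatcher only sees this abstract
// class, so the CPU/CUDA/Complex initialization code can stay out of
// ATen/core (it still lives in Context).
struct LegacyTypeInitInterface {
  virtual ~LegacyTypeInitInterface() = default;
  virtual std::unique_ptr<Type> initCPUType() const = 0;
};

class LegacyTypeDispatch {
 public:
  // "User-friendly" accessor: runs the backend's init hook exactly once,
  // no matter how the requesting tensor was created.
  Type* getTypeWithInit(const LegacyTypeInitInterface& hooks) {
    std::call_once(cpu_once_, [&] { cpu_type_ = hooks.initCPUType(); });
    return cpu_type_.get();
  }

 private:
  std::once_flag cpu_once_;
  std::unique_ptr<Type> cpu_type_;
};

// Example hook implementation that counts how often it actually runs.
struct CountingCPUHooks : LegacyTypeInitInterface {
  mutable int calls = 0;
  std::unique_ptr<Type> initCPUType() const override {
    ++calls;
    return std::unique_ptr<Type>(new Type());
  }
};
```

Repeated calls to `getTypeWithInit` hand back the same pointer while the hook fires only on the first request.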

Differential Revision: D9666612

Stack:

  1. Move TensorOptions.cpp to the correct place in ATen/core (#11244)
  2. Rename getMaybeVariableType back to getType. (#11250)
  3. Move ATen/Registry.h to ATen/core/Registry.h (#11270)
  4. Move VariableHooksInterface to ATen/core (#11273)
  5. Move the type registry out of Context, into LegacyTypeDispatch. (#11274)
  6. 📍 Improve LegacyTypeDispatch to handle initialization correctly. (#11304)

…n TensorImpl. (pytorch#11283)

Summary:
Pull Request resolved: pytorch#11283

Reviewed By: ezyang

Differential Revision: D9660015

Pulled By: gchanan

fbshipit-source-id: 263cba226d9ee981d55281c94e6fda5842a46b02
ezyang force-pushed the export-D9666612 branch 2 times, most recently from 768e049 to 89bf56a (September 6, 2018 15:58)
Summary: Also had to move OptionsGuard.

Differential Revision: D9646190

fbshipit-source-id: f7483455fbda014a8ea97880ed38232514ac2c7d
Summary:
```
codemod -d . --extensions cc,cpp,cu,cuh,h getMaybeVariableType getType
```

Differential Revision: D9648830

fbshipit-source-id: 96e454298747f0abbd75c583c9ab2974646cb013
Summary:
Still need to deduplicate this with caffe2/core/registry.h,
but this will be a bit tricky because the current formulation
of the macro is namespace sensitive (i.e., the macro for classes
defined in at:: namespace won't work if you call from caffe2::
namespace).

Differential Revision: D9654871

fbshipit-source-id: 6a5cb46dbb5b91fc8a1480494997de5b07a3bc3a
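The namespace sensitivity described in this commit can be illustrated with a toy macro (a hedged sketch, not the actual Registry.h macro): if the expansion refers to the registration machinery by an unqualified name, it only resolves when the macro is invoked from a namespace where that name is visible; fully qualifying the name fixes it.

```cpp
namespace at {
// Toy registration function standing in for the real registry machinery.
inline int register_thing(int id) { return id; }
}  // namespace at

// Namespace-sensitive version: `register_thing` is unqualified, so this
// macro only compiles when expanded inside namespace at.
#define AT_REGISTER_RELATIVE(id) \
  static int registered_##id = register_thing(id);

// Namespace-insensitive fix: fully qualify the symbol.
#define AT_REGISTER_ABSOLUTE(id) \
  static int registered_##id = ::at::register_thing(id);

namespace at {
AT_REGISTER_RELATIVE(1)  // OK: expanded inside at::
}  // namespace at

namespace caffe2 {
// AT_REGISTER_RELATIVE(2)  // would not compile: register_thing not found
AT_REGISTER_ABSOLUTE(2)     // OK from any namespace
}  // namespace caffe2
```

Deduplicating with caffe2/core/registry.h would presumably mean moving every expansion to the fully qualified form.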
Summary:
This one might strike you as a bit surprising, but it's necessary
to expose this interface in ATen/core, because we need to be
able to get a true Variable type from Variable tensors, and
to do that we need to go through the hooks interface.

Differential Revision: D9656548

fbshipit-source-id: 53820fcd76c60fafeb751d33432db570459cd8cc
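The hooks pattern this commit relies on can be sketched like so (names are illustrative, not the real VariableHooksInterface): core code programs against an abstract interface whose default implementation behaves as if autograd were absent, and the autograd library registers an override when it is linked in.

```cpp
// Stand-in for at::Type.
struct Type {
  virtual ~Type() = default;
};

// Hypothetical shape of a variable-hooks interface: the default hands the
// plain base type back unchanged, so ATen/core never depends on autograd.
struct VariableHooksInterface {
  virtual ~VariableHooksInterface() = default;
  virtual Type* getVariableType(Type* base) const { return base; }
};

// What a registered autograd implementation might do: substitute the
// Variable-aware type for the plain one.
struct AutogradHooks : VariableHooksInterface {
  Type* variable_type;
  explicit AutogradHooks(Type* vt) : variable_type(vt) {}
  Type* getVariableType(Type* /*base*/) const override {
    return variable_type;
  }
};
```

Callers always go through the interface, so they get the true Variable type whenever the override is registered.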
Summary:
We don't want to put all of Context into ATen/core, but one
particular part cannot be avoided: the type registry, because
implementations of TensorMethods will need to get a Type,
and then do a virtual call on it.

I needed to do a little bit of (temporary) footwork to get this
in without also moving Type, because unique_ptr<Type> expects
to be able to see the destructor of Type (but it's forward declared
right now).  So instead I put the destructor as an explicit functor.  We
can get rid of this once Type actually moves into ATen/core.

Differential Revision: D9657449

fbshipit-source-id: ae83d80bd8893034fe5acf3e660b788162946602
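The footwork described here boils down to giving the smart pointer a deleter functor whose call operator is only defined where the type is complete, so `std::unique_ptr` never needs to see `~Type()` at the point of instantiation. A minimal sketch, with `Type` collapsed to a stub:

```cpp
#include <memory>

struct Type;  // forward declaration; the full definition lives elsewhere

// Explicit deleter functor: declaring operator() here does not require
// ~Type(), so the unique_ptr alias below works against the incomplete type.
struct LegacyTypeDeleter {
  void operator()(Type* t) const;
};

using TypeUniquePtr = std::unique_ptr<Type, LegacyTypeDeleter>;

// --- later, in a source file where Type is complete ---
struct Type {
  int id = 0;
};

void LegacyTypeDeleter::operator()(Type* t) const {
  delete t;  // legal here: Type is complete at this point
}
```

With the default deleter, `std::unique_ptr<Type>` would instantiate `delete` against the incomplete type, which is exactly the error this works around.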
…ch#11304)

Summary:
Pull Request resolved: pytorch#11304

Pull Request resolved: pytorch#11331

In the previous commit, we added a bare-bones LegacyTypeDispatch in ATen/core.
This is not sufficient for the use cases we need: we not only need to be able to
get a Type, but we also need to be able to *initialize* the Types if it's the first time
we have retrieved a CPU/CUDA/Complex type. I hemmed and hawed about how
to do this; the strategy this PR takes is to introduce a new "hooks" interface
specifically for initializing CPU/CUDA/Complex (which still lives in Context). We then
move all "user-friendly" functions to LegacyTypeDispatch.

Here are some other options I considered that don't work:
- Assume that Type is already initialized, because we only intend to call Type
  from Tensor methods, where we already have a Tensor. This does not work
  because Caffe2-created tensors will not have gone through the standard
  Type codepath, and will have skipped initialization.
- Move CUDAHooks and ComplexHooks to ATen/core. Besides being sucky,
  this isn't even a complete fix, because I still need to initialize CPU hooks
  (so you *still* need another hooks interface).

Differential Revision: D9666612

fbshipit-source-id: d27c0f824aaf885236fae70594d9687079babad3