Unify THStorage and THCStorage structs. (#9087) #9107

Closed
ezyang wants to merge 1 commit into pytorch:master from ezyang:export-D8712072

Conversation


@ezyang ezyang commented Jul 2, 2018

Summary:
Some details about how this was done:

  • For now, the allocators for CPU and CUDA are different (unifying
    the allocators is a bigger change to make, I'll contribute this in
    a later patch). To smooth this over, the allocator field now
    stores a void* instead of THAllocator* or THCDeviceAllocator*; to
    make this clear the field is renamed to allocatorVoidPtr.

  • Some THStorage functions that were generated per-scalar are now
    generalized, and thus moved out of the generic/ library. This way
    they can be called directly from a non-code-generated at::Storage.

  • THCState is moved into a C++ header. This is not really related to
    this particular diff, but I'll need it soon to replace
    THAllocator/THCDeviceAllocator with at::Allocator (which is C++, so
    I can't mention it in a C header file).

  • THPPointer needs to be adjusted, since there is no more type refinement
    between THStorage/THCStorage for it to template match over. This
    is a little tricky, because I can't refer to THCStorage_free unless
    we actually compile with CUDA. So there are now two copies of the
    function: one for the CPU build and one for the CUDA build. If we
    ever split CUDA/non-CUDA Python builds, this will have to go through
    some form of dynamic dispatch.

I want to soon replace the THCDeviceAllocator pointers in
THCState with at::Allocator, but I can't reference a C++ namespaced type
from C code, so THCState needs to move.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Closes #9087

Differential Revision: D8712072

@ezyang ezyang force-pushed the export-D8712072 branch from 76ad9c6 to 9bdfcb5 Compare July 2, 2018 19:10
zdevito pushed a commit to zdevito/ATen that referenced this pull request Jul 3, 2018
pietern added a commit to pietern/pytorch that referenced this pull request Jul 4, 2018
This file was added in pytorch#9107 but wasn't installed. The libraries in
./torch/lib use the headers from Caffe2/ATen from their temporary
install path at torch/lib/tmp_install, and c10d was not able to find
THC/THCGeneral.hpp before this fix.
facebook-github-bot pushed a commit that referenced this pull request Jul 4, 2018
Summary:
Closes #9159

Reviewed By: Yangqing

Differential Revision: D8731107

Pulled By: pietern

fbshipit-source-id: d6009f6f6e8e6e0f37dea24cc4c3570736943ab1
zdevito pushed a commit to zdevito/ATen that referenced this pull request Jul 13, 2018
goodlux pushed a commit to goodlux/pytorch that referenced this pull request Aug 15, 2018
goodlux pushed a commit to goodlux/pytorch that referenced this pull request Aug 15, 2018