- Are you willing to contribute it (Yes/No):
Not me personally, but perhaps someone I work with
**Describe the feature and the current behavior/state.**
A number of GPU array computing libraries in Python (Numba, CuPy, PyTorch, RAPIDS) support the `__cuda_array_interface__` protocol, as described in the Numba documentation. This protocol lets data move between different GPU array computing systems without explicit coordination:
```python
x = lib_1.create_array()
y = lib_2.as_array(x)
```

or, more concretely:

```python
t = torch.Tensor(...)
x = tensorflow.convert_to_tensor(t)
```
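To make the protocol concrete, here is a minimal sketch of the dictionary a producer exposes, following the keys described in the Numba documentation. The class name and the pointer value are placeholders for illustration; a real implementation would expose an actual device pointer to GPU memory.

```python
# Sketch of a producer-side __cuda_array_interface__ (per the Numba docs).
# The pointer below is a dummy value, not real GPU memory.
class FakeGpuArray:
    def __init__(self, shape, typestr, ptr):
        self._shape = shape
        self._typestr = typestr
        self._ptr = ptr

    @property
    def __cuda_array_interface__(self):
        return {
            "shape": self._shape,        # tuple of ints
            "typestr": self._typestr,    # NumPy-style type string, e.g. "<f4"
            "data": (self._ptr, False),  # (device pointer, read-only flag)
            "version": 2,                # protocol version
            "strides": None,             # None means C-contiguous
        }

arr = FakeGpuArray((3, 4), "<f4", 0xDEADBEEF)
iface = arr.__cuda_array_interface__
```

A consumer only needs to read this dictionary to locate and describe the device buffer; no library-to-library coordination is required.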
This involves two changes to TensorFlow:
- We would add a `__cuda_array_interface__` property to tensor objects backed by a GPU, providing information about the GPU memory.
- We would add checks in functions that convert external objects into TensorFlow tensors, looking for this attribute and using it if present.
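The consumer-side change above can be sketched in plain Python. This is a hypothetical illustration, not actual TensorFlow source: the function name mirrors `convert_to_tensor`, but the dispatch logic and return values here are stand-ins.

```python
# Hypothetical sketch of the consumer-side check: the conversion function
# first looks for __cuda_array_interface__ and uses it if present,
# otherwise it falls back to its ordinary path.
def convert_to_tensor(obj):
    iface = getattr(obj, "__cuda_array_interface__", None)
    if iface is not None:
        # A real implementation would wrap the device pointer from
        # iface["data"] in a GPU-backed tensor with zero copy.
        return ("gpu_tensor", iface["shape"], iface["typestr"])
    # Fallback: the ordinary host-side conversion path.
    return ("host_tensor", obj)

class FakeGpuArray:
    """Stand-in producer exposing the protocol with a placeholder pointer."""
    __cuda_array_interface__ = {
        "shape": (2, 2),
        "typestr": "<f8",
        "data": (0x1000, False),
        "version": 2,
    }

print(convert_to_tensor(FakeGpuArray()))  # taken by the GPU branch
print(convert_to_tensor([1.0, 2.0]))      # falls back to host path
```

Because the check is attribute-based, any producer library that exposes the protocol works without TensorFlow knowing about it specifically.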
**Will this change the current API? How?**
Only in a backwards-compatible way. It will add a new `__cuda_array_interface__` property to tensor objects, and will also add a check for this attribute in functions designed to convert other objects into TensorFlow tensors.
**Who will benefit from this feature?**
A protocol like this already exists on the CPU side with `__array__`, but on the GPU side interoperability is still manual. Protocols like this make it easier both to use these array computing libraries together, and to build generic external systems that can interoperate with a number of different array computing libraries.
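The CPU-side analogue mentioned above works as follows: `np.asarray` consults an object's `__array__` method, letting a foreign container hand over its data without any explicit coordination. The container class here is invented for illustration.

```python
import numpy as np

# A foreign container that participates in the CPU-side protocol by
# implementing __array__; np.asarray() will call it automatically.
class MyContainer:
    def __init__(self, values):
        self._values = values

    def __array__(self, dtype=None, copy=None):
        return np.array(self._values, dtype=dtype)

a = np.asarray(MyContainer([1, 2, 3]))
print(a.sum())  # -> 6
```

`__cuda_array_interface__` plays the same role for device memory, except it hands over a pointer description rather than copying data through host memory.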
**Any other info.**
Relevant issues in other libraries:
- cupy/cupy#1144

cc @seibert