Feature request
I would like to be able to construct a cudf DataFrame from a NumPy-like GPU array, perhaps coming from Numba, CuPy, PyTorch, or TensorFlow. What is the right way to support this?
Alternatively, if I want to move a cudf Series to a CuPy array, or work with it from Numba, how can one do this?
One approach was defined and taken in cupy/cupy#1144, where @seibert added a `__cuda_array_interface__` protocol to cupy, enabling Numba-defined GPU functions to work on that data directly. I'm curious what it would take to support the same within cudf (Numba working on cudf objects), and also to allow cudf and cupy to share data with each other.
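For context, the protocol added in cupy/cupy#1144 is just an attribute returning a plain dict describing the device buffer. A minimal sketch of what a producer might expose (the `FakeDeviceArray` class and the pointer value are made up for illustration; a real implementation would hold an actual device allocation):

```python
class FakeDeviceArray:
    """Hypothetical stand-in; a real object would own a CUDA device buffer."""

    def __init__(self, ptr, shape, typestr):
        self._ptr = ptr          # device pointer as a Python int (fake here)
        self._shape = shape      # tuple of dimension sizes
        self._typestr = typestr  # NumPy-style type string, e.g. "<f8"

    @property
    def __cuda_array_interface__(self):
        # Required keys of the interface dict: shape, typestr, data, version.
        return {
            "shape": self._shape,
            "typestr": self._typestr,
            "data": (self._ptr, False),  # (device pointer, read-only flag)
            "version": 2,
        }


arr = FakeDeviceArray(ptr=0xDEADBEEF, shape=(3, 4), typestr="<f8")
iface = arr.__cuda_array_interface__
```

A consumer (Numba, CuPy, or hypothetically cudf) would read this dict and wrap the pointer zero-copy, which is what makes sharing data between libraries cheap.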
I suspect that there are challenges here, but thought I'd go ahead and ask.