Missing ATen/native/cuda headers? #40784
Closed
Labels
module: binaries — Anything related to official binaries that we release to users
module: cuda — Related to torch.cuda, and CUDA support in general
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
I'm just curious whether there is any particular reason for not including the ATen native CUDA headers in the installed distribution?
pytorch/aten/src/ATen/CMakeLists.txt
Line 371 in a6a31bc
I was toying around with fusing some element-wise operations using TorchScript together with libtorch, and stumbled over CUDALoops.cuh etc. for some quick fusion tests.
Unfortunately, those headers don't get "installed".
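For illustration, an install rule along these lines could ship the native CUDA headers as well. This is only a sketch: the glob patterns, variable name, and destination path are assumptions, not the exact ones used in pytorch/aten/src/ATen/CMakeLists.txt.

```cmake
# Hypothetical sketch: also install the ATen native CUDA headers
# (e.g. CUDALoops.cuh) next to the regular ATen headers.
# Variable and path names here are illustrative only.
file(GLOB native_cuda_headers
     "native/cuda/*.cuh"
     "native/cuda/*.h")
install(FILES ${native_cuda_headers}
        DESTINATION include/ATen/native/cuda)
```

With a rule like this, a downstream libtorch project could `#include <ATen/native/cuda/CUDALoops.cuh>` from the installed include directory instead of needing a full source checkout.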
Thanks!
cc @ezyang @seemethere @malfet @ngimel