As of 2.6, PyTorch is building and testing with CUDA 12.4 and 12.6: pytorch/pytorch#138609
We have tests that interop PyTorch/XLA GPU with a CUDA PyTorch build:

```python
def onlyIfTorchSupportsCUDA(fn):
  return unittest.skipIf(
      not torch.cuda.is_available(), reason="requires PyTorch CUDA support")(
          fn)
```
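As a sketch of how such a guard composes with `unittest`, the snippet below applies the decorator to a hypothetical test method. It uses a stand-in `torch` object (so the sketch runs without a CUDA build); in the real tests, `torch.cuda.is_available()` comes from the installed PyTorch:

```python
import unittest


# Stand-in for torch so this sketch is self-contained; the real
# tests import torch and query the actual CUDA runtime.
class _FakeCuda:
  @staticmethod
  def is_available():
    return False  # pretend this is a CPU-only PyTorch build


class _FakeTorch:
  cuda = _FakeCuda


torch = _FakeTorch()


def onlyIfTorchSupportsCUDA(fn):
  # Skip (rather than fail) the test when PyTorch lacks CUDA support.
  return unittest.skipIf(
      not torch.cuda.is_available(), reason="requires PyTorch CUDA support")(
          fn)


class ExampleTest(unittest.TestCase):

  @onlyIfTorchSupportsCUDA
  def test_needs_cuda(self):  # hypothetical test name
    self.assertTrue(torch.cuda.is_available())
```

With `is_available()` returning `False`, the test is reported as skipped instead of failing, which is why version skew between the two CUDA builds matters only when both sides actually enable CUDA.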
Therefore, it's good to avoid CUDA version skew between PyTorch/XLA and upstream PyTorch.
(The snippet above is from xla/test/test_operations.py, lines 165 to 168 at 065cb5b.)