🐛 Bug
torch.max gives a segmentation fault when the input tensor is on the CPU and one or both of the out tensors (values, indices) are CUDA tensors.
To Reproduce
Steps to reproduce the behavior:
```python
import torch

a = torch.randn(10)
values = torch.randn(10).cuda()
indices = torch.LongTensor().cuda()
torch.max(a, 0, out=(values, indices))
```

Output:

```
Segmentation fault (core dumped)
```
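Until the crash is fixed, a workaround (my suggestion, not an official one) is to allocate the out tensors on the same device as the input and move the results to the GPU afterwards:

```python
import torch

a = torch.randn(10)
# Allocate out tensors on the same (CPU) device as the input;
# torch.max resizes zero-element out tensors to the result shape.
values = torch.empty(0)
indices = torch.empty(0, dtype=torch.long)
torch.max(a, 0, out=(values, indices))
# Move to the GPU afterwards if needed:
# values, indices = values.cuda(), indices.cuda()
```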
Expected behavior
Ideally, this should raise a RuntimeError with a clear message that torch.max does not support input and out tensors on different device types.
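The expected check could look roughly like this Python-level sketch (`checked_max` is a hypothetical wrapper for illustration, not PyTorch's actual internal validation):

```python
import torch

def checked_max(input, dim, out):
    """Reject out tensors whose device differs from the input's device."""
    values, indices = out
    for t, name in ((values, "values"), (indices, "indices")):
        if t.device != input.device:
            raise RuntimeError(
                f"Expected out tensor '{name}' to be on device "
                f"{input.device}, but got {t.device}"
            )
    return torch.max(input, dim, out=out)
```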
Environment
Please copy and paste the output from our environment collection script (or fill out the checklist below manually). You can get the script and run it with:
```shell
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
PyTorch version: 1.5.0a0+ce3e151
Is debug build: Yes
CUDA used to build PyTorch: 9.2
OS: CentOS Linux 7 (Core)
GCC version: (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5)
CMake version: version 3.14.0
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla M40
GPU 1: Tesla M40
Nvidia driver version: 396.69
cuDNN version: /usr/local/cuda-9.2/targets/x86_64-linux/lib/libcudnn.so.7.1.2
Versions of relevant libraries:
[pip] numpy==1.17.4
[pip] torch==1.5.0a0+ce3e151
[pip] torchvision==0.4.2
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-include 2019.4 243
[conda] mkl-service 2.3.0 py36he904b0f_0
[conda] mkl_fft 1.0.15 py36ha843d7b_0
[conda] mkl_random 1.1.0 py36hd6b4f25_0
[conda] torch 1.5.0a0+ce3e151 dev_0
[conda] torchvision 0.4.2 pypi_0 pypi