Multidimensional CUDA irfft modifies input #34551

@ar4

Description

🐛 Bug

torch.irfft modifies its input tensor in the multidimensional case on a GPU.

To Reproduce

Steps to reproduce the behavior:

import torch

t = torch.ones(2, 2, 2).cuda()
t_backup = t.clone()
torch.irfft(t, 2, signal_sizes=(2, 2))
print((t - t_backup).abs().sum().item())  # should be 0, but is 8

Expected behavior

The input tensor should not be modified.
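Until the bug is fixed, a minimal workaround is to hand the op a clone of the input, so any in-place writes land on the copy rather than on the caller's tensor. The sketch below illustrates the pattern with hypothetical names (`call_without_mutation`, and `mutating_op` standing in for the buggy `torch.irfft`); it runs on CPU, since the guard does not depend on CUDA:

```python
import torch

def call_without_mutation(fn, t, *args, **kwargs):
    # Guard: pass fn a clone so in-place writes hit the copy,
    # leaving the caller's tensor untouched.
    return fn(t.clone(), *args, **kwargs)

# Hypothetical stand-in for the buggy op: it writes into its argument.
def mutating_op(x):
    x.add_(1)       # in-place modification, like the reported irfft behavior
    return x * 2

t = torch.ones(2, 2, 2)
out = call_without_mutation(mutating_op, t)
assert (t - torch.ones(2, 2, 2)).abs().sum().item() == 0  # t untouched
```

The same pattern applies directly to the report above: `torch.irfft(t.clone(), 2, signal_sizes=(2, 2))` leaves `t` unchanged, at the cost of one extra copy.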

Environment

PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1

OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.12.0

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 418.67
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5

Versions of relevant libraries:
[pip3] numpy==1.17.5
[pip3] torch==1.4.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.3.1
[pip3] torchvision==0.5.0
[conda] Could not collect

cc @ezyang @gchanan @zou3519 @ngimel

Metadata

Labels

high priority
module: cuda (Related to torch.cuda, and CUDA support in general)
module: numpy (Related to numpy support, and also numpy compatibility of our operators)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
