🐛 Bug
`torch.remainder` gives the wrong output for very large float dividends on CPU; this could be the float version of #5875.
To Reproduce
```python
import torch
x = torch.tensor(2749682432.0)
q = 36
print(torch.remainder(x, q))
```
The actual output is `128.0`, whereas the correct output should be `20.0`.
More Information
- `x % q` produces the same incorrect output.
- `torch.fmod(x, q)` produces the correct output (`20.0`).
- Constructing the dividend as an integer (`torch.tensor(2749682432.0, dtype=torch.int)`) overflows the dividend, making it `-1545284864`, and produces the correct output for that integer.

This suggests the issue is likely overflow.
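The failure mode can be reproduced without PyTorch. The sketch below assumes `torch.remainder` is effectively computed as `x - floor(x / q) * q` in the tensor's dtype (an assumption; I have not verified this against the TH source), and simulates that in float32 with NumPy:

```python
import math
import numpy as np

x = np.float32(2749682432.0)  # exactly representable in float32 (10740947 * 2**8)
q = np.float32(36.0)

# Assumed implementation: x - floor(x / q) * q, evaluated in float32.
# The true quotient (~76,380,067.55) exceeds 2**24, so float32 must
# round it, and the multiply back by q rounds again -- the small true
# remainder (20) is destroyed by these rounding errors.
naive = x - np.floor(x / q) * q

# math.fmod computes the remainder exactly (here in float64), which is
# consistent with torch.fmod returning the right answer.
exact = math.fmod(float(x), 36.0)

print(naive)  # some incorrect value, not 20.0
print(exact)  # 20.0
```

So "overflow" here may more precisely be float32 precision loss in the intermediate quotient and product, rather than integer-style wraparound.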
Environment
PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: None
OS: Fedora release 30 (Thirty)
GCC version: (GCC) 9.1.1 20190503 (Red Hat 9.1.1-1)
CMake version: Could not collect
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] Could not collect
[conda] pytorch-cpu 0.4.1 py37_cpu_1 pytorch
[conda] torchvision-cpu 0.2.1 py37_1 pytorch
Additional context
I might be able to make the fix myself, basing it off of #5906, but does it still make sense to make the fix in TH if it will be ported to ATen soon anyway (#24507)? Or should I make the port and the fix together?