torch.remainder gives a remainder larger than the divisor #37743

@KeAWang

Description
🐛 Bug

torch.remainder returns incorrect results for some inputs. It is accurate when computing in float64, but in float32 the returned remainder can exceed the divisor in magnitude. Worse, the wrong result differs between CPU and GPU.

To Reproduce

import torch
import math
import numpy as np

n = 1e9

torch.remainder(torch.tensor([n], device="cpu", dtype=torch.float32), math.pi)  # returns tensor([-64.]), whose magnitude exceeds the divisor math.pi

torch.remainder(torch.tensor([n], device="cuda", dtype=torch.float32), math.pi)  # returns tensor([-33.5333], device='cuda:0'), which also exceeds the divisor math.pi in magnitude

Expected behavior

np.remainder(np.array([n], dtype=np.float32), math.pi)  # will return array([1.024195], dtype=float32)
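A plausible explanation (my own reconstruction, not a claim about the actual kernels) is precision loss in the textbook formula r = x - floor(x / y) * y when intermediates are rounded. The NumPy sketch below evaluates that formula at two different intermediate precisions and reproduces both reported wrong values:

```python
import math
import numpy as np

x = np.float32(1e9)        # exactly representable in float32
y = np.float32(math.pi)    # 3.1415927410125732

# Naive r = x - floor(x / y) * y with every intermediate rounded to float32.
# The true quotient is ~318309877.3, but float32 spacing here is 32:
q = np.float32(np.floor(x / y))            # 318309888.0
# q * y is ~1000000033.5; float32 spacing near 1e9 is 64, so it rounds
# up to 1000000064.0, and the subtraction yields the CPU result:
r_cpu = np.float32(x - np.float32(q * y))  # -64.0

# Same q, but the multiply and subtract kept exact (as a fused
# multiply-add would keep them); this matches the CUDA result:
r_gpu = np.float32(np.float64(x) - np.float64(q) * np.float64(y))  # ≈ -33.5333

# NumPy's remainder stays inside [0, y) even in float32:
r_np = np.remainder(x, y)
print(r_cpu, r_gpu, r_np)
```

Under this reading, the two backends disagree only because they round the intermediate product differently; both start from the same badly-rounded quotient.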

Some other observations:

  • The results of torch.fmod match those of np.fmod on both CPU and GPU, in both fp32 and fp64
  • Since x % y dispatches to torch.remainder(x, y) by default when x and y are PyTorch tensors, the wrong behavior in torch.remainder also makes % wrong.
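Because fmod agrees with NumPy, a remainder can be rebuilt on top of it as a stopgap. This is my own sketch, not an official fix; it folds fmod's sign-of-dividend result back into [0, y) to match Python's % convention for a positive divisor:

```python
import math
import torch

def remainder_via_fmod(x, y):
    """Stopgap remainder built on torch.fmod.

    torch.fmod keeps the sign of x, so negative results are shifted
    by y to land in [0, y), matching Python's % for positive y.
    """
    r = torch.fmod(x, y)
    return torch.where(r < 0, r + y, r)

x = torch.tensor([1e9], dtype=torch.float32)
print(remainder_via_fmod(x, math.pi))  # stays inside [0, pi), unlike torch.remainder
```

For the failing input above this returns a value consistent with NumPy's 1.024195, since the divisor is rounded to float32 before the exact fmod is taken.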

Environment

PyTorch version: 1.5.0
Is debug build: No
CUDA used to build PyTorch: 10.2

OS: EndeavourOS Linux
GCC version: (GCC) 8.4.0
CMake version: version 3.17.0

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.2.89
GPU models and configuration: GPU 0: GeForce GTX 1080 Ti
Nvidia driver version: 440.82
cuDNN version: /usr/lib/libcudnn.so.7.6.5

Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] pytorch-lightning==0.7.5
[pip] torch==1.5.0
[pip] torchcontrib==0.0.2
[pip] torchdiffeq==0.0.1
[pip] torchvision==0.6.0a0+82fd1c8
[conda] blas                      2.15                        mkl    conda-forge
[conda] cudatoolkit               10.2.89              hfd86e86_0  
[conda] libblas                   3.8.0                    15_mkl    conda-forge
[conda] libcblas                  3.8.0                    15_mkl    conda-forge
[conda] liblapack                 3.8.0                    15_mkl    conda-forge
[conda] liblapacke                3.8.0                    15_mkl    conda-forge
[conda] mkl                       2020.0                      166    conda-forge
[conda] numpy                     1.18.1           py37h8960a57_1    conda-forge
[conda] pytorch                   1.5.0           py3.7_cuda10.2.89_cudnn7.6.5_0    pytorch
[conda] pytorch-lightning         0.7.5                     dev_0    <develop>
[conda] torchcontrib              0.0.2                    pypi_0    pypi
[conda] torchdiffeq               0.0.1                    pypi_0    pypi
[conda] torchvision               0.6.0                py37_cu102    pytorch

cc @ezyang @gchanan @zou3519

Labels

high priority · module: numerical stability (problems related to numerical stability of operations) · triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
