
[JIT] addmm differs from eager mode #71784

@davidberard98

Description

🐛 Describe the bug

This was found by the OpInfo tests.

Repro:

import torch
from torch import tensor

dtype = torch.float16

def f(add, a, b):
    return torch.addmm(add, a, b)

data = []

data.append((
    tensor([-7.2227], dtype=torch.float16),
    tensor([[ 0.7109, -4.8516], [-0.6484, 2.6719]], dtype=torch.float16),
    tensor([[-3.6816, -7.9023, -3.7266], [1.0547,  4.1719, -1.1328]], dtype=torch.float16),
))

use_trace = False  # set True to reproduce via torch.jit.trace instead of torch.jit.script

for (add, a, b) in data:
    x = torch.addmm(add, a, b)  # eager-mode reference result
    if use_trace:
        # torch.jit.trace warns here that the traced output does not match eager
        sf = torch.jit.trace(f, [add, a, b])
    else:
        sf = torch.jit.script(f)
    y = sf(add, a, b)  # JIT result
    print(sf.graph)
    print("X: ", x)
    print("Y: ", y)
    print(x-y)
    assert((x==y).all().item())
    print()
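A guess at the root cause (an assumption on my part, not verified here): the scripted path may compute the matmul and the bias add as two separate float16 ops, rounding the intermediate, while eager aten::addmm accumulates differently. Appending a decomposed variant to the end of the loop body makes that hypothesis easy to check:

    # hypothesis check (my addition): addmm with alpha=beta=1, decomposed
    z = torch.mm(a, b) + add
    print("Z: ", z)
    print(x - z)  # if Z matches Y, the separate mm-plus-add theory is plausible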

Output:

graph(%add.1 : Tensor,
      %a.1 : Tensor,
      %b.1 : Tensor):
  %8 : int = prim::Constant[value=1]()
  %10 : Tensor = aten::addmm(%add.1, %a.1, %b.1, %8, %8) # /data/sandcastle/boxes/fbsource/fbcode/buck-out/opt/gen/scripts/dberard/matmul#link-tree/scripts/dberard/matmul.py:7:11
  return (%10)

X:  tensor([[-14.9609, -33.0625,  -4.3789],
        [ -2.0176,   9.0469,  -7.8320]], dtype=torch.float16)
Y:  tensor([[-14.9531, -33.0625,  -4.3750],
        [ -2.0195,   9.0625,  -7.8359]], dtype=torch.float16)
tensor([[-0.0078,  0.0000, -0.0039],
        [ 0.0020, -0.0156,  0.0039]], dtype=torch.float16)
Traceback (most recent call last):
  File "/path/matmul.py", line 32, in <module>
    assert((x==y).all().item())
AssertionError
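
For scale, the mismatches above are one to two float16 ulps (float16 eps is 2^-10, roughly 1e-3 relative). A tolerance-aware comparison passes once the tolerances are loosened slightly; the values below are my choice for illustration, not an official threshold:

# sketch: the results agree up to fp16 rounding; rtol/atol are assumed values
torch.testing.assert_close(y, x, rtol=2e-3, atol=1e-3)  # passes; the exact-equality assert does not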

Versions

(non-GPU devserver)

Collecting environment information...
PyTorch version: 1.11.0a0+gitad36af8
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: CentOS Stream 8 (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-3)
Clang version: Could not collect
CMake version: version 3.19.6
Libc version: glibc-2.28

Python version: 3.9.6 (default, Aug 18 2021, 19:38:01)  [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.6.13-0_fbk19_hardened_6064_gabfd136bb69a-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] mypy==0.812
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.0
[pip3] torch==1.11.0a0+gitfcc5577
[conda] blas                      1.0                         mkl
[conda] mkl                       2021.3.0           h06a4308_520
[conda] mkl-include               2021.3.0           h06a4308_520
[conda] mkl-service               2.4.0            py39h7f8727e_0
[conda] mkl_fft                   1.3.0            py39h42c9631_2
[conda] mkl_random                1.2.2            py39h51133e4_0
[conda] mypy                      0.910              pyhd3eb1b0_0
[conda] mypy_extensions           0.4.3            py39h06a4308_0
[conda] numpy                     1.20.3           py39hf144106_0
[conda] numpy-base                1.20.3           py39h74d4b33_0
[conda] torch                     1.11.0a0+git33ab075          pypi_0    pypi

Metadata

Labels: oncall: jit