TL;DR
I initially reported this as a bug, but it turned out to be more of a documentation issue. I'll leave my original comment below.
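For context: if I read the type promotion documentation correctly, zero-dimensional (scalar) tensors only participate in promotion when they belong to a higher dtype category than the dimensioned operands, so an integer scalar cannot bump a dimensioned uint8 tensor up. A minimal sketch of that rule, assuming my reading is right:

import torch

a = torch.tensor([0], dtype=torch.uint8)  # dimensioned tensor
b = torch.tensor(0, dtype=torch.int64)    # zero-dim tensor, same (integral) category
c = torch.tensor(0.0)                     # zero-dim tensor, higher (floating point) category

print(torch.add(a, b).dtype)  # torch.uint8   -- scalar of the same category does not promote
print(torch.add(a, c).dtype)  # torch.float32 -- scalar of a higher category does promote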
🐛 Bug
Type promotion is broken for binary ops with uint8 in some edge cases.
To Reproduce
import torch
a = torch.tensor([0], dtype=torch.uint8)
b = torch.tensor(0)
for dtype in (torch.int16, torch.int32, torch.int64):
    print(
        f"{torch.add(a, b.to(dtype)).dtype} "
        f"vs. "
        f"{torch.promote_types(torch.uint8, dtype)}"
    )

Output:

torch.uint8 vs. torch.int16
torch.uint8 vs. torch.int32
torch.uint8 vs. torch.int64
- There is nothing special about torch.add. The same happens for torch.mul. I haven't checked more operators.
- This only happens if the uint8 tensor is not scalar and the other tensor is scalar (see the sketch after this list).
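A quick check of that second point, using a variation of the repro above where both operands are dimensioned:

import torch

a = torch.tensor([0], dtype=torch.uint8)
b = torch.tensor([0], dtype=torch.int64)  # dimensioned instead of zero-dim
print(torch.add(a, b).dtype)              # torch.int64, matching torch.promote_types(torch.uint8, torch.int64)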
Expected behavior
Behavior should follow torch.promote_types().
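Note that torch.result_type, which takes the operands themselves rather than just their dtypes, seems to agree with the observed output here, so the mismatch appears to be specifically between torch.promote_types and the tensor-vs-scalar rule:

import torch

a = torch.tensor([0], dtype=torch.uint8)
b = torch.tensor(0)  # zero-dim int64
print(torch.result_type(a, b))                        # torch.uint8  -- matches torch.add(a, b).dtype
print(torch.promote_types(torch.uint8, torch.int64))  # torch.int64  -- dtype-only promotion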
Environment
PyTorch version: 1.9.0.dev20210517
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 10.2.0
Clang version: Could not collect
CMake version: version 3.20.2
Python version: 3.9 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080
Nvidia driver version: 465.27
cuDNN version: Probably one of the following:
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn.so.8.1.1
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn_adv_infer.so.8.1.1
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn_adv_train.so.8.1.1
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn_cnn_infer.so.8.1.1
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn_cnn_train.so.8.1.1
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn_ops_infer.so.8.1.1
/usr/local/cudnn-8.1.1-cuda-11/lib64/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.20.1
[pip3] torch==1.9.0.dev20210517
[conda] blas 1.0 mkl
[conda] cpuonly 1.0 0 pytorch-nightly
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.3.0 py39h27cfd23_1
[conda] mkl_fft 1.3.0 py39h42c9631_2
[conda] mkl_random 1.2.1 py39ha9443f7_2
[conda] numpy 1.20.1 py39h93e21f0_0
[conda] numpy-base 1.20.1 py39h7d8b39e_0
[conda] pytorch 1.9.0.dev20210517 py3.9_cpu_0 [cpuonly] pytorch-nightly
cc @brianjo @mruberry @nairbv