Docs should say that standard arithmetic operations for torch.half are not supported #36318

@pmeier

Description

🐛 Bug

Standard arithmetic operations on CPU tensors with dtype=torch.half are broken: the corresponding CPU kernels are not implemented for 'Half'.

To Reproduce

```python
import torch

x = torch.tensor(1.0, dtype=torch.half)

x + 1.0  # RuntimeError: "add_cpu/sub_cpu" not implemented for 'Half'
x - 1.0  # RuntimeError: "add_cpu/sub_cpu" not implemented for 'Half'
x * 1.0  # RuntimeError: "mul_cpu" not implemented for 'Half'
x / 1.0  # RuntimeError: "div_cpu" not implemented for 'Half'
```
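Until the docs (or the kernels) are fixed, the usual workaround is to promote to float32 before doing the arithmetic and cast back afterwards. This is a sketch of that workaround, not something stated in the issue itself:

```python
import torch

x = torch.tensor(1.0, dtype=torch.half)

# Promote to float32, where the CPU arithmetic kernels exist,
# then cast the result back to half precision.
y = (x.float() + 1.0).half()
print(y.item())   # 2.0
print(y.dtype)    # torch.float16
```

The round trip through float32 costs an extra copy per operation, but it sidesteps the missing 'Half' CPU kernels entirely.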

Environment

Collecting environment information...
PyTorch version: 1.6.0a0+3d199aa
Is debug build: No
CUDA used to build PyTorch: 10.2

OS: Ubuntu 18.04.4 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: version 3.17.0

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.2.89
GPU models and configuration: GPU 0: GeForce GTX 1080
Nvidia driver version: 440.33.01
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5

Versions of relevant libraries:
[pip3] numpy==1.18.2
[pip3] torch==1.6.0a0+3d199aa
[pip3] torchvision==0.6.0a0+684f48d
[conda] Could not collect

Built from source a few hours ago.

cc @ezyang @gchanan @zou3519

Labels

high priority · module: docs (related to our documentation, both in docs/ and docblocks) · module: half (related to float16 half-precision floats) · triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
