Commit c319136

mattip authored and facebook-github-bot committed

DOC: add BFloat16 dtype and BFloat16Tensor (#37051)
Summary: Related to gh-36318. Mention the `bfloat16` dtype and `BFloat16Tensor` in the documentation. The real fix would be to implement CPU operations on the 16-bit float `half`, and I couldn't help but notice that `torch.finfo(torch.bfloat16).xxx` crashes for `xxx in ['max', 'min', 'eps']`.

Pull Request resolved: #37051
Differential Revision: D21476851
Pulled By: ngimel
fbshipit-source-id: fef601d3116d130d67cd3a5654077f31b699409b
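The `torch.finfo` fields mentioned above can be sanity-checked without PyTorch: for an IEEE-style format with a given exponent/significand width, `max` and `eps` follow directly from the bit layout. A minimal sketch (the helper name `finfo_like` is made up here, not a PyTorch API):

```python
def finfo_like(exp_bits, sig_bits):
    """max and eps for an IEEE-style float with 1 sign bit, exp_bits
    exponent bits, and sig_bits explicitly stored significand bits."""
    bias = 2 ** (exp_bits - 1) - 1
    # largest finite value: all-ones significand at the top normal exponent
    largest = (2 - 2.0 ** -sig_bits) * 2.0 ** bias
    # eps: gap between 1.0 and the next representable value
    eps = 2.0 ** -sig_bits
    return largest, eps

f16_max, f16_eps = finfo_like(5, 10)   # binary16: 5 exponent, 10 significand bits
bf16_max, bf16_eps = finfo_like(8, 7)  # bfloat16: 8 exponent, 7 significand bits
```

These reproduce the values PyTorch reports once the fix is in: 65504.0 and 2\*\*-10 for `float16`, roughly 3.39e38 and 2\*\*-7 for `bfloat16`.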
1 parent b290da0 commit c319136

2 files changed: 31 additions & 11 deletions

docs/source/tensor_attributes.rst

Lines changed: 14 additions & 6 deletions
@@ -15,23 +15,31 @@ torch.dtype
 .. class:: torch.dtype

     A :class:`torch.dtype` is an object that represents the data type of a
-    :class:`torch.Tensor`. PyTorch has eleven different data types:
+    :class:`torch.Tensor`. PyTorch has twelve different data types:

-    ======================== =========================================== ===========================
+    ========================== =========================================== ===========================
     Data type                  dtype                                       Legacy Constructors
-    ======================== =========================================== ===========================
+    ========================== =========================================== ===========================
     32-bit floating point      ``torch.float32`` or ``torch.float``        ``torch.*.FloatTensor``
     64-bit floating point      ``torch.float64`` or ``torch.double``       ``torch.*.DoubleTensor``
     64-bit complex             ``torch.complex64`` or ``torch.cfloat``
-    128-bit floating point     ``torch.complex128`` or ``torch.cdouble``
-    16-bit floating point      ``torch.float16`` or ``torch.half``         ``torch.*.HalfTensor``
+    128-bit complex            ``torch.complex128`` or ``torch.cdouble``
+    16-bit floating point [1]_ ``torch.float16`` or ``torch.half``         ``torch.*.HalfTensor``
+    16-bit floating point [2]_ ``torch.bfloat16``                          ``torch.*.BFloat16Tensor``
     8-bit integer (unsigned)   ``torch.uint8``                             ``torch.*.ByteTensor``
     8-bit integer (signed)     ``torch.int8``                              ``torch.*.CharTensor``
     16-bit integer (signed)    ``torch.int16`` or ``torch.short``          ``torch.*.ShortTensor``
     32-bit integer (signed)    ``torch.int32`` or ``torch.int``            ``torch.*.IntTensor``
     64-bit integer (signed)    ``torch.int64`` or ``torch.long``           ``torch.*.LongTensor``
     Boolean                    ``torch.bool``                              ``torch.*.BoolTensor``
-    ======================== =========================================== ===========================
+    ========================== =========================================== ===========================
+
+    .. [1] Sometimes referred to as binary16: uses 1 sign, 5 exponent, and 10
+       significand bits. Useful when precision is important.
+
+    .. [2] Sometimes referred to as Brain Floating Point: uses 1 sign, 8 exponent, and 7
+       significand bits. Useful when range is important, since it has the same
+       number of exponent bits as ``float32``.

     To find out if a :class:`torch.dtype` is a floating point data type, the property :attr:`is_floating_point`
     can be used, which returns ``True`` if the data type is a floating point data type.
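The tradeoff footnote [2]_ describes can be demonstrated without PyTorch: bfloat16 is simply the top 16 bits of a ``float32``, so it keeps ``float32``'s range while dropping precision. A hedged sketch (the helper ``to_bfloat16`` is an emulation for illustration, not a PyTorch function):

```python
import struct

def to_bfloat16(x):
    """Round a Python float to bfloat16 by keeping the top 16 bits of its
    float32 encoding, with round-to-nearest-even on the discarded bits."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    bits += 0x7FFF + ((bits >> 16) & 1)   # nearest-even rounding increment
    return struct.unpack('<f', struct.pack('<I', bits & 0xFFFF0000))[0]

# Precision: eps is 2**-7, so 1.0 + 2**-8 is indistinguishable from 1.0 ...
print(to_bfloat16(1.0 + 2 ** -8))  # 1.0
# ... but range matches float32: 3e38 stays finite, far beyond float16's 65504
print(to_bfloat16(3e38))
```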

docs/source/tensors.rst

Lines changed: 17 additions & 5 deletions
@@ -8,21 +8,33 @@ torch.Tensor
 A :class:`torch.Tensor` is a multi-dimensional matrix containing elements of
 a single data type.

-Torch defines nine CPU tensor types and nine GPU tensor types:
+Torch defines 10 tensor types with CPU and GPU variants:

-======================== =========================================== =========================== ================================
+========================== =========================================== ============================= ==================================
 Data type                  dtype                                       CPU tensor                    GPU tensor
-======================== =========================================== =========================== ================================
+========================== =========================================== ============================= ==================================
 32-bit floating point      ``torch.float32`` or ``torch.float``        :class:`torch.FloatTensor`    :class:`torch.cuda.FloatTensor`
 64-bit floating point      ``torch.float64`` or ``torch.double``       :class:`torch.DoubleTensor`   :class:`torch.cuda.DoubleTensor`
-16-bit floating point      ``torch.float16`` or ``torch.half``         :class:`torch.HalfTensor`     :class:`torch.cuda.HalfTensor`
+16-bit floating point [1]_ ``torch.float16`` or ``torch.half``         :class:`torch.HalfTensor`     :class:`torch.cuda.HalfTensor`
+16-bit floating point [2]_ ``torch.bfloat16``                          :class:`torch.BFloat16Tensor` :class:`torch.cuda.BFloat16Tensor`
+32-bit complex             ``torch.complex32``
+64-bit complex             ``torch.complex64``
+128-bit complex            ``torch.complex128`` or ``torch.cdouble``
 8-bit integer (unsigned)   ``torch.uint8``                             :class:`torch.ByteTensor`     :class:`torch.cuda.ByteTensor`
 8-bit integer (signed)     ``torch.int8``                              :class:`torch.CharTensor`     :class:`torch.cuda.CharTensor`
 16-bit integer (signed)    ``torch.int16`` or ``torch.short``          :class:`torch.ShortTensor`    :class:`torch.cuda.ShortTensor`
 32-bit integer (signed)    ``torch.int32`` or ``torch.int``            :class:`torch.IntTensor`      :class:`torch.cuda.IntTensor`
 64-bit integer (signed)    ``torch.int64`` or ``torch.long``           :class:`torch.LongTensor`     :class:`torch.cuda.LongTensor`
 Boolean                    ``torch.bool``                              :class:`torch.BoolTensor`     :class:`torch.cuda.BoolTensor`
-======================== =========================================== =========================== ================================
+========================== =========================================== ============================= ==================================
+
+.. [1] Sometimes referred to as binary16: uses 1 sign, 5 exponent, and 10
+   significand bits. Useful when precision is important at the expense of range.
+.. [2] Sometimes referred to as Brain Floating Point: uses 1 sign, 8 exponent, and 7
+   significand bits. Useful when range is important, since it has the same
+   number of exponent bits as ``float32``.

 :class:`torch.Tensor` is an alias for the default tensor type (:class:`torch.FloatTensor`).
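For contrast with bfloat16, binary16 (``torch.float16``) trades range for precision, as footnote [1]_ notes. Python's ``struct`` module implements binary16 natively via the ``'e'`` format, so this side of the tradeoff can be probed without PyTorch. A small sketch:

```python
import struct

# Python's struct 'e' format is IEEE binary16, the same layout as torch.float16.
def to_float16(x):
    """Round-trip a Python float through binary16."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Precision: eps is 2**-10, so 1.0 + 2**-10 survives the round trip
print(to_float16(1.0 + 2 ** -10))

# Range: 65504 is the largest finite binary16 value; anything bigger
# cannot be packed at all.
print(to_float16(65504.0))
try:
    struct.pack('<e', 70000.0)
except OverflowError as exc:
    print('out of binary16 range:', exc)
```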
