DOC: add BFloat16 dtype and BFloat16Tensor #37051
mattip wants to merge 1 commit into pytorch:master from
Conversation
💊 CI failures summary and remediations — As of commit 66cca4b (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚 This comment was automatically generated by Dr. CI.
10 fraction bits, denorm to achieve 11.
Sorry, it's not denorm that gets you from 10 fraction bits to 11. All floating point formats (including bfloat16) use an implicit bit to "increase" the number of mantissa bits. Denorm numbers may or may not be supported (and possibly they are not supported in bfloat16).
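As a rough illustration of the fraction-bit counts discussed here (a minimal sketch; the `torch.finfo` query for `bfloat16` assumes a PyTorch build where it works, which the summary below notes was not yet the case at the time of this PR):

```python
import torch

# float16: 10 explicit fraction bits + 1 implicit leading bit = 11 bits of
# significand precision, so machine epsilon is 2**-10.
assert torch.finfo(torch.float16).eps == 2.0 ** -10

# bfloat16: 7 explicit fraction bits + 1 implicit leading bit = 8 bits of
# precision, so its epsilon is 2**-7 (0.0078125). On the PyTorch version this
# PR was written against this query reportedly crashed; on later versions it works.
print(torch.finfo(torch.bfloat16).eps)
```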
Something is wrong with the formatting.
The rendered documentation looks fine.
The rendered file here https://github.com/pytorch/pytorch/blob/9ccff98d6d69119878f0f8c418b96394244ab7c8/docs/source/tensors.rst shows the third row as "point", "torch.complex128 or", "torch.double". I doubt that's intended.
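For what it's worth, a quick sanity check of which aliases belong together in that table (a small sketch, not part of the PR):

```python
import torch

# torch.double is the alias of torch.float64, not of torch.complex128,
# so the two should not end up in the same table row.
assert torch.double is torch.float64
print(torch.complex128)  # a separate dtype, with its own row in the table
```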
Not sure what is up here with CI failures.
10 mantissa bits (7 mantissa bits for bfloat16 below)
Sorry, somehow missed this even though I thought I fixed it. Should be fixed in a335e16.
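To make the difference concrete, here is a small sketch (assuming bfloat16 CPU conversions are available) showing how 10 versus 7 fraction bits affects rounding of the same value:

```python
import torch

x = torch.tensor(0.1)  # stored as float32 first

# float16 (10 fraction bits) keeps roughly 3-4 decimal digits of precision.
print(x.to(torch.float16).item())   # ~0.0999755859375

# bfloat16 (7 fraction bits) keeps the float32 exponent range but only
# roughly 2-3 decimal digits of precision.
print(x.to(torch.bfloat16).item())  # ~0.10009765625
```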
Rebased, squashed, and made the footnotes on the two tables similar.
facebook-github-bot left a comment
@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Related to pytorch gh-36318. Mention `bfloat16` dtype and `BFloat16Tensor` in documentation. The real fix would be to implement cpu operations on 16-bit float `half`, and I couldn't help but notice that `torch.finfo(torch.bfloat16).xxx` crashes for `xxx in ['max', 'min', 'eps']`.
Pull Request resolved: pytorch#37051
Differential Revision: D21476851
Pulled By: ngimel
fbshipit-source-id: fef601d3116d130d67cd3a5654077f31b699409b
Related to gh-36318
Mention `bfloat16` dtype and `BFloat16Tensor` in documentation. The real fix would be to implement cpu operations on 16-bit float `half`, and I couldn't help but notice that `torch.finfo(torch.bfloat16).xxx` crashes for `xxx in ['max', 'min', 'eps']`.
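A minimal sketch of the `torch.finfo` probe described above (on the PyTorch version this PR targets, the bfloat16 queries reportedly crash; if that crash is a hard interpreter abort rather than a Python exception, the `try/except` below will not catch it, and on later versions the loop simply prints values):

```python
import torch

# Probe the finfo fields mentioned above for bfloat16.
for attr in ['max', 'min', 'eps']:
    try:
        print(attr, getattr(torch.finfo(torch.bfloat16), attr))
    except Exception as err:  # affected versions fail here
        print(attr, 'failed:', err)
```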