
DOC: add BFloat16 dtype and BFloat16Tensor #37051

Closed
mattip wants to merge 1 commit into pytorch:master from mattip:issue-36318

Conversation

@mattip
Contributor

@mattip mattip commented Apr 22, 2020

Related to gh-36318

Mention the `bfloat16` dtype and `BFloat16Tensor` in the documentation. The real fix would be to implement CPU operations on the 16-bit float `half`; also, I couldn't help but notice that `torch.finfo(torch.bfloat16).xxx` crashes for `xxx` in `['max', 'min', 'eps']`.
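
For context, here is a minimal sketch (not part of this PR) of the behaviour the description refers to: creating a `bfloat16` tensor and probing `torch.finfo(torch.bfloat16)`. On builds from around the time of this PR the `max`/`min`/`eps` attributes reportedly crashed; recent builds return values.

```python
import torch

# bfloat16 tensors can be constructed directly on CPU
x = torch.zeros(3, dtype=torch.bfloat16)
print(x.dtype)  # torch.bfloat16

# Probe the finfo attributes the description says crashed at the time.
# Note: a hard crash (e.g. a segfault) would not be caught by this except.
fi = torch.finfo(torch.bfloat16)
for attr in ["max", "min", "eps"]:
    try:
        print(attr, getattr(fi, attr))
    except Exception as exc:
        print(attr, "failed:", exc)
```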

@dr-ci

dr-ci Bot commented Apr 22, 2020

💊 CI failures summary and remediations

As of commit 66cca4b (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@mruberry added the "module: docs" (Related to our documentation, both in docs/ and docblocks) and "module: bfloat16" labels on Apr 22, 2020
@mruberry requested a review from gchanan on Apr 22, 2020 at 18:25
@mruberry added the "triaged" label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) on Apr 22, 2020
Comment thread: docs/source/tensor_attributes.rst (Outdated)
Collaborator


10 fraction bits, denorm to achieve 11.

Collaborator


Sorry, not denorm; 10 fraction bits. All floating-point formats (including bfloat16) use an implicit leading bit to "increase" the number of mantissa bits. Denormal numbers may or may not be supported (and they are possibly not supported for bfloat16).

Contributor Author


fixed, thanks
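
Side note on the fraction-bit discussion above: a small sketch (assuming `torch.finfo` works for bfloat16 on your build; the PR description notes it crashed at the time) that recovers the number of explicit fraction bits from `eps`. Since `eps` is the gap between 1.0 and the next representable value, it equals `2 ** -p` for `p` explicit fraction bits, and the implicit leading bit adds one more significand bit.

```python
import math
import torch

for dtype in (torch.float16, torch.bfloat16):
    eps = torch.finfo(dtype).eps             # gap between 1.0 and the next value
    explicit = int(round(-math.log2(eps)))   # eps == 2 ** -explicit
    print(dtype, "explicit fraction bits:", explicit,
          "| significand bits incl. implicit bit:", explicit + 1)

# Expected: torch.float16  -> 10 (11 with implicit bit)
#           torch.bfloat16 -> 7  (8 with implicit bit)
```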

Comment thread: docs/source/tensor_attributes.rst (Outdated)
Collaborator


7 fraction bits.

Contributor Author


fixed

Comment thread: docs/source/tensors.rst (Outdated)
Collaborator


Something looks wrong with the formatting here.

Contributor Author


the rendered documentation looks fine

Collaborator


The rendered file at https://github.com/pytorch/pytorch/blob/9ccff98d6d69119878f0f8c418b96394244ab7c8/docs/source/tensors.rst shows the third row pairing "point" with "torch.complex128 or torch.double". I doubt that's intended.

Contributor Author


Hmm, thanks. Fixing

@mattip
Contributor Author

mattip commented May 6, 2020

Not sure what is up here with CI failures.

Comment thread: docs/source/tensors.rst (Outdated)
Collaborator


10 mantissa bits (7 mantissa bits for bfloat16 below)

Contributor Author


Sorry, somehow missed this even though I thought I fixed it. Should be fixed in a335e16
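
To illustrate what the table footnotes describe, a quick example (not from the PR; outputs are approximate): bfloat16 keeps float32's 8 exponent bits but only 7 explicit fraction bits, so it rounds more coarsely than float16 (10 fraction bits) while covering a much wider value range.

```python
import torch

v = torch.tensor(1.0 / 3.0, dtype=torch.float32)
print(v.to(torch.float16).item())    # 0.333251953125 (10 fraction bits)
print(v.to(torch.bfloat16).item())   # 0.333984375    (7 fraction bits, coarser)

# The trade-off: bfloat16 reuses float32's exponent width, so large magnitudes
# stay finite where float16 overflows.
big = torch.tensor(1e20, dtype=torch.float32)
print(big.to(torch.float16).item())  # inf
print(big.to(torch.bfloat16).item()) # roughly 1e20, still finite
```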

@mattip
Contributor Author

mattip commented May 8, 2020

rebased, squashed, and made the footnotes on the two tables similar.

Collaborator

@ngimel ngimel left a comment


Thank you!

Contributor

@facebook-github-bot facebook-github-bot left a comment


@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@ngimel merged this pull request in c319136.

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Related to pytorchgh-36318

Mention `bfloat16` dtype and `BFloat16Tensor` in documentation. The real fix would be to implement cpu operations on 16-bit float `half`, and I couldn't help but notice that `torch.finfo(torch.bfloat16).xxx` crashes for `xxx in ['max', 'min', 'eps']`
Pull Request resolved: pytorch#37051

Differential Revision: D21476851

Pulled By: ngimel

fbshipit-source-id: fef601d3116d130d67cd3a5654077f31b699409b

Labels

Merged · module: bfloat16 · module: docs (Related to our documentation, both in docs/ and docblocks) · open source · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)


5 participants