OpInfos for some Tensor dtype conversion methods #64282
zou3519 wants to merge 11 commits into gh/zou3519/377/base
Conversation
OpInfos for:
- Tensor.bfloat16, Tensor.bool, Tensor.byte, Tensor.char
- Tensor.double, Tensor.float, Tensor.half, Tensor.int
- Tensor.short, Tensor.long

None of these are supported by TorchScript. Also, the OpInfo autograd test runner assumes that the operation is not allowed to change the dtype of the argument, so only Tensor.double has `supports_autograd=True` (in theory Tensor.bfloat16, Tensor.float, and Tensor.half should also be differentiable).

Test Plan:
- run tests

[ghstack-poisoned]
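The autograd claim above can be checked outside the OpInfo machinery: `Tensor.double` changes the dtype of its output, yet gradients still flow back through the cast to the original leaf, which is why it can carry `supports_autograd=True`. A minimal sketch in plain PyTorch:

```python
import torch

# float32 leaf tensor that tracks gradients
x = torch.randn(3, requires_grad=True)

# Tensor.double is a dtype conversion, but it is differentiable:
# the backward pass casts the gradient back to the leaf's dtype.
y = x.double()
assert y.dtype == torch.float64

y.sum().backward()
assert x.grad is not None
assert x.grad.dtype == torch.float32  # grad matches the float32 leaf
```

This also illustrates why the stock autograd test runner needs special handling here: the output dtype differs from the input dtype, which the runner otherwise assumes never happens.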
💊 CI failures summary (as of commit a97a285, more details on the Dr. CI page): 🕵️ 1 new failure recognized by patterns; it does not appear to be due to an upstream breakage (job: unknown).
⚛️ CI Flow: you can add a comment to the PR and tag @pytorchbot with the following commands:
- `@pytorchbot ciflow rerun` — rerun CI ("ciflow/default" is always added automatically)
- `@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow` — rerun with additional labels via `-l <ciflow/label_name>`, equivalent to adding those labels manually and triggering the rerun

For more information, please take a look at the CI Flow Wiki.
mruberry left a comment:

Approving for velocity, but check the inline review comments: they request a comment explaining the JIT skips, and they suggest modeling these after the current `contiguous` OpInfo, which defines a functional variant using a lambda.
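The reviewer's suggestion refers to the pattern where a Tensor *method* is exposed to the OpInfo machinery through a lambda "functional variant", as the existing `contiguous` OpInfo does. A rough, hypothetical sketch of that wrapping (the real OpInfo entries live in PyTorch's testing internals and take many more arguments, omitted here):

```python
import torch

# Hypothetical functional variant: OpInfo test templates call a plain
# function, but Tensor.double is a method, so wrap it in a lambda.
tensor_double = lambda t: t.double()

x = torch.randn(2, 2)
out = tensor_double(x)
assert out.dtype == torch.float64
assert torch.equal(out, x.double())  # same result as calling the method
```

The lambda keeps the OpInfo's calling convention uniform (a callable taking the sample input first) without adding a new functional operator to the library.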
@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Pull Request resolved: #64282

OpInfos for:
- Tensor.bfloat16, Tensor.bool, Tensor.byte, Tensor.char
- Tensor.double, Tensor.float, Tensor.half, Tensor.int
- Tensor.short, Tensor.long

None of these are supported by TorchScript. Also, the OpInfo autograd test runner assumes that the operation is not allowed to change the dtype of the argument, so only Tensor.double has `supports_autograd=True` (in theory Tensor.bfloat16, Tensor.float, and Tensor.half should also be differentiable).

Test Plan:
- run tests

Reviewed By: dagitses
Differential Revision: D31452627
Pulled By: zou3519
fbshipit-source-id: b7f272e558558412c47aefe947af7f060dfb45c5
Stack from ghstack:
- #65941 OpInfo for `*_like` functions
- (this PR) OpInfos for some Tensor dtype conversion methods

None of these are supported by TorchScript. Also, the OpInfo autograd test runner assumes that the operation is not allowed to change the dtype of the argument, so only Tensor.double has `supports_autograd=True` (in theory Tensor.bfloat16, Tensor.float, and Tensor.half should be differentiable).

Test Plan:
- run tests

Differential Revision: D31452627