
OpInfos for some Tensor dtype conversion methods #64282

Closed

zou3519 wants to merge 11 commits into gh/zou3519/377/base from gh/zou3519/377/head

Conversation

zou3519 (Contributor) commented on Aug 31, 2021

Stack from ghstack:

OpInfos for:

  • Tensor.bfloat16, Tensor.bool, Tensor.byte, Tensor.char
  • Tensor.double, Tensor.float, Tensor.half, Tensor.int
  • Tensor.short, Tensor.long

None of these are supported by TorchScript. Also, the OpInfo autograd
test runner assumes that an operation does not change the dtype of its
argument, so only Tensor.double has supports_autograd=True (in theory
Tensor.bfloat16, Tensor.float, and Tensor.half should be differentiable
as well).

Test Plan:

  • run tests

Differential Revision: D31452627
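
For reference, a minimal sketch of what one of these OpInfo entries could look like. The helper name and the keyword arguments below are assumptions for illustration, not the exact code in this PR:

```python
import torch
from torch.testing._internal.common_utils import make_tensor
from torch.testing._internal.common_methods_invocations import OpInfo, SampleInput

# Hypothetical sample-input helper; the PR may structure this differently.
def sample_inputs_conversion(op_info, device, dtype, requires_grad, **kwargs):
    return [SampleInput(make_tensor((3, 4), device=device, dtype=dtype,
                                    requires_grad=requires_grad))]

# Sketch of the Tensor.double entry. A lambda supplies the functional
# variant because there is no torch-level function to call directly.
OpInfo('double',
       op=lambda t: t.double(),
       supports_autograd=True,  # safe here: the autograd tests use double
                                # inputs, and Tensor.double preserves dtype
       sample_inputs_func=sample_inputs_conversion,
       supports_out=False)
```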

facebook-github-bot (Contributor) commented on Aug 31, 2021


💊 CI failures summary and remediations

As of commit a97a285 (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build periodic-linux-xenial-cuda11.1-py3.6-gcc7 / build (1/1)

Step: "Build"

2021-10-11T20:29:47Z ERROR sccache::server: Compilation failed: conftest.c:332:2: error: 'struct sockaddr' has no member named 'sa_len'
2021-10-11T20:29:50Z ERROR sccache::server: Compilation failed: conftest.c:366:10: error: 'RTLD_MEMBER' undeclared (first use in this function); did you mean 'RTLD_NEXT'?
2021-10-11T20:29:50Z ERROR sccache::server: Compilation failed: conftest.c:361:9: error: unknown type name 'not' ("not a universal capable compiler")
2021-10-11T20:29:50Z ERROR sccache::server: Compilation failed: conftest.c:367:4: error: unknown type name 'not'; did you mean 'ino_t'? ("not big endian")
2021-10-11T20:29:52Z ERROR sccache::server: Compilation failed: conftest.c:378:4: error: 'struct stat' has no member named 'st_mtimespec'; did you mean 'st_mtim'?
2021-10-11T20:29:53Z ERROR sccache::server: Compilation failed: conftest.c:402:24: error: expected expression before ')' token: if (sizeof ((socklen_t)))
2021-10-11T20:31:12Z ERROR sccache::server: Compilation failed: torch/csrc/generic/StorageMethods.cpp: error: unused variable 'self' [-Werror=unused-variable] (in THPFloatStorage_elementSize, THPFloatStorage_new, THPDoubleStorage_elementSize, THPDoubleStorage_new) and unused variable 'fd_obj' [-Werror=unused-variable] (in THPFloatStorage_newWithFile, THPDoubleStorage_newWithFile)

=========== If your build fails, please take a look at the log above for possible reasons ===========
Compile requests                    8637
Compile requests executed           6689
Cache hits                          6599
Cache hits (C/C++)                  6295
Cache hits (CUDA)                    304
Cache misses                          15

1 failure not recognized by patterns:

Job: GitHub Actions linux-xenial-cuda11.3-py3.6-gcc7 / test (default, 1, 2, linux.8xlarge.nvidia.gpu), Step: Unknown

This comment was automatically generated by Dr. CI.

zou3519 added a commit that referenced this pull request Aug 31, 2021
ghstack-source-id: a925511
Pull Request resolved: #64282
zou3519 added a commit that referenced this pull request Sep 1, 2021
ghstack-source-id: 3d3ceb8
Pull Request resolved: #64282
zou3519 added a commit that referenced this pull request Sep 30, 2021
ghstack-source-id: 7d21bfa
Pull Request resolved: #64282
pytorch-probot bot commented on Sep 30, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/a97a28546f44c72ea7bd53e4f6378b4e04d4d84f/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default,ciflow/all

Workflow | Labels | Status
Triggered Workflows
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux ✅ triggered
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux ✅ triggered
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow ✅ triggered
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-vulkan-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan ✅ triggered
linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers ✅ triggered
linux-xenial-py3.6-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux ✅ triggered
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled ✅ triggered
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck ✅ triggered
periodic-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled ✅ triggered
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win ✅ triggered
puretorch-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and trigger the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

pmeier (Collaborator) left a comment

Only minor comments inline. Otherwise LGTM when CI is green!

zou3519 requested a review from pmeier on October 6, 2021 15:46
mruberry (Collaborator) left a comment

Approving for velocity, but check the inline review comments: they request a comment explaining the jit skips, and they suggest modeling these after the current contiguous OpInfo, which defines a functional variant using a lambda (see the sketch below).
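
For context, a minimal sketch of the lambda pattern mruberry mentions. The contiguous entry itself is real, but the exact keyword arguments shown here are assumptions for illustration:

```python
from torch.testing._internal.common_methods_invocations import OpInfo

# Tensor.contiguous is a method with no torch.contiguous(...) function,
# so its OpInfo defines the functional variant with a lambda. The dtype
# conversion methods in this PR could follow the same pattern.
OpInfo('contiguous',
       op=lambda x, *args, **kwargs: x.contiguous(*args, **kwargs),
       supports_out=False)
```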

zou3519 (Contributor, Author) commented on Oct 6, 2021

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zou3519 (Contributor, Author) commented on Oct 6, 2021

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zou3519 (Contributor, Author) commented on Oct 7, 2021

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zou3519 (Contributor, Author) commented on Oct 9, 2021

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zou3519 (Contributor, Author) commented on Oct 9, 2021

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zou3519 (Contributor, Author) commented on Oct 11, 2021

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@zou3519 merged this pull request in 5d44529.

facebook-github-bot deleted the gh/zou3519/377/head branch on October 18, 2021 14:44
wconstab pushed a commit that referenced this pull request Oct 20, 2021
Summary:
Pull Request resolved: #64282

OpInfos for:
- Tensor.bfloat16, Tensor.bool, Tensor.byte, Tensor.char
- Tensor.double, Tensor.float, Tensor.half, Tensor.int
- Tensor.short, Tensor.long

None of these are supported by TorchScript. Also, the OpInfo autograd
test runner assumes that an operation does not change the dtype of its
argument, so only Tensor.double has `supports_autograd=True` (in theory
Tensor.bfloat16, Tensor.float, and Tensor.half should be differentiable
as well).

Test Plan: - run tests

Reviewed By: dagitses

Differential Revision: D31452627

Pulled By: zou3519

fbshipit-source-id: b7f272e558558412c47aefe947af7f060dfb45c5
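
As a side note on the autograd caveat above: dtype-changing conversions are differentiable in eager mode, and the restriction comes from the OpInfo test runner rather than from autograd itself. A quick illustration (not code from this PR):

```python
import torch

# A dtype-changing conversion still participates in autograd; the
# gradient is cast back to the input's dtype.
x = torch.randn(3, dtype=torch.float32, requires_grad=True)
y = x.double()       # float32 -> float64, differentiable
y.sum().backward()
print(x.grad.dtype)  # torch.float32
```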