
OpInfo nn.functional.embedding#63633

Closed
krshrimali wants to merge 1 commit into pytorch:master from krshrimali:opinfo/high_priority/nn/functional/embedding

Conversation

@krshrimali
Contributor

Reference: pytorch/functorch#78 and #54261.

TODOs:

  • There are grad tests failing because of max_norm. Need to investigate this.
  • Add more inputs in sample inputs function for completeness.
  • Investigate the documentation and create an issue on what can be improved (and how, if possible).

cc: @mruberry @zou3519
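The grad-test failures noted above are consistent with `max_norm`'s documented side effect: `F.embedding(..., max_norm=...)` renormalizes the looked-up rows of the weight matrix in place, so a reference run and a test run over the same weight tensor can observe different values. The pure-Python sketch below (a toy stand-in, not PyTorch's actual implementation) illustrates the in-place mutation:

```python
import math

def embedding_with_max_norm(weight, indices, max_norm):
    """Toy stand-in for F.embedding(..., max_norm=...): rows that are
    looked up and whose L2 norm exceeds max_norm are rescaled IN PLACE
    before the lookup result is returned."""
    for i in set(indices):
        norm = math.sqrt(sum(x * x for x in weight[i]))
        if norm > max_norm:
            scale = max_norm / norm
            weight[i] = [x * scale for x in weight[i]]
    return [weight[i] for i in indices]

weight = [[3.0, 4.0], [1.0, 0.0]]          # row 0 has L2 norm 5.0
out = embedding_with_max_norm(weight, [0], max_norm=1.0)
print(out)        # [[0.6, 0.8]]
print(weight[0])  # the weight itself was mutated: [0.6, 0.8]
```

Because the call mutates `weight`, any test harness that assumes the op is side-effect-free (e.g. comparing eager against scripted variants, or numerically perturbing inputs for gradcheck) can end up comparing runs that saw different weight values, which may account for the large eager-vs-JIT differences in the CI logs below.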

@facebook-github-bot
Contributor

facebook-github-bot commented Aug 20, 2021


💊 CI failures summary and remediations

As of commit 7e464dc (more details on the Dr. CI page):


  • 13/13 failures possibly* introduced in this PR
    • 1/13 non-scanned failure(s)

🕵️ 12 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build win-vs2019-cpu-py3 / test (default, 1, 2, windows.4xlarge) (1/12)

Step: "Run test scripts" (full log | diagnosis details | 🔁 rerun)

2021-08-26T05:23:01.5166609Z   File "C:\actions-runner\_work\pytorch\pytorch\pytorch-1169016782\build\win_tmp\build\torch\testing\_internal\common_device_type.py", line 373, in instantiated_test
2021-08-26T05:23:01.5167908Z     result = test(self, **param_kwargs)
2021-08-26T05:23:01.5169207Z   File "C:\actions-runner\_work\pytorch\pytorch\pytorch-1169016782\build\win_tmp\build\torch\testing\_internal\common_device_type.py", line 766, in test_wrapper
2021-08-26T05:23:01.5170400Z     return test(*args, **kwargs)
2021-08-26T05:23:01.5171175Z   File "test_ops.py", line 744, in test_variant_consistency_jit
2021-08-26T05:23:01.5171909Z     check_against_reference(self,
2021-08-26T05:23:01.5173228Z   File "C:\actions-runner\_work\pytorch\pytorch\pytorch-1169016782\build\win_tmp\build\torch\testing\_internal\common_jit.py", line 134, in check_against_reference
2021-08-26T05:23:01.5174571Z     self.assertEqual(outputs, outputs_test)
2021-08-26T05:23:01.5175928Z   File "C:\actions-runner\_work\pytorch\pytorch\pytorch-1169016782\build\win_tmp\build\torch\testing\_internal\common_utils.py", line 1554, in assertEqual
2021-08-26T05:23:01.5177337Z     super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
2021-08-26T05:23:01.5179222Z AssertionError: False is not true : Tensors failed to compare as equal!With rtol=1.3e-06 and atol=1e-05, found 2 element(s) (out of 2) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 2.779558092355728 (0.21248158812522888 vs. 2.992039680480957), which occurred at index (0, 0).
2021-08-26T05:23:01.5180639Z 
2021-08-26T05:23:04.3425165Z ----------------------------------------------------------------------
2021-08-26T05:23:04.3425685Z Ran 6868 tests in 1079.180s
2021-08-26T05:23:04.3425867Z 
2021-08-26T05:23:04.3426196Z FAILED (failures=1, errors=2, skipped=2867)
2021-08-26T05:23:04.3426449Z 
2021-08-26T05:23:04.3426726Z Generating XML reports...
2021-08-26T05:23:04.3427451Z Generated XML report: test-reports\python-unittest\test_ops\TEST-TestCommonCPU-20210826050502.xml
2021-08-26T05:23:04.3428547Z Generated XML report: test-reports\python-unittest\test_ops\TEST-TestGradientsCPU-20210826050502.xml
2021-08-26T05:23:04.3429630Z Generated XML report: test-reports\python-unittest\test_ops\TEST-TestJitCPU-20210826050502.xml

See GitHub Actions build linux-xenial-py3.6-gcc5.4 / test (default, 1, 2, linux.2xlarge) (2/12)

Step: "Test PyTorch" (full log | diagnosis details | 🔁 rerun)

2021-08-26T04:20:17.9687997Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 373, in instantiated_test
2021-08-26T04:20:17.9689596Z     result = test(self, **param_kwargs)
2021-08-26T04:20:17.9691265Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 766, in test_wrapper
2021-08-26T04:20:17.9692578Z     return test(*args, **kwargs)
2021-08-26T04:20:17.9693493Z   File "test_ops.py", line 750, in test_variant_consistency_jit
2021-08-26T04:20:17.9694714Z     no_grad=not _requires_grad, no_gradgrad=not op.supports_gradgrad)
2021-08-26T04:20:17.9696339Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_jit.py", line 134, in check_against_reference
2021-08-26T04:20:17.9697799Z     self.assertEqual(outputs, outputs_test)
2021-08-26T04:20:17.9699517Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1554, in assertEqual
2021-08-26T04:20:17.9701094Z     super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
2021-08-26T04:20:17.9704149Z AssertionError: False is not true : Tensors failed to compare as equal!With rtol=1.3e-06 and atol=1e-05, found 2 element(s) (out of 2) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 2.779558092355728 (0.21248158812522888 vs. 2.992039680480957), which occurred at index (0, 0).
2021-08-26T04:20:17.9706044Z 
2021-08-26T04:20:17.9707019Z ----------------------------------------------------------------------
2021-08-26T04:20:17.9707836Z Ran 6868 tests in 613.589s
2021-08-26T04:20:17.9708236Z 
2021-08-26T04:20:17.9708897Z FAILED (failures=1, errors=2, skipped=2860)
2021-08-26T04:20:17.9709452Z 
2021-08-26T04:20:17.9710080Z Generating XML reports...
2021-08-26T04:20:18.2067346Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestCommonCPU-20210826041004.xml
2021-08-26T04:20:18.6459527Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestGradientsCPU-20210826041004.xml
2021-08-26T04:20:18.6976712Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestJitCPU-20210826041004.xml

See GitHub Actions build linux-bionic-py3.8-gcc9-coverage / test (default, 1, 2, linux.2xlarge) (3/12)

Step: "Test PyTorch" (full log | diagnosis details | 🔁 rerun)

2021-08-26T04:32:34.2688435Z   File "/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_device_type.py", line 373, in instantiated_test
2021-08-26T04:32:34.2689091Z     result = test(self, **param_kwargs)
2021-08-26T04:32:34.2689840Z   File "/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_device_type.py", line 766, in test_wrapper
2021-08-26T04:32:34.2690453Z     return test(*args, **kwargs)
2021-08-26T04:32:34.2690896Z   File "test_ops.py", line 744, in test_variant_consistency_jit
2021-08-26T04:32:34.2691366Z     check_against_reference(self,
2021-08-26T04:32:34.2692148Z   File "/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_jit.py", line 134, in check_against_reference
2021-08-26T04:32:34.2692831Z     self.assertEqual(outputs, outputs_test)
2021-08-26T04:32:34.2693646Z   File "/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py", line 1554, in assertEqual
2021-08-26T04:32:34.2694530Z     super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
2021-08-26T04:32:34.2695944Z AssertionError: False is not true : Tensors failed to compare as equal!With rtol=1.3e-06 and atol=1e-05, found 2 element(s) (out of 2) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 2.779558092355728 (0.21248158812522888 vs. 2.992039680480957), which occurred at index (0, 0).
2021-08-26T04:32:34.2696798Z 
2021-08-26T04:32:34.2697357Z ----------------------------------------------------------------------
2021-08-26T04:32:34.2697777Z Ran 6868 tests in 1256.361s
2021-08-26T04:32:34.2697987Z 
2021-08-26T04:32:34.2698314Z FAILED (failures=1, errors=2, skipped=2860)
2021-08-26T04:32:34.2698584Z 
2021-08-26T04:32:34.2698894Z Generating XML reports...
2021-08-26T04:32:34.7308606Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestCommonCPU-20210826041137.xml
2021-08-26T04:32:35.6599951Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestGradientsCPU-20210826041137.xml
2021-08-26T04:32:35.7807744Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestJitCPU-20210826041137.xml

See GitHub Actions build linux-xenial-cuda11.1-py3.6-gcc7 / test (default, 2, 2, linux.8xlarge.nvidia.gpu) (4/12)

Step: "Test PyTorch" (full log | diagnosis details | 🔁 rerun)

2021-08-20T07:17:09.3553880Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 766, in test_wrapper
2021-08-20T07:17:09.3554736Z     return test(*args, **kwargs)
2021-08-20T07:17:09.3555725Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 801, in dep_fn
2021-08-20T07:17:09.3556557Z     return fn(slf, *args, **kwargs)
2021-08-20T07:17:09.3557578Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 949, in only_fn
2021-08-20T07:17:09.3558390Z     return fn(self, *args, **kwargs)
2021-08-20T07:17:09.3558965Z   File "test_ops.py", line 153, in test_dtypes
2021-08-20T07:17:09.3559732Z     self.assertEqual(supported_dtypes, claimed_supported, msg=msg)
2021-08-20T07:17:09.3560968Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1595, in assertEqual
2021-08-20T07:17:09.3561984Z     super().assertEqual(x, y, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
2021-08-20T07:17:09.3562769Z AssertionError: Items in the second set but not the first:
2021-08-20T07:17:09.3563383Z torch.float32
2021-08-20T07:17:09.3563824Z torch.bfloat16
2021-08-20T07:17:09.3564288Z torch.float16
2021-08-20T07:17:09.3565237Z torch.float64 : Attempted to compare [set] types: Expected: set(); Actual: {torch.float32, torch.bfloat16, torch.float16, torch.float64}.
2021-08-20T07:17:09.3566534Z The supported dtypes for nn.functional.embedding on cuda according to its OpInfo are
2021-08-20T07:17:09.3567966Z         {torch.float32, torch.bfloat16, torch.float16, torch.float64}, but the detected supported dtypes are set().
2021-08-20T07:17:09.3569295Z         The following dtypes should be removed from the OpInfo: {torch.float32, torch.bfloat16, torch.float16, torch.float64}.
2021-08-20T07:17:09.3570032Z 
2021-08-20T07:17:09.3570731Z ----------------------------------------------------------------------
2021-08-20T07:17:09.3571323Z Ran 217 tests in 16.883s

See GitHub Actions build linux-xenial-cuda11.3-py3.6-gcc7 / test (default, 1, 2, linux.8xlarge.nvidia.gpu) (5/12)

Step: "Test PyTorch" (full log | diagnosis details | 🔁 rerun)

2021-08-26T04:24:04.3063597Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 766, in test_wrapper
2021-08-26T04:24:04.3064912Z     return test(*args, **kwargs)
2021-08-26T04:24:04.3065972Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 801, in dep_fn
2021-08-26T04:24:04.3066802Z     return fn(slf, *args, **kwargs)
2021-08-26T04:24:04.3067803Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 949, in only_fn
2021-08-26T04:24:04.3068874Z     return fn(self, *args, **kwargs)
2021-08-26T04:24:04.3069421Z   File "test_ops.py", line 153, in test_dtypes
2021-08-26T04:24:04.3070172Z     self.assertEqual(supported_dtypes, claimed_supported, msg=msg)
2021-08-26T04:24:04.3071435Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1597, in assertEqual
2021-08-26T04:24:04.3072438Z     super().assertEqual(x, y, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
2021-08-26T04:24:04.3073293Z AssertionError: Items in the second set but not the first:
2021-08-26T04:24:04.3073889Z torch.float16
2021-08-26T04:24:04.3074338Z torch.float64
2021-08-26T04:24:04.3074768Z torch.float32
2021-08-26T04:24:04.3075744Z torch.bfloat16 : Attempted to compare [set] types: Expected: set(); Actual: {torch.float16, torch.float64, torch.float32, torch.bfloat16}.
2021-08-26T04:24:04.3077029Z The supported dtypes for nn.functional.embedding on cuda according to its OpInfo are
2021-08-26T04:24:04.3078281Z         {torch.float16, torch.float64, torch.float32, torch.bfloat16}, but the detected supported dtypes are set().
2021-08-26T04:24:04.3079573Z         The following dtypes should be removed from the OpInfo: {torch.float16, torch.float64, torch.float32, torch.bfloat16}.
2021-08-26T04:24:04.3080306Z 
2021-08-26T04:24:04.3080988Z ----------------------------------------------------------------------
2021-08-26T04:24:04.3081554Z Ran 217 tests in 15.527s

See CircleCI build pytorch_linux_xenial_py3_clang7_asan_test1 (6/12)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Aug 20 09:42:55   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 373, in instantiated_test
Aug 20 09:42:55     result = test(self, **param_kwargs)
Aug 20 09:42:55   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 766, in test_wrapper
Aug 20 09:42:55     return test(*args, **kwargs)
Aug 20 09:42:55   File "test_ops.py", line 750, in test_variant_consistency_jit
Aug 20 09:42:55     no_grad=not _requires_grad, no_gradgrad=not op.supports_gradgrad)
Aug 20 09:42:55   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_jit.py", line 134, in check_against_reference
Aug 20 09:42:55     self.assertEqual(outputs, outputs_test)
Aug 20 09:42:55   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1552, in assertEqual
Aug 20 09:42:55     super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
Aug 20 09:42:55 AssertionError: False is not true : Tensors failed to compare as equal!With rtol=1.3e-06 and atol=1e-05, found 2 element(s) (out of 2) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 2.779558092355728 (0.21248158812522888 vs. 2.992039680480957), which occurred at index (0, 0).
Aug 20 09:42:55 
Aug 20 09:42:57 ----------------------------------------------------------------------
Aug 20 09:42:57 Ran 6675 tests in 5617.582s
Aug 20 09:42:57 
Aug 20 09:42:57 FAILED (failures=1, errors=2, skipped=2776)
Aug 20 09:42:57 
Aug 20 09:42:57 Generating XML reports...
Aug 20 09:42:57 Generated XML report: test-reports/python-unittest/test_ops/TEST-TestCommonCPU-20210820080918.xml
Aug 20 09:42:57 Generated XML report: test-reports/python-unittest/test_ops/TEST-TestGradientsCPU-20210820080918.xml
Aug 20 09:42:57 Generated XML report: test-reports/python-unittest/test_ops/TEST-TestJitCPU-20210820080918.xml

See CircleCI build pytorch_linux_bionic_py3_6_clang9_noarch_test (7/12)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Aug 20 07:59:26   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 373, in instantiated_test
Aug 20 07:59:26     result = test(self, **param_kwargs)
Aug 20 07:59:26   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 766, in test_wrapper
Aug 20 07:59:26     return test(*args, **kwargs)
Aug 20 07:59:26   File "test_ops.py", line 750, in test_variant_consistency_jit
Aug 20 07:59:26     no_grad=not _requires_grad, no_gradgrad=not op.supports_gradgrad)
Aug 20 07:59:26   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_jit.py", line 134, in check_against_reference
Aug 20 07:59:26     self.assertEqual(outputs, outputs_test)
Aug 20 07:59:26   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1552, in assertEqual
Aug 20 07:59:26     super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
Aug 20 07:59:26 AssertionError: False is not true : Tensors failed to compare as equal!With rtol=1.3e-06 and atol=1e-05, found 2 element(s) (out of 2) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 2.779558092355728 (0.21248158812522888 vs. 2.992039680480957), which occurred at index (0, 0).
Aug 20 07:59:26 
Aug 20 07:59:29 ----------------------------------------------------------------------
Aug 20 07:59:29 Ran 13181 tests in 865.693s
Aug 20 07:59:29 
Aug 20 07:59:29 FAILED (failures=1, errors=2, skipped=9280)
Aug 20 07:59:29 
Aug 20 07:59:29 Generating XML reports...
Aug 20 07:59:29 Generated XML report: test-reports/python-unittest/test_ops/TEST-TestCommonCPU-20210820074500.xml
Aug 20 07:59:29 Generated XML report: test-reports/python-unittest/test_ops/TEST-TestCommonMETA-20210820074500.xml
Aug 20 07:59:29 Generated XML report: test-reports/python-unittest/test_ops/TEST-TestGradientsCPU-20210820074500.xml

See CircleCI build pytorch_linux_xenial_cuda11_1_cudnn8_py3_gcc7_test2 (8/12)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Aug 20 08:52:54   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 766, in test_wrapper
Aug 20 08:52:54     return test(*args, **kwargs)
Aug 20 08:52:54   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 801, in dep_fn
Aug 20 08:52:54     return fn(slf, *args, **kwargs)
Aug 20 08:52:54   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 949, in only_fn
Aug 20 08:52:54     return fn(self, *args, **kwargs)
Aug 20 08:52:54   File "test_ops.py", line 153, in test_dtypes
Aug 20 08:52:54     self.assertEqual(supported_dtypes, claimed_supported, msg=msg)
Aug 20 08:52:54   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1595, in assertEqual
Aug 20 08:52:54     super().assertEqual(x, y, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
Aug 20 08:52:54 AssertionError: Items in the second set but not the first:
Aug 20 08:52:54 torch.float64
Aug 20 08:52:54 torch.float32
Aug 20 08:52:54 torch.bfloat16
Aug 20 08:52:54 torch.float16 : Attempted to compare [set] types: Expected: set(); Actual: {torch.float64, torch.float32, torch.bfloat16, torch.float16}.
Aug 20 08:52:54 The supported dtypes for nn.functional.embedding on cuda according to its OpInfo are
Aug 20 08:52:54         {torch.float64, torch.float32, torch.bfloat16, torch.float16}, but the detected supported dtypes are set().
Aug 20 08:52:54         The following dtypes should be removed from the OpInfo: {torch.float64, torch.float32, torch.bfloat16, torch.float16}.
Aug 20 08:52:54 
Aug 20 08:52:55 ----------------------------------------------------------------------
Aug 20 08:52:55 Ran 214 tests in 15.619s

See CircleCI build pytorch_linux_bionic_cuda10_2_cudnn7_py3_9_gcc7_test2 (9/12)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Aug 20 07:43:53   File "/opt/conda/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 766, in test_wrapper
Aug 20 07:43:53     return test(*args, **kwargs)
Aug 20 07:43:53   File "/opt/conda/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 801, in dep_fn
Aug 20 07:43:53     return fn(slf, *args, **kwargs)
Aug 20 07:43:53   File "/opt/conda/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 949, in only_fn
Aug 20 07:43:53     return fn(self, *args, **kwargs)
Aug 20 07:43:53   File "/var/lib/jenkins/workspace/test/test_ops.py", line 153, in test_dtypes
Aug 20 07:43:53     self.assertEqual(supported_dtypes, claimed_supported, msg=msg)
Aug 20 07:43:53   File "/opt/conda/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 1595, in assertEqual
Aug 20 07:43:53     super().assertEqual(x, y, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
Aug 20 07:43:53 AssertionError: Items in the second set but not the first:
Aug 20 07:43:53 torch.float64
Aug 20 07:43:53 torch.float32
Aug 20 07:43:53 torch.bfloat16
Aug 20 07:43:53 torch.float16 : Attempted to compare [set] types: Expected: set(); Actual: {torch.float64, torch.float32, torch.bfloat16, torch.float16}.
Aug 20 07:43:53 The supported dtypes for nn.functional.embedding on cuda according to its OpInfo are
Aug 20 07:43:53         {torch.float64, torch.float32, torch.bfloat16, torch.float16}, but the detected supported dtypes are set().
Aug 20 07:43:53         The following dtypes should be removed from the OpInfo: {torch.float64, torch.float32, torch.bfloat16, torch.float16}.
Aug 20 07:43:53 
Aug 20 07:43:53 ----------------------------------------------------------------------
Aug 20 07:43:53 Ran 214 tests in 14.805s

See CircleCI build pytorch_macos_10_13_py3_test (10/12)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

Aug 20 07:43:38   File "/Users/distiller/workspace/miniconda3/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 373, in instantiated_test
Aug 20 07:43:38     result = test(self, **param_kwargs)
Aug 20 07:43:38   File "/Users/distiller/workspace/miniconda3/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 766, in test_wrapper
Aug 20 07:43:38     return test(*args, **kwargs)
Aug 20 07:43:38   File "test_ops.py", line 750, in test_variant_consistency_jit
Aug 20 07:43:38     no_grad=not _requires_grad, no_gradgrad=not op.supports_gradgrad)
Aug 20 07:43:38   File "/Users/distiller/workspace/miniconda3/lib/python3.7/site-packages/torch/testing/_internal/common_jit.py", line 134, in check_against_reference
Aug 20 07:43:38     self.assertEqual(outputs, outputs_test)
Aug 20 07:43:38   File "/Users/distiller/workspace/miniconda3/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1552, in assertEqual
Aug 20 07:43:38     super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
Aug 20 07:43:38 AssertionError: False is not true : Tensors failed to compare as equal!With rtol=1.3e-06 and atol=1e-05, found 2 element(s) (out of 2) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 2.779558092355728 (0.21248158812522888 vs. 2.992039680480957), which occurred at index (0, 0).
Aug 20 07:43:38 
Aug 20 07:43:38 ----------------------------------------------------------------------
Aug 20 07:43:38 Ran 6675 tests in 1190.207s
Aug 20 07:43:38 
Aug 20 07:43:38 FAILED (failures=1, errors=2, skipped=2776)
Aug 20 07:43:38 
Aug 20 07:43:38 Generating XML reports...
Aug 20 07:43:39 Generated XML report: test-reports/python-unittest/test_ops/TEST-TestCommonCPU-20210820072348.xml
Aug 20 07:43:40 Generated XML report: test-reports/python-unittest/test_ops/TEST-TestGradientsCPU-20210820072348.xml
Aug 20 07:43:40 Generated XML report: test-reports/python-unittest/test_ops/TEST-TestJitCPU-20210820072348.xml

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_build (11/12)

Step: "(Optional) Merge target branch" (full log | diagnosis details | 🔁 rerun)

CONFLICT (add/add): Merge conflict in .github/scripts/generate_ci_workflows.py
Auto-merging .github/scripts/generate_ci_workflows.py
CONFLICT (add/add): Merge conflict in .github/generated-ciflow-ruleset.json
Auto-merging .github/generated-ciflow-ruleset.json
CONFLICT (add/add): Merge conflict in .circleci/verbatim-sources/job-specs/pytorch-job-specs.yml
Auto-merging .circleci/verbatim-sources/job-specs/pytorch-job-specs.yml
CONFLICT (add/add): Merge conflict in .circleci/scripts/setup_ci_environment.sh
Auto-merging .circleci/scripts/setup_ci_environment.sh
CONFLICT (add/add): Merge conflict in .circleci/config.yml
Auto-merging .circleci/config.yml
Automatic merge failed; fix conflicts and then commit the result.


Exited with code exit status 1

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_build (12/12)

Step: "(Optional) Merge target branch" (full log | diagnosis details | 🔁 rerun)

CONFLICT (add/add): Merge conflict in .github/scripts/generate_ci_workflows.py
Auto-merging .github/scripts/generate_ci_workflows.py
CONFLICT (add/add): Merge conflict in .github/generated-ciflow-ruleset.json
Auto-merging .github/generated-ciflow-ruleset.json
CONFLICT (add/add): Merge conflict in .circleci/verbatim-sources/job-specs/pytorch-job-specs.yml
Auto-merging .circleci/verbatim-sources/job-specs/pytorch-job-specs.yml
CONFLICT (add/add): Merge conflict in .circleci/scripts/setup_ci_environment.sh
Auto-merging .circleci/scripts/setup_ci_environment.sh
CONFLICT (add/add): Merge conflict in .circleci/config.yml
Auto-merging .circleci/config.yml
Automatic merge failed; fix conflicts and then commit the result.


Exited with code exit status 1


ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@zou3519 zou3519 self-requested a review August 20, 2021 15:22
@ejguan ejguan added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Aug 20, 2021
@krshrimali
Contributor Author

Closing this in favor of #63966. :)

@krshrimali krshrimali closed this Aug 26, 2021
@mruberry mruberry reopened this Aug 26, 2021
@mruberry
Collaborator

Let's not close this too hastily. The PRs are pretty different, so unless they're combined they both seem reasonable. I'll let @zou3519 referee their differences.

@zou3519
Contributor

zou3519 commented Oct 25, 2021

I think we can close this now; we added support for embedding in #66997.

@krshrimali krshrimali closed this Oct 26, 2021

Labels

cla signed open source triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Projects

None yet


6 participants