
temporarily duplicating the stack from #75819 (#75835)

Closed
bdhirsh wants to merge 1 commit into gh/bdhirsh/207/base from gh/bdhirsh/207/head

Conversation


bdhirsh commented Apr 14, 2022

Stack from ghstack:

and exporting it to fbcode
(not actually landing this)


[ghstack-poisoned]

facebook-github-bot commented Apr 14, 2022


💊 CI failures summary and remediations

As of commit cd97d0f (more details on the Dr. CI page):


  • 11/11 failures introduced in this PR

🕵️ 10 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages

See GitHub Actions build pull / linux-bionic-py3.7-clang9 / test (default, 2, 2, linux.2xlarge) (1/10)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-04-14T21:42:57.4578741Z FAIL [0.003s]: test_everything (__main__.TestFunctionalization)
2022-04-14T21:42:57.4508009Z   tmp = torch.ones(2)
2022-04-14T21:42:57.4524376Z ok (0.003s)
2022-04-14T21:42:57.4536302Z   test_tensor_list_composite (__main__.TestFunctionalization) ... /opt/conda/lib/python3.7/site-packages/torch/functional.py:1254: UserWarning: Note: the functionalization pass encountered an operator (zero_) that it could not functionalize, because it couldn't find an out-of-place equivalent of the operator to call. Instead, it's calling the inplace/view operator directly. If this causes problems in your program, consider upstreaming the out-of-place op to PyTorch. (Triggered internally at  /var/lib/jenkins/workspace/build/aten/src/ATen/RegisterFunctionalization_2.cpp:4344.)
2022-04-14T21:42:57.4538098Z   return torch._C._VariableFunctions.block_diag(tensors)  # type: ignore[attr-defined]
2022-04-14T21:42:57.4553739Z ok (0.003s)
2022-04-14T21:42:57.4565173Z   test_view_inplace (__main__.TestFunctionalization) ... test_functionalization.py:237: UserWarning: Note: the functionalization pass encountered an operator (fill_.Scalar) that it could not functionalize, because it couldn't find an out-of-place equivalent of the operator to call. Instead, it's calling the inplace/view operator directly. If this causes problems in your program, consider upstreaming the out-of-place op to PyTorch. (Triggered internally at  /var/lib/jenkins/workspace/build/aten/src/ATen/RegisterFunctionalization_2.cpp:2391.)
2022-04-14T21:42:57.4566599Z   tmp = torch.ones(4)
2022-04-14T21:42:57.4577656Z ok (0.002s)
2022-04-14T21:42:57.4578044Z 
2022-04-14T21:42:57.4578205Z ======================================================================
2022-04-14T21:42:57.4578741Z FAIL [0.003s]: test_everything (__main__.TestFunctionalization)
2022-04-14T21:42:57.4579290Z ----------------------------------------------------------------------
2022-04-14T21:42:57.4580024Z Traceback (most recent call last):
2022-04-14T21:42:57.4580522Z   File "test_functionalization.py", line 297, in test_everything
2022-04-14T21:42:57.4580982Z     self.assert_functionalization(f, torch.ones(4, 2))
2022-04-14T21:42:57.4581262Z   File "test_functionalization.py", line 92, in assert_functionalization
2022-04-14T21:42:57.4581653Z     self.assertEqual(inpt, torch._from_functional_tensor(input_functional))  # input mutations should still occur
2022-04-14T21:42:57.4582175Z   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2193, in assertEqual
2022-04-14T21:42:57.4582449Z     msg=msg,
2022-04-14T21:42:57.4582786Z   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_comparison.py", line 1074, in assert_equal
2022-04-14T21:42:57.4583107Z     raise error_metas[0].to_error()

See GitHub Actions build pull / linux-xenial-py3.7-gcc5.4 / test (default, 1, 2, linux.2xlarge) (2/10)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

(identical test_everything (TestFunctionalization) failure as in 1/10; only timestamps differ)

See GitHub Actions build pull / win-vs2019-cpu-py3 / test (default, 2, 2, windows.4xlarge) (3/10)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

(identical test_everything (TestFunctionalization) failure as in 1/10; only timestamps and Windows build paths differ)

See GitHub Actions build pull / linux-xenial-py3.7-gcc7 / test (default, 2, 2, linux.2xlarge) (4/10)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

(identical test_everything (TestFunctionalization) failure as in 1/10; only timestamps differ)

See GitHub Actions build pull / linux-bionic-py3.7-clang9 / test (noarch, 1, 1, linux.2xlarge) (5/10)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

(identical test_everything (TestFunctionalization) failure as in 1/10; only timestamps differ)

See GitHub Actions build pull / win-vs2019-cuda11.3-py3 / test (default, 2, 2, windows.8xlarge.nvidia.gpu) (6/10)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

(identical test_everything (TestFunctionalization) failure as in 1/10; only timestamps and Windows build paths differ)

See GitHub Actions build pull / win-vs2019-cuda11.3-py3 / test (force_on_cpu, 1, 1, windows.4xlarge) (7/10)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

(identical test_everything (TestFunctionalization) failure as in 1/10; only timestamps and Windows build paths differ)

See GitHub Actions build pull / linux-bionic-rocm5.0-py3.7 / test (default, 1, 2, linux.rocm.gpu) (8/10)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

(identical test_everything (TestFunctionalization) failure as in 1/10; only timestamps differ)

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 2, 2, linux.4xlarge.nvidia.gpu) (9/10)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

(identical test_everything (TestFunctionalization) failure as in 1/10; only timestamps differ)

See GitHub Actions build pull / pytorch-xla-linux-bionic-py3.7-clang8 / test (xla, 1, 1, linux.2xlarge) (10/10)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-04-14T22:23:23.8385617Z RuntimeError: Erro... 3), stride=(125, 25, 5, 1, 125), storage_offset=0
2022-04-14T22:23:23.8382671Z   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 780, in test_wrapper
2022-04-14T22:23:23.8382969Z     return test(*args, **kwargs)
2022-04-14T22:23:23.8383236Z   File "/var/lib/jenkins/workspace/xla/test/test_ops.py", line 411, in test_reference_eager
2022-04-14T22:23:23.8383536Z     self.compare_with_eager_reference(op, sample_input)
2022-04-14T22:23:23.8383831Z   File "/var/lib/jenkins/workspace/xla/test/test_ops.py", line 402, in compare_with_eager_reference
2022-04-14T22:23:23.8384162Z     self.assertEqual(actual, expected, exact_dtype=True, exact_device=False)
2022-04-14T22:23:23.8384483Z   File "/var/lib/jenkins/workspace/xla/test/pytorch_test_base.py", line 626, in assertEqual
2022-04-14T22:23:23.8384743Z     x, y = self.prepare_for_compare(x, y)
2022-04-14T22:23:23.8385033Z   File "/var/lib/jenkins/workspace/xla/test/pytorch_test_base.py", line 569, in prepare_for_compare
2022-04-14T22:23:23.8385327Z     x = tx.to(device='cpu')
2022-04-14T22:23:23.8385617Z RuntimeError: Error while lowering: [] aten::as_strided, size=(3, 5, 5, 5, 3), stride=(125, 25, 5, 1, 125), storage_offset=0
2022-04-14T22:23:23.8386114Z Error: /var/lib/jenkins/workspace/xla/torch_xla/csrc/ops/as_strided.cpp:26 : Check failed: storage_offset + slice_size <= input_element_count (1125 vs. 625)
2022-04-14T22:23:23.8386429Z *** Begin stack trace ***
2022-04-14T22:23:23.8386651Z 	tensorflow::CurrentStackTrace[abi:cxx11]()
2022-04-14T22:23:23.8386922Z 	torch_xla::ir::ops::AsStrided::Lower(torch_xla::ir::LoweringContext*) const
2022-04-14T22:23:23.8387227Z 	torch_xla::ir::LoweringContext::LowerNode(torch::lazy::Node const*)
2022-04-14T22:23:23.8387860Z 	torch_xla::ir::LoweringContext::LoweringContext(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, torch_xla::Device, absl::lts_20211102::Span<torch::lazy::Node const* const>, std::unordered_map<torch::lazy::Node const*, torch::lazy::Util::EmitStatus, std::hash<torch::lazy::Node const*>, std::equal_to<torch::lazy::Node const*>, std::allocator<std::pair<torch::lazy::Node const* const, torch::lazy::Util::EmitStatus> > >)
2022-04-14T22:23:23.8388727Z 	torch_xla::XLATensor::Compile(std::vector<torch_xla::XLATensor, std::allocator<torch_xla::XLATensor> > const&, absl::lts_20211102::Span<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const>, torch_xla::XLATensor::SyncTensorCollection const&, torch_xla::XLATensor::PostOrderData*)
2022-04-14T22:23:23.8389515Z 	torch_xla::XLATensor::SyncTensorsGraphInternal(std::vector<torch_xla::XLATensor, std::allocator<torch_xla::XLATensor> >*, absl::lts_20211102::Span<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const>, torch_xla::XLATensor::SyncTensorsConfig const&)
2022-04-14T22:23:23.8390155Z 	torch_xla::XLATensor::SyncTensorsGraph(std::vector<torch_xla::XLATensor, std::allocator<torch_xla::XLATensor> >*, absl::lts_20211102::Span<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const>, bool, bool)
2022-04-14T22:23:23.8390565Z 	torch_xla::XLATensor::ApplyPendingGraph()
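The failed check can be reproduced arithmetically. The size, stride, offset, and element-count numbers below come directly from the error message; the two bounds compared are a sketch of why eager PyTorch accepts this view while the XLA check in as_strided.cpp rejects it (the repeated stride of 125 makes the view self-overlapping, so the number of elements in the view exceeds the storage it reads from):

```python
# Numbers taken from the RuntimeError above:
#   aten::as_strided, size=(3, 5, 5, 5, 3), stride=(125, 25, 5, 1, 125)
#   Check failed: storage_offset + slice_size <= input_element_count (1125 vs. 625)
from math import prod

size = (3, 5, 5, 5, 3)
stride = (125, 25, 5, 1, 125)
storage_offset = 0
input_element_count = 625

# Eager's requirement (sketch): the largest linear index the view can
# touch must fit inside the storage.
max_index = storage_offset + sum((s - 1) * st for s, st in zip(size, stride))
assert max_index + 1 <= input_element_count  # 625 <= 625, so eager accepts it

# XLA's stricter check treats the view as a dense slice of
# prod(size) elements, which an overlapping view can't be.
slice_size = prod(size)  # 3 * 5 * 5 * 5 * 3 = 1125
assert storage_offset + slice_size > input_element_count  # 1125 > 625: lowering fails
```

In other words, the view is legal for eager because it re-reads the same storage locations (strides 125 appear twice), but XLA's lowering assumes a non-overlapping slice and trips its bound check.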

🕵️‍♀️ 1 failure not recognized by patterns:

The following CI failure may be due to changes from the PR:

Job: GitHub Actions Lint / lintrunner
Step: Run lintrunner on PR files

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


bdhirsh added a commit that referenced this pull request Apr 14, 2022
and exporting it to fbcode
(not actually landing this)

ghstack-source-id: b486c57
Pull Request resolved: #75835
@albanD albanD removed their request for review April 18, 2022 14:52
@github-actions

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label Jun 17, 2022
@github-actions github-actions bot closed this Jul 17, 2022
@facebook-github-bot facebook-github-bot deleted the gh/bdhirsh/207/head branch August 17, 2022 14:18
