
Add Dropout1d module #79545

Closed
jbschlosser wants to merge 5 commits into gh/jbschlosser/39/base from gh/jbschlosser/39/head

Conversation

@jbschlosser
Contributor

@jbschlosser jbschlosser commented Jun 14, 2022

[ghstack-poisoned]
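For context, `Dropout1d` applies channel-wise dropout to 1-D signals: during training, entire channels of an `(N, C, L)` input are zeroed with probability `p` and the surviving channels are scaled by `1/(1-p)` so the expected value is unchanged. Below is a minimal pure-Python sketch of that semantics, not the actual PyTorch implementation; `dropout1d_sketch` is a hypothetical helper operating on nested lists instead of tensors:

```python
import random

def dropout1d_sketch(x, p=0.5, training=True, rng=random):
    """Zero entire channels of an (N, C, L) nested list with probability p.

    Illustrative sketch of Dropout1d semantics only -- the real module
    operates on torch tensors and drops whole channels, not single elements.
    """
    if not training or p == 0.0:
        # Eval mode: identity, like the other Dropout* modules.
        return [[channel[:] for channel in sample] for sample in x]
    scale = 1.0 / (1.0 - p)  # kept channels are rescaled so E[out] == E[in]
    out = []
    for sample in x:
        new_sample = []
        for channel in sample:
            if rng.random() < p:
                # The whole channel is dropped together.
                new_sample.append([0.0] * len(channel))
            else:
                new_sample.append([v * scale for v in channel])
        out.append(new_sample)
    return out
```

With `training=False` the input passes through unchanged; with `p=0.5` every kept channel is doubled, mirroring the inverted-dropout scaling convention.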
@facebook-github-bot
Contributor

facebook-github-bot commented Jun 14, 2022

❌ 18 New Failures

As of commit d90de21 (more details on the Dr. CI page):

  • 18/18 failures introduced in this PR

🕵️ 17 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages

See GitHub Actions build pull / linux-focal-py3.7-clang7-asan / test (default, 1, 5, linux.2xlarge) (1/17)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-06-14T20:10:22.1871158Z SUMMARY: Undefined.../jenkins/workspace/aten/src/ATen/Utils.cpp:20:3 in
2022-06-14T20:10:22.1851684Z     #10 0x5592b7945c81 in run_mod /home/builder/tkoch/workspace/python_1648536129212/work/Python/pythonrun.c:1037
2022-06-14T20:10:22.1852246Z     #11 0x5592b7950c69 in PyRun_StringFlags /home/builder/tkoch/workspace/python_1648536129212/work/Python/pythonrun.c:961
2022-06-14T20:10:22.1853357Z     #12 0x5592b7950ccb in PyRun_SimpleStringFlags /home/builder/tkoch/workspace/python_1648536129212/work/Python/pythonrun.c:455
2022-06-14T20:10:22.1854532Z     #13 0x5592b7950dc8 in pymain_run_command /home/builder/tkoch/workspace/python_1648536129212/work/Modules/main.c:420
2022-06-14T20:10:22.1855174Z     #14 0x5592b7950dc8 in pymain_run_python /home/builder/tkoch/workspace/python_1648536129212/work/Modules/main.c:2907
2022-06-14T20:10:22.1855638Z     #15 0x5592b7950dc8 in pymain_main /home/builder/tkoch/workspace/python_1648536129212/work/Modules/main.c:3460
2022-06-14T20:10:22.1856503Z     #16 0x5592b795118b in _Py_UnixMain /home/builder/tkoch/workspace/python_1648536129212/work/Modules/main.c:3495
2022-06-14T20:10:22.1869877Z     #17 0x7f73a8395082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082)
2022-06-14T20:10:22.1870355Z     #18 0x5592b78f6039 in _start (/opt/conda/bin/python3.7+0x1d8039)
2022-06-14T20:10:22.1870578Z 
2022-06-14T20:10:22.1871158Z SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /var/lib/jenkins/workspace/aten/src/ATen/Utils.cpp:20:3 in 
2022-06-14T20:10:22.2166061Z + retcode=1
2022-06-14T20:10:22.2166516Z + set -e
2022-06-14T20:10:22.2166754Z + return 1
2022-06-14T20:10:22.2168939Z + [[ linux-focal-py3.7-clang7-asan-default == *-NO_AVX-* ]]
2022-06-14T20:10:22.2169426Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X ]]
2022-06-14T20:10:22.2170009Z + [[ linux-focal-py3.7-clang7-asan-default == *-NO_AVX2-* ]]
2022-06-14T20:10:22.2170641Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]]
2022-06-14T20:10:22.2171248Z + [[ linux-focal-py3.7-clang7-asan-default == *-NO_AVX512-* ]]
2022-06-14T20:10:22.2171746Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X\5\1\2 ]]
2022-06-14T20:10:22.2173916Z + [[ linux-focal-py3.7-clang7-asan-default == *tbb* ]]

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (distributed, 2, 2, linux.8xlarge.nvidia.gpu) (2/17)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-06-14T20:51:28.0271975Z RuntimeError: Expe...e, but found at least two devices, cuda:0 and cpu!
2022-06-14T20:51:28.0249072Z frame #36: clone + 0x6d (0x7f9a9d7e051d in /lib/x86_64-linux-gnu/libc.so.6)
2022-06-14T20:51:28.0249297Z 
2022-06-14T20:51:28.0249320Z 
2022-06-14T20:51:28.0249456Z On WorkerInfo(id=3, name=worker3):
2022-06-14T20:51:28.0262011Z RuntimeError('Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!\nException raised from compute_types at /var/lib/jenkins/workspace/aten/src/ATen/TensorIterator.cpp:484 (most recent call first):\nframe #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f50c620120b in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so)\nframe #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xce (0x7f50c61fcbde in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so)\nframe #2: at::TensorIteratorBase::compute_types(at::TensorIteratorConfig const&) + 0xc7b (0x7f50d04bbb3b in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\nframe #3: at::TensorIteratorBase::build(at::TensorIteratorConfig&) + 0x7f (0x7f50d04bcfff in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\nframe #4: at::TensorIteratorBase::build_borrowing_binary_op(at::TensorBase const&, at::TensorBase const&, at::TensorBase const&) + 0xf2 (0x7f50d04be7f2 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\nframe #5: at::meta::structured_add_Tensor::meta(at::Tensor const&, at::Tensor const&, c10::Scalar const&) + 0x2e (0x7f50d068c7be in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\nframe #6: <unknown function> + 0x2be960e (0x7f50c902960e in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so)\nframe #7: <unknown function> + 0x2be9716 (0x7f50c9029716 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so)\nframe #8: at::_ops::add_Tensor::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::Scalar const&) + 0x98 (0x7f50d0fba118 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\nframe #9: <unknown 
function> + 0x2e2f955 (0x7f50d24f5955 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\nframe #10: <unknown function> + 0x2e300d9 (0x7f50d24f60d9 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\nframe #11: at::_ops::add_Tensor::call(at::Tensor const&, at::Tensor const&, c10::Scalar const&) + 0x172 (0x7f50d0fede42 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\nframe #12: <unknown function> + 0x2de707 (0x7f50dd4e1707 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so)\nframe #13: <unknown function> + 0x2dea46 (0x7f50dd4e1a46 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so)\nframe #14: _PyMethodDef_RawFastCallDict + 0x264 (0x55c4a8e893f4 in /opt/conda/bin/python)\nframe #15: _PyObject_FastCallDict + 0x6e (0x55c4a8e5a2ee in /opt/conda/bin/python)\nframe #16: <unknown function> + 0x135eb0 (0x55c4a8e75eb0 in /opt/conda/bin/python)\nframe #17: <unknown function> + 0x1f5a6f (0x55c4a8f35a6f in /opt/conda/bin/python)\nframe #18: PyNumber_Add + 0x41 (0x55c4a8e940d1 in /opt/conda/bin/python)\nframe #19: _PyEval_EvalFrameDefault + 0xfba (0x55c4a8f03f5a in /opt/conda/bin/python)\nframe #20: _PyFunction_FastCallDict + 0x118 (0x55c4a8e77cf8 in /opt/conda/bin/python)\nframe #21: _PyEval_EvalFrameDefault + 0x1cb8 (0x55c4a8f04c58 in /opt/conda/bin/python)\nframe #22: _PyFunction_FastCallDict + 0x118 (0x55c4a8e77cf8 in /opt/conda/bin/python)\nframe #23: <unknown function> + 0x9c0a7f (0x7f50ddbc3a7f in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so)\nframe #24: torch::distributed::rpc::PythonRpcHandler::runPythonUdf(pybind11::object const&) + 0x7d (0x7f50ddbc242d in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so)\nframe #25: torch::distributed::rpc::RequestCallbackImpl::runPythonFunction(pybind11::object const&, std::vector<c10::Stream, std::allocator<c10::Stream> >, bool) const + 0x83 (0x7f50ddbc4e13 in 
/opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so)\nframe #26: torch::distributed::rpc::RequestCallbackImpl::processPythonCall(torch::distributed::rpc::RpcCommandBase&, std::vector<c10::Stream, std::allocator<c10::Stream> >) const + 0x96 (0x7f50ddbc8e56 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so)\nframe #27: torch::distributed::rpc::RequestCallbackNoPython::processRpc(torch::distributed::rpc::RpcCommandBase&, torch::distributed::rpc::MessageType const&, std::vector<c10::Stream, std::allocator<c10::Stream> >) const + 0x10c (0x7f50d36ca59c in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\nframe #28: torch::distributed::rpc::RequestCallbackImpl::processRpcWithErrors(torch::distributed::rpc::RpcCommandBase&, torch::distributed::rpc::MessageType const&, std::vector<c10::Stream, std::allocator<c10::Stream> >) const + 0x65 (0x7f50ddbc4c05 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so)\nframe #29: <unknown function> + 0x3ffd9c3 (0x7f50d36c39c3 in /
2022-06-14T20:51:28.0269445Z Traceback (most recent call last):
2022-06-14T20:51:28.0270031Z   File "/opt/conda/lib/python3.7/site-packages/torch/distributed/rpc/internal.py", line 206, in _run_function
2022-06-14T20:51:28.0270497Z     result = python_udf.func(*python_udf.args, **python_udf.kwargs)
2022-06-14T20:51:28.0271172Z   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 6211, in _gpu_add_wrong_gpus
2022-06-14T20:51:28.0271573Z     return x.cpu() + y.cuda()
2022-06-14T20:51:28.0271975Z RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
2022-06-14T20:51:28.0272520Z Exception raised from compute_types at /var/lib/jenkins/workspace/aten/src/ATen/TensorIterator.cpp:484 (most recent call first):
2022-06-14T20:51:28.0273378Z frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f50c620120b in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so)
2022-06-14T20:51:28.0274348Z frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xce (0x7f50c61fcbde in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so)
2022-06-14T20:51:28.0275253Z frame #2: at::TensorIteratorBase::compute_types(at::TensorIteratorConfig const&) + 0xc7b (0x7f50d04bbb3b in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
2022-06-14T20:51:28.0276066Z frame #3: at::TensorIteratorBase::build(at::TensorIteratorConfig&) + 0x7f (0x7f50d04bcfff in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
2022-06-14T20:51:28.0276972Z frame #4: at::TensorIteratorBase::build_borrowing_binary_op(at::TensorBase const&, at::TensorBase const&, at::TensorBase const&) + 0xf2 (0x7f50d04be7f2 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
2022-06-14T20:51:28.0277884Z frame #5: at::meta::structured_add_Tensor::meta(at::Tensor const&, at::Tensor const&, c10::Scalar const&) + 0x2e (0x7f50d068c7be in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
2022-06-14T20:51:28.0278598Z frame #6: <unknown function> + 0x2be960e (0x7f50c902960e in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so)
2022-06-14T20:51:28.0279257Z frame #7: <unknown function> + 0x2be9716 (0x7f50c9029716 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so)
2022-06-14T20:51:28.0280078Z frame #8: at::_ops::add_Tensor::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::Scalar const&) + 0x98 (0x7f50d0fba118 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
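The traceback above comes from the test helper `_gpu_add_wrong_gpus` deliberately evaluating `x.cpu() + y.cuda()`: TensorIterator's `compute_types` rejects binary ops whose operands live on different devices. A pure-Python sketch of that device check (all names here are illustrative, not PyTorch internals):

```python
class FakeTensor:
    """Stand-in for a tensor that only tracks its device string."""
    def __init__(self, device):
        self.device = device

def check_same_device(*tensors):
    """Raise, as TensorIterator does, when operands span multiple devices."""
    devices = {t.device for t in tensors}
    if len(devices) > 1:
        raise RuntimeError(
            "Expected all tensors to be on the same device, but found "
            "at least two devices, " + " and ".join(sorted(devices)) + "!"
        )

# Same device: no error.
check_same_device(FakeTensor("cpu"), FakeTensor("cpu"))
```

The fix in real code is to move both operands to one device (e.g. `x.cuda() + y.cuda()`) before the op; the test intentionally does not, to verify the error propagates through RPC.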

See GitHub Actions build pull / linux-focal-py3.7-gcc7 / test (distributed, 1, 1, linux.2xlarge) (3/17)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-06-14T20:43:26.3007795Z [E request_callbac...yUniqueId(created_on=0, local_id=0) to be created.
2022-06-14T20:43:22.8940224Z INFO:torch.distributed.nn.jit.instantiator:Created a temporary directory at /tmp/tmp6q4wxozb
2022-06-14T20:43:22.8940937Z INFO:torch.distributed.nn.jit.instantiator:Writing /tmp/tmp6q4wxozb/_remote_module_non_scriptable.py
2022-06-14T20:43:22.9438662Z INFO:torch.distributed.nn.jit.instantiator:Created a temporary directory at /tmp/tmp8_gqbxdt
2022-06-14T20:43:22.9439469Z INFO:torch.distributed.nn.jit.instantiator:Writing /tmp/tmp8_gqbxdt/_remote_module_non_scriptable.py
2022-06-14T20:43:22.9459623Z INFO:torch.distributed.nn.jit.instantiator:Created a temporary directory at /tmp/tmprv00iw18
2022-06-14T20:43:22.9460951Z INFO:torch.distributed.nn.jit.instantiator:Writing /tmp/tmprv00iw18/_remote_module_non_scriptable.py
2022-06-14T20:43:23.1535400Z INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 0
2022-06-14T20:43:23.1593750Z INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 3
2022-06-14T20:43:23.2083123Z INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 2
2022-06-14T20:43:23.2263529Z INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 1
2022-06-14T20:43:26.3007795Z [E request_callback_no_python.cpp:559] Received error while processing request type 261: false INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/torch/csrc/distributed/rpc/rref_context.cpp":387, please report a bug to PyTorch. Expected OwnerRRef with id GloballyUniqueId(created_on=0, local_id=0) to be created.
2022-06-14T20:43:26.3008578Z Exception raised from getOwnerRRef at /var/lib/jenkins/workspace/torch/csrc/distributed/rpc/rref_context.cpp:387 (most recent call first):
2022-06-14T20:43:26.3009463Z frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7faf642e056b in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so)
2022-06-14T20:43:26.3010517Z frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xce (0x7faf642dbf3e in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so)
2022-06-14T20:43:26.3011977Z frame #2: c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x4e (0x7faf642ddcde in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so)
2022-06-14T20:43:26.3012837Z frame #3: torch::distributed::rpc::RRefContext::getOwnerRRef(torch::distributed::rpc::GloballyUniqueId const&, bool) + 0x440 (0x7faf68306ed0 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
2022-06-14T20:43:26.3013836Z frame #4: torch::distributed::rpc::RequestCallbackNoPython::assignOwnerRRef(torch::distributed::rpc::GloballyUniqueId const&, torch::distributed::rpc::GloballyUniqueId const&, c10::intrusive_ptr<c10::ivalue::Future, c10::detail::intrusive_target_default_null_type<c10::ivalue::Future> >) const + 0x5ec (0x7faf682f72bc in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
2022-06-14T20:43:26.3014824Z frame #5: torch::distributed::rpc::RequestCallbackImpl::processPythonRemoteCall(torch::distributed::rpc::RpcCommandBase&, std::vector<c10::Stream, std::allocator<c10::Stream> >) const + 0xcc (0x7faf7000ab7c in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
2022-06-14T20:43:26.3015771Z frame #6: torch::distributed::rpc::RequestCallbackNoPython::processRpc(torch::distributed::rpc::RpcCommandBase&, torch::distributed::rpc::MessageType const&, std::vector<c10::Stream, std::allocator<c10::Stream> >) const + 0x194 (0x7faf682fbb44 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
2022-06-14T20:43:26.3016745Z frame #7: torch::distributed::rpc::RequestCallbackImpl::processRpcWithErrors(torch::distributed::rpc::RpcCommandBase&, torch::distributed::rpc::MessageType const&, std::vector<c10::Stream, std::allocator<c10::Stream> >) const + 0x65 (0x7faf7000a2d5 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
2022-06-14T20:43:26.3017426Z frame #8: <unknown function> + 0x3fd4ee3 (0x7faf682f4ee3 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 4, 4, linux.4xlarge.nvidia.gpu) (4/17)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-06-14T20:52:11.6760240Z test_RNN_change_...as.so.11: undefined symbol: cublasGetSmCountTarget
2022-06-14T20:52:11.5214121Z   test_PoissonNLLLoss_no_full_loss_no_log_input_cuda_double (__main__.TestNN) ... ok (0.003s)
2022-06-14T20:52:11.5242734Z   test_PoissonNLLLoss_no_full_loss_no_log_input_cuda_float (__main__.TestNN) ... ok (0.003s)
2022-06-14T20:52:11.5271380Z   test_PoissonNLLLoss_no_full_loss_no_log_input_cuda_half (__main__.TestNN) ... ok (0.003s)
2022-06-14T20:52:11.5410168Z   test_PoissonNLLLoss_no_reduce (__main__.TestNN) ... ok (0.014s)
2022-06-14T20:52:11.5574830Z   test_PoissonNLLLoss_no_reduce_cuda (__main__.TestNN) ... ok (0.016s)
2022-06-14T20:52:11.5664646Z   test_RNN_cell (__main__.TestNN) ... ok (0.009s)
2022-06-14T20:52:11.5785501Z   test_RNN_cell_forward_hidden_size (__main__.TestNN) ... ok (0.012s)
2022-06-14T20:52:11.5870808Z   test_RNN_cell_forward_input_size (__main__.TestNN) ... ok (0.008s)
2022-06-14T20:52:11.5898473Z   test_RNN_cell_forward_zero_hidden_size (__main__.TestNN) ... ok (0.003s)
2022-06-14T20:52:11.6335377Z   test_RNN_cell_no_broadcasting (__main__.TestNN) ... ok (0.043s)
2022-06-14T20:52:11.6760240Z   test_RNN_change_dropout (__main__.TestNN) ... Could not load symbol cublasGetSmCountTarget from libcublas.so.11. Error: /usr/local/cuda/lib64/libcublas.so.11: undefined symbol: cublasGetSmCountTarget
2022-06-14T20:52:11.7602376Z ok (0.126s)
2022-06-14T20:52:15.2283325Z   test_RNN_cpu_vs_cudnn_no_dropout (__main__.TestNN) ... ok (3.468s)
2022-06-14T20:52:18.6937204Z   test_RNN_cpu_vs_cudnn_with_dropout (__main__.TestNN) ... ok (3.465s)
2022-06-14T20:52:18.6991533Z   test_RNN_cudnn_weight_norm (__main__.TestNN) ... /opt/conda/lib/python3.7/site-packages/torch/nn/modules/rnn.py:770: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters(). (Triggered internally at  /var/lib/jenkins/workspace/aten/src/ATen/native/cudnn/RNN.cpp:968.)
2022-06-14T20:52:18.6992495Z   self.dropout, self.training, self.bidirectional, self.batch_first)
2022-06-14T20:52:18.7008022Z /opt/conda/lib/python3.7/site-packages/torch/nn/modules/rnn.py:770: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters(). (Triggered internally at  /var/lib/jenkins/workspace/aten/src/ATen/native/cudnn/RNN.cpp:968.)
2022-06-14T20:52:18.7009151Z   self.dropout, self.training, self.bidirectional, self.batch_first)
2022-06-14T20:52:18.7056383Z /opt/conda/lib/python3.7/site-packages/torch/nn/modules/rnn.py:770: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters(). (Triggered internally at  /var/lib/jenkins/workspace/aten/src/ATen/native/cudnn/RNN.cpp:968.)
2022-06-14T20:52:18.7057262Z   self.dropout, self.training, self.bidirectional, self.batch_first)
2022-06-14T20:52:18.7073569Z /opt/conda/lib/python3.7/site-packages/torch/nn/modules/rnn.py:770: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters(). (Triggered internally at  /var/lib/jenkins/workspace/aten/src/ATen/native/cudnn/RNN.cpp:968.)
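The repeated `UserWarning` above fires when a cuDNN RNN's weight tensors no longer occupy one flat buffer, forcing a compaction copy on every forward call until `flatten_parameters()` is invoked. A small sketch of the underlying condition, under assumed names (this is not the torch internals):

```python
def is_single_chunk(offsets_and_sizes):
    """True if buffers laid end-to-end form one contiguous chunk.

    offsets_and_sizes: list of (offset, size) pairs in allocation order.
    When this fails for RNN weights, cuDNN must re-compact them per call,
    which is what the flatten_parameters() warning is about.
    """
    expected = offsets_and_sizes[0][0]
    for offset, size in offsets_and_sizes:
        if offset != expected:  # gap or overlap between buffers
            return False
        expected = offset + size
    return True
```

Calling `flatten_parameters()` re-copies the weights into a single contiguous allocation so the check passes and the warning stops.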

See GitHub Actions build pull / linux-focal-py3.7-clang7-asan / test (default, 2, 5, linux.2xlarge) (5/17)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-06-14T20:10:43.2005233Z SUMMARY: Undefined.../jenkins/workspace/aten/src/ATen/Utils.cpp:20:3 in
2022-06-14T20:10:43.1985980Z     #10 0x5566ebc96c81 in run_mod /home/builder/tkoch/workspace/python_1648536129212/work/Python/pythonrun.c:1037
2022-06-14T20:10:43.1986422Z     #11 0x5566ebca1c69 in PyRun_StringFlags /home/builder/tkoch/workspace/python_1648536129212/work/Python/pythonrun.c:961
2022-06-14T20:10:43.1987292Z     #12 0x5566ebca1ccb in PyRun_SimpleStringFlags /home/builder/tkoch/workspace/python_1648536129212/work/Python/pythonrun.c:455
2022-06-14T20:10:43.1988085Z     #13 0x5566ebca1dc8 in pymain_run_command /home/builder/tkoch/workspace/python_1648536129212/work/Modules/main.c:420
2022-06-14T20:10:43.1988771Z     #14 0x5566ebca1dc8 in pymain_run_python /home/builder/tkoch/workspace/python_1648536129212/work/Modules/main.c:2907
2022-06-14T20:10:43.1989448Z     #15 0x5566ebca1dc8 in pymain_main /home/builder/tkoch/workspace/python_1648536129212/work/Modules/main.c:3460
2022-06-14T20:10:43.1989945Z     #16 0x5566ebca218b in _Py_UnixMain /home/builder/tkoch/workspace/python_1648536129212/work/Modules/main.c:3495
2022-06-14T20:10:43.2003364Z     #17 0x7f3a06821082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082)
2022-06-14T20:10:43.2003927Z     #18 0x5566ebc47039 in _start (/opt/conda/bin/python3.7+0x1d8039)
2022-06-14T20:10:43.2004189Z 
2022-06-14T20:10:43.2005233Z SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /var/lib/jenkins/workspace/aten/src/ATen/Utils.cpp:20:3 in 
2022-06-14T20:10:43.2265125Z + retcode=1
2022-06-14T20:10:43.2265493Z + set -e
2022-06-14T20:10:43.2265770Z + return 1
2022-06-14T20:10:43.2268267Z + [[ linux-focal-py3.7-clang7-asan-default == *-NO_AVX-* ]]
2022-06-14T20:10:43.2268782Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X ]]
2022-06-14T20:10:43.2269403Z + [[ linux-focal-py3.7-clang7-asan-default == *-NO_AVX2-* ]]
2022-06-14T20:10:43.2269917Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]]
2022-06-14T20:10:43.2270543Z + [[ linux-focal-py3.7-clang7-asan-default == *-NO_AVX512-* ]]
2022-06-14T20:10:43.2271057Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X\5\1\2 ]]
2022-06-14T20:10:43.2273757Z + [[ linux-focal-py3.7-clang7-asan-default == *tbb* ]]

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 2, 4, linux.4xlarge.nvidia.gpu) (6/17)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-06-14T21:22:57.4147776Z ##[error]The operation was canceled.
2022-06-14T21:21:09.9824472Z   test_comprehensive_special_chebyshev_polynomial_u_cuda_float32 (__main__.TestDecompCUDA) ... ok (2.808s)
2022-06-14T21:21:12.6464912Z   test_comprehensive_special_chebyshev_polynomial_u_cuda_float64 (__main__.TestDecompCUDA) ... ok (2.664s)
2022-06-14T21:21:12.6471324Z   test_comprehensive_special_chebyshev_polynomial_u_cuda_int16 (__main__.TestDecompCUDA) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s)
2022-06-14T21:21:12.6500385Z   test_comprehensive_special_chebyshev_polynomial_u_cuda_int32 (__main__.TestDecompCUDA) ... ok (0.003s)
2022-06-14T21:21:12.6528268Z   test_comprehensive_special_chebyshev_polynomial_u_cuda_int64 (__main__.TestDecompCUDA) ... ok (0.003s)
2022-06-14T21:21:12.6556581Z   test_comprehensive_special_chebyshev_polynomial_u_cuda_int8 (__main__.TestDecompCUDA) ... ok (0.003s)
2022-06-14T21:21:12.6584924Z   test_comprehensive_special_chebyshev_polynomial_u_cuda_uint8 (__main__.TestDecompCUDA) ... ok (0.003s)
2022-06-14T21:21:12.6591013Z   test_comprehensive_special_chebyshev_polynomial_v_cuda_bool (__main__.TestDecompCUDA) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s)
2022-06-14T21:21:15.2210178Z   test_comprehensive_special_chebyshev_polynomial_v_cuda_float32 (__main__.TestDecompCUDA) ... ok (2.562s)
2022-06-14T21:21:17.7864286Z   test_comprehensive_special_chebyshev_polynomial_v_cuda_float64 (__main__.TestDecompCUDA) ... ok (2.565s)
2022-06-14T21:22:57.4147776Z ##[error]The operation was canceled.
2022-06-14T21:22:57.4182319Z Prepare all required actions
2022-06-14T21:22:57.4182751Z Getting action download info
2022-06-14T21:22:57.5766494Z ##[group]Run ./.github/actions/get-workflow-job-id
2022-06-14T21:22:57.5766844Z with:
2022-06-14T21:22:57.5767353Z   github-token: ***
2022-06-14T21:22:57.5767619Z env:
2022-06-14T21:22:57.5767901Z   GIT_DEFAULT_BRANCH: master
2022-06-14T21:22:57.5768216Z   GPU_FLAG: --gpus all
2022-06-14T21:22:57.5768487Z ##[endgroup]
2022-06-14T21:22:57.5807021Z ##[group]Run nick-fields/retry@71062288b76e2b6214ebde0e673ce0de1755740a

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (distributed, 1, 2, linux.8xlarge.nvidia.gpu) (7/17)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-06-14T21:22:57.3760975Z ##[error]The operation was canceled.
2022-06-14T21:22:52.8346383Z 
2022-06-14T21:22:52.8346523Z Generating XML reports...
2022-06-14T21:22:52.8390157Z Generated XML report: test-reports/dist-gloo/distributed.test_distributed_spawn/TEST-TestDistBackendWithSpawn-20220614212249.xml
2022-06-14T21:22:54.3798419Z Test results will be stored in test-reports/dist-gloo/distributed.test_distributed_spawn
2022-06-14T21:22:54.3814749Z 
2022-06-14T21:22:54.3815203Z Running tests...
2022-06-14T21:22:54.3815724Z ----------------------------------------------------------------------
2022-06-14T21:22:56.0948115Z   test_all_gather_complex (__main__.TestDistBackendWithSpawn) ... INFO:numba.cuda.cudadrv.driver:init
2022-06-14T21:22:56.1367180Z INFO:torch.testing._internal.common_distributed:Started process 0 with pid 48236
2022-06-14T21:22:56.1490359Z INFO:torch.testing._internal.common_distributed:Started process 1 with pid 48237
2022-06-14T21:22:57.3760975Z ##[error]The operation was canceled.
2022-06-14T21:22:57.3795431Z Prepare all required actions
2022-06-14T21:22:57.3795898Z Getting action download info
2022-06-14T21:22:57.5780766Z ##[group]Run ./.github/actions/get-workflow-job-id
2022-06-14T21:22:57.5781084Z with:
2022-06-14T21:22:57.5781541Z   github-token: ***
2022-06-14T21:22:57.5781768Z env:
2022-06-14T21:22:57.5782017Z   GIT_DEFAULT_BRANCH: master
2022-06-14T21:22:57.5782290Z   GPU_FLAG: --gpus all
2022-06-14T21:22:57.5782526Z ##[endgroup]
2022-06-14T21:22:57.5816653Z ##[group]Run nick-fields/retry@71062288b76e2b6214ebde0e673ce0de1755740a

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 1, 4, linux.4xlarge.nvidia.gpu) (8/17)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-06-14T21:22:57.3736704Z ##[error]The operation was canceled.
2022-06-14T21:04:39.7413451Z   test_non_standard_bool_values_special_bessel_j1_cuda_bool (__main__.TestCommonCUDA) ... ok (0.004s)
2022-06-14T21:04:39.7457145Z   test_non_standard_bool_values_special_bessel_y0_cuda_bool (__main__.TestCommonCUDA) ... ok (0.004s)
2022-06-14T21:04:39.7500630Z   test_non_standard_bool_values_special_bessel_y1_cuda_bool (__main__.TestCommonCUDA) ... ok (0.004s)
2022-06-14T21:09:54.5813941Z   test_non_standard_bool_values_special_chebyshev_polynomial_t_cuda_bool (__main__.TestCommonCUDA) ... ok (314.831s)
2022-06-14T21:09:54.5828401Z   test_non_standard_bool_values_special_chebyshev_polynomial_u_cuda_bool (__main__.TestCommonCUDA) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.002s)
2022-06-14T21:15:15.6564350Z   test_non_standard_bool_values_special_chebyshev_polynomial_v_cuda_bool (__main__.TestCommonCUDA) ... ok (321.073s)
2022-06-14T21:20:37.1204345Z   test_non_standard_bool_values_special_chebyshev_polynomial_w_cuda_bool (__main__.TestCommonCUDA) ... ok (321.464s)
2022-06-14T21:20:37.1240981Z   test_non_standard_bool_values_special_entr_cuda_bool (__main__.TestCommonCUDA) ... ok (0.004s)
2022-06-14T21:20:37.1286864Z   test_non_standard_bool_values_special_erfcx_cuda_bool (__main__.TestCommonCUDA) ... ok (0.004s)
2022-06-14T21:20:37.1304697Z   test_non_standard_bool_values_special_hermite_polynomial_h_cuda_bool (__main__.TestCommonCUDA) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.002s)
2022-06-14T21:22:57.3736704Z ##[error]The operation was canceled.
2022-06-14T21:22:57.3790742Z Prepare all required actions
2022-06-14T21:22:57.3791540Z Getting action download info
2022-06-14T21:22:57.5958046Z ##[group]Run ./.github/actions/get-workflow-job-id
2022-06-14T21:22:57.5958354Z with:
2022-06-14T21:22:57.5958781Z   github-token: ***
2022-06-14T21:22:57.5959010Z env:
2022-06-14T21:22:57.5959261Z   GIT_DEFAULT_BRANCH: master
2022-06-14T21:22:57.5959534Z   GPU_FLAG: --gpus all
2022-06-14T21:22:57.5959768Z ##[endgroup]
2022-06-14T21:22:57.5994026Z ##[group]Run nick-fields/retry@71062288b76e2b6214ebde0e673ce0de1755740a

See GitHub Actions build pull / pytorch-xla-linux-bionic-py3.7-clang8 / test (xla, 1, 1, linux.2xlarge) (9/17)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-06-14T21:22:57.2909191Z ##[error]The operation was canceled.
2022-06-14T21:22:50.9174821Z �[1A�[K
2022-06-14T21:22:50.9175226Z �[1A�[K�[32m[10,617 / 11,034]�[0m 8 actions running
2022-06-14T21:22:50.9175870Z     Compiling tensorflow/core/kernels/conv_ops.cc; 123s processwrapper-sandbox
2022-06-14T21:22:50.9176499Z     Compiling .../optimizers/meta_optimizer.cc; 14s processwrapper-sandbox
2022-06-14T21:22:50.9177138Z     Compiling .../tensorflow/ir/tf_remaining_ops.cc; 8s processwrapper-sandbox
2022-06-14T21:22:50.9177775Z     Compiling .../mlir/tensorflow/ir/tfrt_ops.cc; 7s processwrapper-sandbox
2022-06-14T21:22:50.9178450Z     Compiling .../tensorflow/ir/tf_op_interfaces.cc; 6s processwrapper-sandbox
2022-06-14T21:22:50.9179152Z     Compiling .../tensorflow/ir/tf_verifiers.cc; 4s processwrapper-sandbox
2022-06-14T21:22:50.9179829Z     .../mlir/tensorflow:tf_ops_layout_helper; 3s processwrapper-sandbox
2022-06-14T21:22:50.9180510Z     Compiling .../mlir/tensorflow/ir/tf_ops_a_m.cc; 0s processwrapper-sandbox
2022-06-14T21:22:57.2909191Z ##[error]The operation was canceled.
2022-06-14T21:22:57.3010715Z Prepare all required actions
2022-06-14T21:22:57.3011247Z Getting action download info
2022-06-14T21:22:57.5074551Z ##[group]Run ./.github/actions/get-workflow-job-id
2022-06-14T21:22:57.5074929Z with:
2022-06-14T21:22:57.5075599Z   github-token: ***
2022-06-14T21:22:57.5075897Z env:
2022-06-14T21:22:57.5076219Z   GIT_DEFAULT_BRANCH: master
2022-06-14T21:22:57.5076559Z ##[endgroup]
2022-06-14T21:22:57.5120900Z ##[group]Run nick-fields/retry@71062288b76e2b6214ebde0e673ce0de1755740a
2022-06-14T21:22:57.5121321Z with:

See GitHub Actions build pull / linux-docs / build-docs (cpp) (10/17)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-06-14T21:22:57.3679266Z ##[error]The operation was canceled.
2022-06-14T21:03:04.8164493Z writing output... [ 42%] api/function_namespaceat_1a7f91ee477399ced797e380d7c1c92327 .. api/function_namespaceat_1a8644c91525d0722622ec4d8dd4518053
2022-06-14T21:04:53.6639982Z writing output... [ 44%] api/function_namespaceat_1a868365503b5baa36722dad3c6ded707a .. api/function_namespaceat_1a8e860f99bd88e7f1f244c528ac8cbd8e
2022-06-14T21:06:47.7653814Z writing output... [ 46%] api/function_namespaceat_1a8e9e8ac5428003a5f23d1408506f29ac .. api/function_namespaceat_1a946071a2debcb352ece0948f1acafccb
2022-06-14T21:08:39.9417477Z writing output... [ 47%] api/function_namespaceat_1a94628f42921931ace5ca078fbc15f785 .. api/function_namespaceat_1a99ca074314c7ebb589fff2b6ef02286f
2022-06-14T21:10:21.2464484Z writing output... [ 49%] api/function_namespaceat_1a99dca1b668717d30e91d0363fa0cf0c0 .. api/function_namespaceat_1aa10cf0aaff07f0a75dbfa51f168f563d
2022-06-14T21:12:17.8585917Z writing output... [ 50%] api/function_namespaceat_1aa10f3e047625f95c27d40f4c95b2f14a .. api/function_namespaceat_1aa8e296313773d050efc2ea2b27e19012
2022-06-14T21:14:26.9959089Z writing output... [ 52%] api/function_namespaceat_1aa8e5360816047f22aef9d01f72f3a7ce .. api/function_namespaceat_1ab0cdf1b4ef0c1ec715602a6e9c8e2cd7
2022-06-14T21:16:40.5669427Z writing output... [ 53%] api/function_namespaceat_1ab0e57cb6b48fc02c4f5620f52d182a62 .. api/function_namespaceat_1ab867c0e8f788ac41be63f8ebf1b0ea3d
2022-06-14T21:18:51.1279606Z writing output... [ 55%] api/function_namespaceat_1ab8becdf34b96dde0ed01108f4a2c08a6 .. api/function_namespaceat_1abecf5751dea9ae164af07727a8dfee26
2022-06-14T21:21:17.1169000Z writing output... [ 57%] api/function_namespaceat_1abef5c86e464a95711ae32887c1125b2d .. api/function_namespaceat_1ac54a051600345298cd9cbadd513451e4
2022-06-14T21:22:57.3679266Z ##[error]The operation was canceled.
2022-06-14T21:22:57.3706428Z Prepare all required actions
2022-06-14T21:22:57.3733611Z ##[group]Run ./.github/actions/chown-workspace
2022-06-14T21:22:57.3733943Z ##[endgroup]
2022-06-14T21:22:57.3755186Z ##[group]Run docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
2022-06-14T21:22:57.3755741Z docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
2022-06-14T21:22:57.3784326Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2022-06-14T21:22:57.3784681Z env:
2022-06-14T21:22:57.3785065Z   ALPINE_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine
2022-06-14T21:22:57.3785444Z ##[endgroup]
2022-06-14T21:22:57.4150844Z Unable to find image '308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine:latest' locally

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 3, 4, linux.4xlarge.nvidia.gpu) (11/17)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-06-14T21:22:57.8127378Z ##[error]The operation was canceled.
2022-06-14T21:22:46.1475238Z   test_dispatch_meta_special_entr_cuda_int8 (__main__.TestMetaCUDA) ... ok (0.004s)
2022-06-14T21:22:46.1519004Z   test_dispatch_meta_special_entr_cuda_uint8 (__main__.TestMetaCUDA) ... ok (0.004s)
2022-06-14T21:22:47.7458002Z   test_dispatch_meta_special_erfcx_cuda_bool (__main__.TestMetaCUDA) ... ok (1.594s)
2022-06-14T21:22:50.1111191Z   test_dispatch_meta_special_erfcx_cuda_float32 (__main__.TestMetaCUDA) ... ok (2.365s)
2022-06-14T21:22:52.3509435Z   test_dispatch_meta_special_erfcx_cuda_float64 (__main__.TestMetaCUDA) ... ok (2.240s)
2022-06-14T21:22:52.3566802Z   test_dispatch_meta_special_erfcx_cuda_int16 (__main__.TestMetaCUDA) ... ok (0.006s)
2022-06-14T21:22:52.3624143Z   test_dispatch_meta_special_erfcx_cuda_int32 (__main__.TestMetaCUDA) ... ok (0.006s)
2022-06-14T21:22:52.3681915Z   test_dispatch_meta_special_erfcx_cuda_int64 (__main__.TestMetaCUDA) ... ok (0.006s)
2022-06-14T21:22:52.3739089Z   test_dispatch_meta_special_erfcx_cuda_int8 (__main__.TestMetaCUDA) ... ok (0.006s)
2022-06-14T21:22:52.3796496Z   test_dispatch_meta_special_erfcx_cuda_uint8 (__main__.TestMetaCUDA) ... ok (0.006s)
2022-06-14T21:22:57.8127378Z ##[error]The operation was canceled.
2022-06-14T21:22:57.8181169Z Prepare all required actions
2022-06-14T21:22:57.8181959Z Getting action download info
2022-06-14T21:22:57.9650653Z ##[group]Run ./.github/actions/get-workflow-job-id
2022-06-14T21:22:57.9650964Z with:
2022-06-14T21:22:57.9651410Z   github-token: ***
2022-06-14T21:22:57.9651640Z env:
2022-06-14T21:22:57.9651889Z   GIT_DEFAULT_BRANCH: master
2022-06-14T21:22:57.9652162Z   GPU_FLAG: --gpus all
2022-06-14T21:22:57.9652396Z ##[endgroup]
2022-06-14T21:22:57.9688072Z ##[group]Run nick-fields/retry@71062288b76e2b6214ebde0e673ce0de1755740a

See GitHub Actions build pull / win-vs2019-cpu-py3 / test (default, 1, 2, windows.4xlarge) (12/17)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-06-14T21:22:45.0965186Z RuntimeError: test_binary_ufuncs failed!
2022-06-14T21:22:44.6294897Z   File "test_binary_ufuncs.py", line 158, in _test_reference_numerics
2022-06-14T21:22:44.6295123Z     actual = op(l, r)
2022-06-14T21:22:44.6295496Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_methods_invocations.py", line 952, in __call__
2022-06-14T21:22:44.6295838Z     return self.op(*args, **kwargs)
2022-06-14T21:22:44.6296017Z KeyboardInterrupt
2022-06-14T21:22:45.0963895Z Traceback (most recent call last):
2022-06-14T21:22:45.0964307Z   File "run_test.py", line 946, in <module>
2022-06-14T21:22:45.0964509Z     main()
2022-06-14T21:22:45.0964723Z   File "run_test.py", line 924, in main
2022-06-14T21:22:45.0964963Z     raise RuntimeError(err_message)
2022-06-14T21:22:45.0965186Z RuntimeError: test_binary_ufuncs failed!
2022-06-14T21:22:45.3827403Z Terminate batch job (Y/N)? 
2022-06-14T21:22:45.3828186Z 
2022-06-14T21:22:45.3828463Z (base) C:\actions-runner\_work\pytorch\pytorch\test>if ERRORLEVEL 1 goto fail 
2022-06-14T21:22:45.3830936Z 
2022-06-14T21:22:45.3831406Z (base) C:\actions-runner\_work\pytorch\pytorch\test>exit /b 1 
2022-06-14T21:22:45.3857221Z + cleanup
2022-06-14T21:22:45.3857425Z + retcode=1
2022-06-14T21:22:45.3857585Z + set +x
2022-06-14T21:22:45.6031869Z ##[error]The operation was canceled.
2022-06-14T21:22:45.6624353Z Prepare all required actions

See GitHub Actions build pull / win-vs2019-cpu-py3 / test (default, 2, 2, windows.4xlarge) (13/17)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-06-14T21:22:45.6810640Z RuntimeError: test_unary_ufuncs failed!
2022-06-14T21:22:45.1495766Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_methods_invocations.py", line 2751, in generate_elementwise_unary_small_value_tensors
2022-06-14T21:22:45.1496223Z     for sample in generate_elementwise_binary_small_value_tensors(
2022-06-14T21:22:45.1496745Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_methods_invocations.py", line 2158, in generate_elementwise_binary_small_value_tensors
2022-06-14T21:22:45.1497227Z     rhs = torch.tensor(r_vals, device=device, dtype=dtype, requires_grad=requires_grad)
2022-06-14T21:22:45.1497479Z KeyboardInterrupt
2022-06-14T21:22:45.6809376Z Traceback (most recent call last):
2022-06-14T21:22:45.6809757Z   File "run_test.py", line 946, in <module>
2022-06-14T21:22:45.6809951Z     main()
2022-06-14T21:22:45.6810178Z   File "run_test.py", line 924, in main
2022-06-14T21:22:45.6810423Z     raise RuntimeError(err_message)
2022-06-14T21:22:45.6810640Z RuntimeError: test_unary_ufuncs failed!
2022-06-14T21:22:45.9860705Z Terminate batch job (Y/N)? 
2022-06-14T21:22:45.9861499Z 
2022-06-14T21:22:45.9862003Z (base) C:\actions-runner\_work\pytorch\pytorch\test>if ERRORLEVEL 1 goto fail 
2022-06-14T21:22:45.9864300Z 
2022-06-14T21:22:45.9864879Z (base) C:\actions-runner\_work\pytorch\pytorch\test>exit /b 1 
2022-06-14T21:22:45.9894500Z + cleanup
2022-06-14T21:22:45.9894996Z + retcode=1
2022-06-14T21:22:45.9895176Z + set +x
2022-06-14T21:22:46.1993472Z ##[error]The operation was canceled.
2022-06-14T21:22:46.2527753Z Prepare all required actions

See GitHub Actions build pull / linux-bionic-py3.7-clang9 / test (default, 2, 2, linux.2xlarge) (14/17)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-06-14T20:50:22.3576660Z AssertionError: Th...eturned by torch._overrides.get_ignored_functions.
2022-06-14T20:50:21.3821779Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestStandaloneCPPJIT-20220614205006.xml
2022-06-14T20:50:21.3825669Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestBottleneck-20220614205006.xml
2022-06-14T20:50:21.3828536Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestCollectEnv-20220614205006.xml
2022-06-14T20:50:21.7238838Z Running test_overrides ... [2022-06-14 20:50:21.723438]
2022-06-14T20:50:21.7239410Z Executing ['/opt/conda/bin/python', 'test_overrides.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2022-06-14 20:50:21.723527]
2022-06-14T20:50:22.3574622Z Traceback (most recent call last):
2022-06-14T20:50:22.3575061Z   File "test_overrides.py", line 357, in <module>
2022-06-14T20:50:22.3575428Z     generate_tensor_like_torch_implementations()
2022-06-14T20:50:22.3575753Z   File "test_overrides.py", line 345, in generate_tensor_like_torch_implementations
2022-06-14T20:50:22.3576084Z     assert len(untested_funcs) == 0, msg.format(pprint.pformat(untested_funcs))
2022-06-14T20:50:22.3576660Z AssertionError: The following functions are not tested for __torch_function__ support, please ensure there is an entry in the dict returned by torch._overrides.get_testing_overrides for this function or if a __torch_function__ override does not make sense, add an entry to the tuple returned by torch._overrides.get_ignored_functions.
2022-06-14T20:50:22.3577154Z 
2022-06-14T20:50:22.3577408Z ["<module 'torch.nn.functional' from "
2022-06-14T20:50:22.3577777Z  "'/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py'>.dropout1d"]
2022-06-14T20:50:22.5017052Z Traceback (most recent call last):
2022-06-14T20:50:22.5017315Z   File "test/run_test.py", line 946, in <module>
2022-06-14T20:50:22.5019248Z     main()
2022-06-14T20:50:22.5019473Z   File "test/run_test.py", line 924, in main
2022-06-14T20:50:22.5021187Z     raise RuntimeError(err_message)
2022-06-14T20:50:22.5021497Z RuntimeError: test_overrides failed!
2022-06-14T20:50:22.8304564Z 

See GitHub Actions build pull / linux-focal-py3.7-clang7-asan / test (default, 4, 5, linux.2xlarge) (15/17)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-06-14T21:05:41.2611697Z AssertionError: Th...eturned by torch._overrides.get_ignored_functions.
2022-06-14T21:05:38.8151424Z 
2022-06-14T21:05:38.8151510Z Generating XML reports...
2022-06-14T21:05:38.8184244Z Generated XML report: test-reports/python-unittest/test_openmp/TEST-TestOpenMP_ParallelFor-20220614210538.xml
2022-06-14T21:05:39.1544173Z Running test_overrides ... [2022-06-14 21:05:39.154018]
2022-06-14T21:05:39.1544671Z Executing ['/opt/conda/bin/python', 'test_overrides.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2022-06-14 21:05:39.154111]
2022-06-14T21:05:41.2607213Z Traceback (most recent call last):
2022-06-14T21:05:41.2607649Z   File "test_overrides.py", line 357, in <module>
2022-06-14T21:05:41.2608108Z     generate_tensor_like_torch_implementations()
2022-06-14T21:05:41.2608482Z   File "test_overrides.py", line 345, in generate_tensor_like_torch_implementations
2022-06-14T21:05:41.2610905Z     assert len(untested_funcs) == 0, msg.format(pprint.pformat(untested_funcs))
2022-06-14T21:05:41.2611697Z AssertionError: The following functions are not tested for __torch_function__ support, please ensure there is an entry in the dict returned by torch._overrides.get_testing_overrides for this function or if a __torch_function__ override does not make sense, add an entry to the tuple returned by torch._overrides.get_ignored_functions.
2022-06-14T21:05:41.2612121Z 
2022-06-14T21:05:41.2612371Z ["<module 'torch.nn.functional' from "
2022-06-14T21:05:41.2612739Z  "'/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py'>.dropout1d"]
2022-06-14T21:05:41.5804258Z Traceback (most recent call last):
2022-06-14T21:05:41.5804573Z   File "test/run_test.py", line 946, in <module>
2022-06-14T21:05:41.5827850Z     main()
2022-06-14T21:05:41.5828081Z   File "test/run_test.py", line 924, in main
2022-06-14T21:05:41.5831408Z     raise RuntimeError(err_message)
2022-06-14T21:05:41.5831952Z RuntimeError: test_overrides failed!
2022-06-14T21:05:42.1025996Z 

See GitHub Actions build pull / linux-bionic-py3.7-clang9 / test (crossref, 2, 2, linux.2xlarge) (16/17)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-06-14T21:18:13.1325048Z AssertionError: Th...eturned by torch._overrides.get_ignored_functions.
2022-06-14T21:18:12.2849280Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestStandaloneCPPJIT-20220614211810.xml
2022-06-14T21:18:12.2853184Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestBottleneck-20220614211810.xml
2022-06-14T21:18:12.2855791Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestCollectEnv-20220614211810.xml
2022-06-14T21:18:12.5330312Z Running test_overrides ... [2022-06-14 21:18:12.532622]
2022-06-14T21:18:12.5330867Z Executing ['/opt/conda/bin/python', 'test_overrides.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2022-06-14 21:18:12.532711]
2022-06-14T21:18:13.1321901Z Traceback (most recent call last):
2022-06-14T21:18:13.1322337Z   File "test_overrides.py", line 357, in <module>
2022-06-14T21:18:13.1322804Z     generate_tensor_like_torch_implementations()
2022-06-14T21:18:13.1323349Z   File "test_overrides.py", line 345, in generate_tensor_like_torch_implementations
2022-06-14T21:18:13.1323987Z     assert len(untested_funcs) == 0, msg.format(pprint.pformat(untested_funcs))
2022-06-14T21:18:13.1325048Z AssertionError: The following functions are not tested for __torch_function__ support, please ensure there is an entry in the dict returned by torch._overrides.get_testing_overrides for this function or if a __torch_function__ override does not make sense, add an entry to the tuple returned by torch._overrides.get_ignored_functions.
2022-06-14T21:18:13.1325856Z 
2022-06-14T21:18:13.1326220Z ["<module 'torch.nn.functional' from "
2022-06-14T21:18:13.1326878Z  "'/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py'>.dropout1d"]
2022-06-14T21:18:13.2429876Z Traceback (most recent call last):
2022-06-14T21:18:13.2430152Z   File "test/run_test.py", line 946, in <module>
2022-06-14T21:18:13.2431984Z     main()
2022-06-14T21:18:13.2432184Z   File "test/run_test.py", line 924, in main
2022-06-14T21:18:13.2434136Z     raise RuntimeError(err_message)
2022-06-14T21:18:13.2434585Z RuntimeError: test_overrides failed!
2022-06-14T21:18:13.5155464Z 

See GitHub Actions build pull / linux-focal-py3.7-gcc7 / test (default, 1, 2, linux.2xlarge) (17/17)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-06-14T21:11:10.6268895Z AssertionError: Th...eturned by torch._overrides.get_ignored_functions.
2022-06-14T21:11:09.7532314Z 
2022-06-14T21:11:09.7532402Z Generating XML reports...
2022-06-14T21:11:09.7623101Z Generated XML report: test-reports/python-unittest/test_sort_and_select/TEST-TestSortAndSelectCPU-20220614211107.xml
2022-06-14T21:11:09.9794531Z Running test_overrides ... [2022-06-14 21:11:09.979034]
2022-06-14T21:11:09.9795039Z Executing ['/opt/conda/bin/python', 'test_overrides.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2022-06-14 21:11:09.979116]
2022-06-14T21:11:10.6266769Z Traceback (most recent call last):
2022-06-14T21:11:10.6267167Z   File "test_overrides.py", line 357, in <module>
2022-06-14T21:11:10.6267569Z     generate_tensor_like_torch_implementations()
2022-06-14T21:11:10.6267935Z   File "test_overrides.py", line 345, in generate_tensor_like_torch_implementations
2022-06-14T21:11:10.6268273Z     assert len(untested_funcs) == 0, msg.format(pprint.pformat(untested_funcs))
2022-06-14T21:11:10.6268895Z AssertionError: The following functions are not tested for __torch_function__ support, please ensure there is an entry in the dict returned by torch._overrides.get_testing_overrides for this function or if a __torch_function__ override does not make sense, add an entry to the tuple returned by torch._overrides.get_ignored_functions.
2022-06-14T21:11:10.6269470Z 
2022-06-14T21:11:10.6269695Z ["<module 'torch.nn.functional' from "
2022-06-14T21:11:10.6270068Z  "'/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py'>.dropout1d"]
2022-06-14T21:11:10.7541642Z Traceback (most recent call last):
2022-06-14T21:11:10.7541901Z   File "test/run_test.py", line 946, in <module>
2022-06-14T21:11:10.7543996Z     main()
2022-06-14T21:11:10.7544194Z   File "test/run_test.py", line 924, in main
2022-06-14T21:11:10.7547057Z     raise RuntimeError(err_message)
2022-06-14T21:11:10.7547435Z RuntimeError: test_overrides failed!
2022-06-14T21:11:11.0563439Z 
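The `test_overrides` failures above all point at the same fix: the new `dropout1d` functional needs an entry in the dict returned by `torch.overrides.get_testing_overrides`. A minimal pure-Python sketch of the registry pattern that check enforces (the `dropout1d` stub and `check_coverage` helper are illustrative stand-ins, not PyTorch code):

```python
import inspect

# Stand-in for torch.nn.functional.dropout1d -- signature copied from the PR,
# body omitted; this is an illustration, not the real implementation.
def dropout1d(input, p=0.5, training=True, inplace=False):
    return input

# The testing-overrides dict maps each public function to a dummy lambda whose
# signature mirrors the real one; the coverage test walks this dict, so a new
# functional like dropout1d fails until an entry is added.
testing_overrides = {
    dropout1d: lambda input, p=0.5, training=True, inplace=False: -1,
}

def check_coverage(overrides, funcs):
    """Mimic the test_overrides check: every function must have an entry
    whose signature matches the real function's signature."""
    missing = [f for f in funcs if f not in overrides]
    assert not missing, f"functions not tested for __torch_function__: {missing}"
    for fn, stub in overrides.items():
        assert inspect.signature(fn) == inspect.signature(stub)

check_coverage(testing_overrides, [dropout1d])
```

Functions for which a `__torch_function__` override makes no sense go in `get_ignored_functions` instead, as the assertion message says.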

🕵️‍♀️ 1 failure not recognized by patterns:

The following CI failures may be due to changes from the PR
Job: GitHub Actions pull / linux-focal-py3.7-clang7-asan / test (default, 5, 5, linux.2xlarge) · Step: Unknown · Action: 🔁 rerun

This comment was automatically generated by Dr. CI (expand for details).


Fixes #6442

[ghstack-poisoned]
Comment thread torch/nn/functional.py
return _VF.alpha_dropout_(input, p, training) if inplace else _VF.alpha_dropout(input, p, training)


def dropout1d(input: Tensor, p: float = 0.5, training: bool = True, inplace: bool = False) -> Tensor:
Collaborator

should there be a feature_dropout_nd function that factors out the common code between 1d/2d/3d? Doesn't need to happen now

Contributor Author

flagged for later - this should definitely happen
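The refactor suggested above could look roughly like the following pure-Python sketch (`feature_dropout_nd` is the reviewer's proposed name; the nested-list "tensor" and helper functions are illustrative stand-ins for the real ATen implementation):

```python
import random

def feature_dropout_nd(x, p=0.5, training=True):
    """Hypothetical shared helper (not the actual PyTorch code): zero whole
    channels of a nested-list "tensor" shaped (N, C, *spatial) and scale the
    survivors by 1/(1-p). The same loop serves 1d/2d/3d because a channel is
    dropped or kept wholesale, regardless of its spatial rank."""
    if not training or p == 0.0:
        return x

    def zeros(c):  # rebuild a channel with every element zeroed
        return [zeros(e) for e in c] if isinstance(c, list) else 0.0

    def scale(c, s):  # rebuild a channel with every element scaled by s
        return [scale(e, s) for e in c] if isinstance(c, list) else c * s

    out = []
    for sample in x:                         # batch dimension N
        kept = []
        for channel in sample:               # each channel drops independently
            if random.random() < p:
                kept.append(zeros(channel))
            else:
                kept.append(scale(channel, 1.0 / (1.0 - p)))
        out.append(kept)
    return out
```

Dropout1d/2d/3d would then differ only in the input-shape validation they perform before delegating here.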



class Dropout1d(_DropoutNd):
r"""Randomly zero out entire channels (a channel is a 1D feature map,
Collaborator

same, docstrings can probably be shared between 1d/2d/3d

Contributor Author

also flagged
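One way to share the docstrings, sketched in plain Python (the template text and `_make_dropout_class` helper are hypothetical, not the PR's actual approach):

```python
# Hypothetical sketch: one docstring template shared across the Dropout
# modules, parameterized only by the feature-map dimensionality.
_shared_doc = (
    "Randomly zero out entire channels (a channel is a {dim}D feature map).\n\n"
    "Each channel is zeroed out independently on every forward call with\n"
    "probability ``p``, and surviving channels are scaled by ``1/(1-p)``\n"
    "during training."
)

class _DropoutNd:
    """Minimal stand-in for torch.nn.modules.dropout._DropoutNd."""
    def __init__(self, p=0.5, inplace=False):
        if not 0.0 <= p <= 1.0:
            raise ValueError(f"dropout probability must be in [0, 1], got {p}")
        self.p = p
        self.inplace = inplace

def _make_dropout_class(dim):
    # Build DropoutNd subclasses dynamically and fill in the shared docstring.
    cls = type(f"Dropout{dim}d", (_DropoutNd,), {})
    cls.__doc__ = _shared_doc.format(dim=dim)
    return cls

Dropout1d = _make_dropout_class(1)
Dropout2d = _make_dropout_class(2)
Dropout3d = _make_dropout_class(3)
```

The same template could also feed the functional docstrings, keeping the 1d/2d/3d wording from drifting apart.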

Collaborator

@albanD albanD left a comment

👍

@jbschlosser
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

@pytorchbot successfully started a merge job. Check the current status here

@github-actions
Contributor

Hey @jbschlosser.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@jbschlosser jbschlosser added release notes: nn release notes category topic: improvements topic category labels Jun 15, 2022
This was referenced Jun 15, 2022
atalman pushed a commit that referenced this pull request Jun 17, 2022
Pull Request resolved: #79545

Approved by: https://github.com/ngimel, https://github.com/albanD

Co-authored-by: Joel Benjamin Schlosser <jbschlosser@fb.com>
@facebook-github-bot facebook-github-bot deleted the gh/jbschlosser/39/head branch June 19, 2022 14:16
facebook-github-bot pushed a commit that referenced this pull request Jun 20, 2022
Summary:
Pull Request resolved: #79545

Approved by: https://github.com/ngimel, https://github.com/albanD

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/2d73c8e6e0378655f732a48ec50ae1908ce0a4a4

Reviewed By: malfet

Differential Revision: D37208288

Pulled By: jbschlosser

fbshipit-source-id: df5e95c9a305e50abc0fbd73dbfc63fbeb173f8d
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 25, 2026

5 participants