
Fix a typo: add a missing space #69630

Closed

nkaretnikov wants to merge 1 commit into gh/nkaretnikov/3/base from gh/nkaretnikov/3/head

Conversation

@pytorch-probot

pytorch-probot bot commented Dec 8, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/6fa129a04875432f9918f6dcd3bf6e8bdb2fe9f0/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflows | Labels (bold enabled) | Status
--- | --- | ---
**Triggered Workflows** | |
linux-bionic-cuda11.5-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux | ✅ triggered
linux-bionic-py3.6-clang9 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla | ✅ triggered
linux-docs | ciflow/all, ciflow/cpu, ciflow/default, ciflow/docs, ciflow/linux | ✅ triggered
linux-vulkan-bionic-py3.6-clang9 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan | ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux | ✅ triggered
linux-xenial-py3-clang5-mobile-build | ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile | ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static | ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile | ✅ triggered
linux-xenial-py3.6-clang7-asan | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers | ✅ triggered
linux-xenial-py3.6-clang7-onnx | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx | ✅ triggered
linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered
linux-xenial-py3.6-gcc7 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test | ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single | ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit | ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered
win-vs2019-cpu-py3 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/win | ✅ triggered
win-vs2019-cuda11.3-py3 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/win | ✅ triggered
**Skipped Workflows** | |
caffe2-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped
docker-builds | ciflow/all | 🚫 skipped
ios-12-5-1-arm64 | ciflow/all, ciflow/ios, ciflow/macos | 🚫 skipped
ios-12-5-1-arm64-coreml | ciflow/all, ciflow/ios, ciflow/macos | 🚫 skipped
ios-12-5-1-arm64-custom-ops | ciflow/all, ciflow/ios, ciflow/macos | 🚫 skipped
ios-12-5-1-arm64-full-jit | ciflow/all, ciflow/ios, ciflow/macos | 🚫 skipped
ios-12-5-1-arm64-metal | ciflow/all, ciflow/ios, ciflow/macos | 🚫 skipped
ios-12-5-1-x86-64 | ciflow/all, ciflow/ios, ciflow/macos | 🚫 skipped
ios-12-5-1-x86-64-coreml | ciflow/all, ciflow/ios, ciflow/macos | 🚫 skipped
ios-12-5-1-x86-64-full-jit | ciflow/all, ciflow/ios, ciflow/macos | 🚫 skipped
libtorch-linux-bionic-cuda11.5-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow | 🚫 skipped
linux-docs-push | ciflow/all, ciflow/cpu, ciflow/linux, ciflow/scheduled | 🚫 skipped
macos-10-15-py3-arm64 | ciflow/all, ciflow/macos | 🚫 skipped
macos-10-15-py3-lite-interpreter-x86-64 | ciflow/all, ciflow/macos | 🚫 skipped
macos-11-py3-x86-64 | ciflow/all, ciflow/macos | 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled | 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck | 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7-debug | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled | 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 | ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win | 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot
Contributor

facebook-github-bot commented Dec 8, 2021

💊 CI failures summary and remediations

As of commit 6fa129a (more details on the Dr. CI page):


  • 13/13 failures introduced in this PR

🕵️ 13 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-xenial-py3.6-gcc5.4 / test (default, 1, 2, linux.2xlarge) (1/13)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-08T21:29:08.2026651Z AssertionError: can only test a child process
2021-12-08T21:29:08.1869342Z     assert self._parent_pid == os.getpid(), 'can only test a child process'
2021-12-08T21:29:08.1870351Z AssertionError: can only test a child process
2021-12-08T21:29:08.2014874Z Exception ignored in: <bound method _MultiProcessingDataLoaderIter.__del__ of <torch.utils.data.dataloader._MultiProcessingDataLoaderIter object at 0x7f40db502cf8>>
2021-12-08T21:29:08.2016815Z Traceback (most recent call last):
2021-12-08T21:29:08.2018961Z   File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1328, in __del__
2021-12-08T21:29:08.2020138Z     self._shutdown_workers()
2021-12-08T21:29:08.2021557Z   File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1320, in _shutdown_workers
2021-12-08T21:29:08.2023158Z     if w.is_alive():
2021-12-08T21:29:08.2023765Z   File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 134, in is_alive
2021-12-08T21:29:08.2026045Z     assert self._parent_pid == os.getpid(), 'can only test a child process'
2021-12-08T21:29:08.2026651Z AssertionError: can only test a child process
2021-12-08T21:29:08.8295011Z ok (0.658s)
2021-12-08T21:29:08.8319406Z   test_iterabledataset_len (__main__.TestDataLoaderPersistentWorkers) ... ok (0.002s)
2021-12-08T21:29:10.4273742Z   test_large_sampler_indices (__main__.TestDataLoaderPersistentWorkers) ... ok (1.595s)
2021-12-08T21:29:10.4345276Z   test_len (__main__.TestDataLoaderPersistentWorkers) ... ok (0.007s)
2021-12-08T21:29:11.9128134Z   test_multiple_dataloaders (__main__.TestDataLoaderPersistentWorkers) ... ok (1.478s)
2021-12-08T21:29:15.1399844Z   test_multiprocessing_contexts (__main__.TestDataLoaderPersistentWorkers) ... ok (3.227s)
2021-12-08T21:29:15.9383263Z   test_no_segfault (__main__.TestDataLoaderPersistentWorkers) ... ok (0.798s)
2021-12-08T21:29:15.9418006Z   test_numpy (__main__.TestDataLoaderPersistentWorkers) ... ok (0.004s)
2021-12-08T21:29:15.9447307Z   test_numpy_gen_state (__main__.TestDataLoaderPersistentWorkers) ... ok (0.003s)
2021-12-08T21:29:15.9471908Z   test_numpy_scalars (__main__.TestDataLoaderPersistentWorkers) ... ok (0.002s)

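For context on the `can only test a child process` assertion above: CPython's `multiprocessing.Process.is_alive()` may only be called from the process that started the worker. Below is a minimal sketch that reproduces the message (my own illustration, not PyTorch or test-suite code; it assumes a platform with the `fork` start method, so not Windows):

```python
import multiprocessing as mp
import time

worker = None  # set in the parent before the prober is forked

def probe():
    # Runs in a second (forked) child. `worker` was started by the parent,
    # and multiprocessing only lets the starting process query its children,
    # so this trips the same assert seen in the log.
    try:
        worker.is_alive()
    except AssertionError as exc:
        print("probe got:", exc)  # -> can only test a child process

if __name__ == "__main__":
    mp.set_start_method("fork")  # fork keeps `worker` visible inside probe()
    worker = mp.Process(target=time.sleep, args=(1,))
    worker.start()
    prober = mp.Process(target=probe)
    prober.start()
    prober.join()
    worker.join()
```

In the DataLoader traceback above, the same check presumably fires implicitly: `_shutdown_workers()` calls `w.is_alive()` while running in a process that did not start that worker, e.g. when the iterator is garbage-collected in a child process.
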
See GitHub Actions build win-vs2019-cuda11.3-py3 / test (force_on_cpu, 1, 1, windows.4xlarge) (2/13)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-08T23:25:08.8680268Z FAIL [0.016s]: tes...d_error_cpu (__main__.TestRandomTensorCreationCPU)
2021-12-08T23:25:08.8674493Z     raise rte
2021-12-08T23:25:08.8675243Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_device_type.py", line 376, in instantiated_test
2021-12-08T23:25:08.8675976Z     result = test(self, **param_kwargs)
2021-12-08T23:25:08.8676529Z   File "test_tensor_creation_ops.py", line 3327, in test_normal
2021-12-08T23:25:08.8677159Z     helper(self, device, dtype, lambda x: x, lambda t: t, lambda mean: mean)
2021-12-08T23:25:08.8677857Z   File "test_tensor_creation_ops.py", line 3267, in helper
2021-12-08T23:25:08.8678411Z     out = torch.normal(mean=torch.empty((0, 2)), std=torch.empty((0, 1)))
2021-12-08T23:25:08.8679035Z RuntimeError: inconsistent tensor, output size ([0, 2]) is not the same as input size ([0, 1])
2021-12-08T23:25:08.8679434Z 
2021-12-08T23:25:08.8679713Z ======================================================================
2021-12-08T23:25:08.8680268Z FAIL [0.016s]: test_normal_std_error_cpu (__main__.TestRandomTensorCreationCPU)
2021-12-08T23:25:08.8680936Z ----------------------------------------------------------------------
2021-12-08T23:25:08.8681457Z RuntimeError: normal expects std >= 0.0, but found std -1
2021-12-08T23:25:08.8681746Z 
2021-12-08T23:25:08.8682187Z During handling of the above exception, another exception occurred:
2021-12-08T23:25:08.8682551Z 
2021-12-08T23:25:08.8682852Z Traceback (most recent call last):
2021-12-08T23:25:08.8683674Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_device_type.py", line 376, in instantiated_test
2021-12-08T23:25:08.8684405Z     result = test(self, **param_kwargs)
2021-12-08T23:25:08.8684949Z   File "test_tensor_creation_ops.py", line 3336, in test_normal_std_error
2021-12-08T23:25:08.8685424Z     torch.normal(input, -1, (10,))

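The two failures in this excerpt are straightforward to reproduce directly. Below is a minimal sketch (my own illustration, not the test suite's code) of the `torch.normal` calls involved; the exact error wording reflects the PyTorch build under test and may differ in other versions:

```python
import torch

# Tensor mean/std of mismatched sizes: on the build under test, pairing a
# (0, 2) mean with a (0, 1) std is rejected with "inconsistent tensor ...".
try:
    torch.normal(mean=torch.empty((0, 2)), std=torch.empty((0, 1)))
except RuntimeError as exc:
    print(exc)

# Scalar mean/std with an explicit output shape: a negative std is rejected
# with "normal expects std >= 0.0, but found std -1".
try:
    torch.normal(2.0, -1, (10,))
except RuntimeError as exc:
    print(exc)

# Matching shapes and a non-negative std sample normally:
out = torch.normal(mean=torch.zeros(3, 2), std=torch.ones(3, 2))
print(out.shape)  # torch.Size([3, 2])
```
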
See GitHub Actions build linux-xenial-py3.6-gcc5.4 / test (default, 2, 2, linux.2xlarge) (3/13)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-08T21:28:53.6704155Z FAIL [0.006s]: tes...d_error_cpu (__main__.TestRandomTensorCreationCPU)
2021-12-08T21:28:53.6698853Z     raise rte
2021-12-08T21:28:53.6699602Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
2021-12-08T21:28:53.6700262Z     result = test(self, **param_kwargs)
2021-12-08T21:28:53.6700708Z   File "test_tensor_creation_ops.py", line 3327, in test_normal
2021-12-08T21:28:53.6701255Z     helper(self, device, dtype, lambda x: x, lambda t: t, lambda mean: mean)
2021-12-08T21:28:53.6701782Z   File "test_tensor_creation_ops.py", line 3267, in helper
2021-12-08T21:28:53.6702321Z     out = torch.normal(mean=torch.empty((0, 2)), std=torch.empty((0, 1)))
2021-12-08T21:28:53.6702957Z RuntimeError: inconsistent tensor, output size ([0, 2]) is not the same as input size ([0, 1])
2021-12-08T21:28:53.6703332Z 
2021-12-08T21:28:53.6703605Z ======================================================================
2021-12-08T21:28:53.6704155Z FAIL [0.006s]: test_normal_std_error_cpu (__main__.TestRandomTensorCreationCPU)
2021-12-08T21:28:53.6704909Z ----------------------------------------------------------------------
2021-12-08T21:28:53.6705525Z RuntimeError: normal expects std >= 0.0, but found std -1
2021-12-08T21:28:53.6705830Z 
2021-12-08T21:28:53.6706255Z During handling of the above exception, another exception occurred:
2021-12-08T21:28:53.6706617Z 
2021-12-08T21:28:53.6706932Z Traceback (most recent call last):
2021-12-08T21:28:53.6707717Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
2021-12-08T21:28:53.6708378Z     result = test(self, **param_kwargs)
2021-12-08T21:28:53.6708846Z   File "test_tensor_creation_ops.py", line 3336, in test_normal_std_error
2021-12-08T21:28:53.6709406Z     torch.normal(input, -1, (10,))

See GitHub Actions build linux-bionic-py3.6-clang9 / test (default, 1, 2, linux.2xlarge) (4/13)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-08T21:33:11.8956219Z AssertionError: can only test a child process
2021-12-08T21:33:11.8734066Z     assert self._parent_pid == os.getpid(), 'can only test a child process'
2021-12-08T21:33:11.8734976Z AssertionError: can only test a child process
2021-12-08T21:33:11.8945233Z Exception ignored in: <bound method _MultiProcessingDataLoaderIter.__del__ of <torch.utils.data.dataloader._MultiProcessingDataLoaderIter object at 0x7fcb538d1978>>
2021-12-08T21:33:11.8946394Z Traceback (most recent call last):
2021-12-08T21:33:11.8947492Z   File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1328, in __del__
2021-12-08T21:33:11.8948627Z     self._shutdown_workers()
2021-12-08T21:33:11.8950150Z   File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1320, in _shutdown_workers
2021-12-08T21:33:11.8952537Z     if w.is_alive():
2021-12-08T21:33:11.8953334Z   File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 134, in is_alive
2021-12-08T21:33:11.8955302Z     assert self._parent_pid == os.getpid(), 'can only test a child process'
2021-12-08T21:33:11.8956219Z AssertionError: can only test a child process
2021-12-08T21:33:12.5610156Z ok (0.702s)
2021-12-08T21:33:12.5631711Z   test_iterabledataset_len (__main__.TestDataLoaderPersistentWorkers) ... ok (0.002s)
2021-12-08T21:33:14.1569921Z   test_large_sampler_indices (__main__.TestDataLoaderPersistentWorkers) ... ok (1.594s)
2021-12-08T21:33:14.1644340Z   test_len (__main__.TestDataLoaderPersistentWorkers) ... ok (0.007s)
2021-12-08T21:33:15.7276760Z   test_multiple_dataloaders (__main__.TestDataLoaderPersistentWorkers) ... ok (1.563s)
2021-12-08T21:33:19.0140490Z   test_multiprocessing_contexts (__main__.TestDataLoaderPersistentWorkers) ... ok (3.286s)
2021-12-08T21:33:19.8426648Z   test_no_segfault (__main__.TestDataLoaderPersistentWorkers) ... ok (0.828s)
2021-12-08T21:33:19.8460348Z   test_numpy (__main__.TestDataLoaderPersistentWorkers) ... ok (0.003s)
2021-12-08T21:33:19.8489572Z   test_numpy_gen_state (__main__.TestDataLoaderPersistentWorkers) ... ok (0.003s)
2021-12-08T21:33:19.8514127Z   test_numpy_scalars (__main__.TestDataLoaderPersistentWorkers) ... ok (0.002s)

See GitHub Actions build linux-bionic-py3.6-clang9 / test (xla, 1, 1, linux.2xlarge) (5/13)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-08T21:55:12.3167587Z RuntimeError: Connection reset by peer
2021-12-08T21:55:12.3155404Z   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.11-py3.6-linux-x86_64.egg/torch_xla/core/xla_model.py", line 77, in __init__
2021-12-08T21:55:12.3156191Z     self.replica_devcount, self.world_size)
2021-12-08T21:55:12.3157195Z   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.11-py3.6-linux-x86_64.egg/torch_xla/core/xla_model.py", line 112, in _make_interhost_group
2021-12-08T21:55:12.3157968Z     return _get_torch_dist_group(ranks), ranks
2021-12-08T21:55:12.3158916Z   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.11-py3.6-linux-x86_64.egg/torch_xla/core/xla_model.py", line 92, in _get_torch_dist_group
2021-12-08T21:55:12.3159659Z     pg = dist.new_group(ranks=ranks)
2021-12-08T21:55:12.3160468Z   File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 2964, in new_group
2021-12-08T21:55:12.3164250Z     timeout=timeout,
2021-12-08T21:55:12.3165386Z   File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 701, in _new_process_group_helper
2021-12-08T21:55:12.3166629Z     pg = ProcessGroupGloo(prefix_store, rank, world_size, timeout=timeout)
2021-12-08T21:55:12.3167587Z RuntimeError: Connection reset by peer
2021-12-08T21:55:12.3294462Z Traceback (most recent call last):
2021-12-08T21:55:12.3295343Z   File "/var/lib/jenkins/workspace/xla/test/test_mp_sync_batch_norm.py", line 146, in <module>
2021-12-08T21:55:12.3295919Z     xmp.spawn(_mp_fn, args=())
2021-12-08T21:55:12.3297108Z   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.11-py3.6-linux-x86_64.egg/torch_xla/distributed/xla_multiprocessing.py", line 394, in spawn
2021-12-08T21:55:12.3297922Z     start_method=start_method)
2021-12-08T21:55:12.3298732Z   File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
2021-12-08T21:55:12.3299409Z     while not context.join():
2021-12-08T21:55:12.3300162Z   File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 144, in join
2021-12-08T21:55:12.3300772Z     exit_code=exitcode
2021-12-08T21:55:12.3301954Z torch.multiprocessing.spawn.ProcessExitedException: process 2 terminated with exit code 17

See GitHub Actions build linux-bionic-py3.6-clang9 / test (noarch, 1, 1, linux.2xlarge) (6/13)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-08T22:02:42.6746728Z FAIL [0.006s]: tes...error_meta (__main__.TestRandomTensorCreationMETA)
2021-12-08T22:02:42.6741403Z During handling of the above exception, another exception occurred:
2021-12-08T22:02:42.6741781Z 
2021-12-08T22:02:42.6742102Z Traceback (most recent call last):
2021-12-08T22:02:42.6742916Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
2021-12-08T22:02:42.6743591Z     result = test(self, **param_kwargs)
2021-12-08T22:02:42.6744070Z   File "test_tensor_creation_ops.py", line 3336, in test_normal_std_error
2021-12-08T22:02:42.6744628Z     torch.normal(input, -1, (10,))
2021-12-08T22:02:42.6745321Z AssertionError: "normal_ expects std >= 0.0" does not match "normal expects std >= 0.0, but found std -1"
2021-12-08T22:02:42.6745810Z 
2021-12-08T22:02:42.6746089Z ======================================================================
2021-12-08T22:02:42.6746728Z FAIL [0.006s]: test_normal_std_error_meta (__main__.TestRandomTensorCreationMETA)
2021-12-08T22:02:42.6747522Z ----------------------------------------------------------------------
2021-12-08T22:02:42.6748143Z RuntimeError: normal expects std >= 0.0, but found std -1
2021-12-08T22:02:42.6748454Z 
2021-12-08T22:02:42.6748888Z During handling of the above exception, another exception occurred:
2021-12-08T22:02:42.6749261Z 
2021-12-08T22:02:42.6749584Z Traceback (most recent call last):
2021-12-08T22:02:42.6750334Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1470, in wrapper
2021-12-08T22:02:42.6750925Z     method(*args, **kwargs)
2021-12-08T22:02:42.6751694Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
2021-12-08T22:02:42.6752373Z     result = test(self, **param_kwargs)

See GitHub Actions build linux-bionic-cuda11.5-py3.6-gcc7 / test (default, 2, 2, linux.4xlarge.nvidia.gpu) (7/13)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-08T22:02:42.8379754Z AssertionError: can only test a child process
2021-12-08T22:02:42.8305012Z     assert self._parent_pid == os.getpid(), 'can only test a child process'
2021-12-08T22:02:42.8306326Z AssertionError: can only test a child process
2021-12-08T22:02:42.8365610Z Exception ignored in: <bound method _MultiProcessingDataLoaderIter.__del__ of <torch.utils.data.dataloader._MultiProcessingDataLoaderIter object at 0x7fe69f341b00>>
2021-12-08T22:02:42.8367107Z Traceback (most recent call last):
2021-12-08T22:02:42.8368251Z   File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1328, in __del__
2021-12-08T22:02:42.8371637Z     self._shutdown_workers()
2021-12-08T22:02:42.8372751Z   File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1320, in _shutdown_workers
2021-12-08T22:02:42.8376985Z     if w.is_alive():
2021-12-08T22:02:42.8377747Z   File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 134, in is_alive
2021-12-08T22:02:42.8378981Z     assert self._parent_pid == os.getpid(), 'can only test a child process'
2021-12-08T22:02:42.8379754Z AssertionError: can only test a child process
2021-12-08T22:02:44.0238709Z ok (1.266s)
2021-12-08T22:02:44.0270263Z   test_iterabledataset_len (__main__.TestDataLoaderPersistentWorkers) ... ok (0.003s)
2021-12-08T22:02:46.4569320Z   test_large_sampler_indices (__main__.TestDataLoaderPersistentWorkers) ... ok (2.430s)
2021-12-08T22:02:46.4674758Z   test_len (__main__.TestDataLoaderPersistentWorkers) ... ok (0.011s)
2021-12-08T22:02:48.9640426Z   test_multiple_dataloaders (__main__.TestDataLoaderPersistentWorkers) ... ok (2.496s)
2021-12-08T22:02:52.4409692Z   test_multiprocessing_contexts (__main__.TestDataLoaderPersistentWorkers) ... [W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
2021-12-08T22:02:52.4411760Z [W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
2021-12-08T22:02:52.4468165Z [W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
2021-12-08T22:02:55.8912472Z [W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
2021-12-08T22:02:55.8915580Z [W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]

See GitHub Actions build Lint / quick-checks (8/13)

Step: "Ensure correct trailing newlines" (full log | diagnosis details | 🔁 rerun)

2021-12-08T20:19:15.7901230Z python: can't open..._launches.py': [Errno 2] No such file or directory
2021-12-08T20:19:15.7610530Z ##[group]Run set -eux
2021-12-08T20:19:15.7610978Z set -eux
2021-12-08T20:19:15.7611676Z python torch/testing/_check_kernel_launches.py |& tee "${GITHUB_WORKSPACE}"/cuda_kernel_launch_checks.txt
2021-12-08T20:19:15.7647234Z shell: /bin/bash -e {0}
2021-12-08T20:19:15.7647590Z env:
2021-12-08T20:19:15.7648124Z   pythonLocation: /opt/hostedtoolcache/Python/3.10.0/x64
2021-12-08T20:19:15.7648852Z   LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.10.0/x64/lib
2021-12-08T20:19:15.7650318Z ##[endgroup]
2021-12-08T20:19:15.7730477Z + python torch/testing/_check_kernel_launches.py
2021-12-08T20:19:15.7734232Z + tee /home/runner/work/pytorch/pytorch/cuda_kernel_launch_checks.txt
2021-12-08T20:19:15.7901230Z python: can't open file '/home/runner/work/pytorch/pytorch/torch/testing/_check_kernel_launches.py': [Errno 2] No such file or directory
2021-12-08T20:19:15.7967012Z ##[group]Run (! git --no-pager grep -I -no $'#include <cub/' --  ./aten  ':(exclude)aten/src/ATen/cuda/cub*.cuh' || (echo "The above files have direct cub include; please include ATen/cuda/cub.cuh instead and wrap your cub calls in at::native namespace if necessary"; false))
2021-12-08T20:19:15.7968885Z (! git --no-pager grep -I -no $'#include <cub/' --  ./aten  ':(exclude)aten/src/ATen/cuda/cub*.cuh' || (echo "The above files have direct cub include; please include ATen/cuda/cub.cuh instead and wrap your cub calls in at::native namespace if necessary"; false))
2021-12-08T20:19:15.8005244Z shell: /bin/bash -e {0}
2021-12-08T20:19:15.8005598Z env:
2021-12-08T20:19:15.8006138Z   pythonLocation: /opt/hostedtoolcache/Python/3.10.0/x64
2021-12-08T20:19:15.8006842Z   LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.10.0/x64/lib
2021-12-08T20:19:15.8007368Z ##[endgroup]
2021-12-08T20:19:15.8327263Z ##[group]Run (! git --no-pager grep -I -no $'cudaStreamSynchronize' --  ./aten ./c10 ':(exclude)aten/src/ATen/test' ':(exclude)c10/cuda/CUDAFunctions.h' || (echo "The above files call raw cuda APIs directly; please use at::cuda wrappers instead"; false))
2021-12-08T20:19:15.8329224Z (! git --no-pager grep -I -no $'cudaStreamSynchronize' --  ./aten ./c10 ':(exclude)aten/src/ATen/test' ':(exclude)c10/cuda/CUDAFunctions.h' || (echo "The above files call raw cuda APIs directly; please use at::cuda wrappers instead"; false))
2021-12-08T20:19:15.8366639Z shell: /bin/bash -e {0}

See GitHub Actions build linux-xenial-py3.6-clang7-asan / test (default, 2, 2, linux.2xlarge) (9/13)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-08T23:07:34.9266765Z FAIL [0.011s]: tes...d_error_cpu (__main__.TestRandomTensorCreationCPU)
2021-12-08T23:07:34.9261304Z     raise rte
2021-12-08T23:07:34.9262054Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
2021-12-08T23:07:34.9262704Z     result = test(self, **param_kwargs)
2021-12-08T23:07:34.9263173Z   File "test_tensor_creation_ops.py", line 3327, in test_normal
2021-12-08T23:07:34.9263804Z     helper(self, device, dtype, lambda x: x, lambda t: t, lambda mean: mean)
2021-12-08T23:07:34.9264356Z   File "test_tensor_creation_ops.py", line 3267, in helper
2021-12-08T23:07:34.9264908Z     out = torch.normal(mean=torch.empty((0, 2)), std=torch.empty((0, 1)))
2021-12-08T23:07:34.9265541Z RuntimeError: inconsistent tensor, output size ([0, 2]) is not the same as input size ([0, 1])
2021-12-08T23:07:34.9265936Z 
2021-12-08T23:07:34.9266211Z ======================================================================
2021-12-08T23:07:34.9266765Z FAIL [0.011s]: test_normal_std_error_cpu (__main__.TestRandomTensorCreationCPU)
2021-12-08T23:07:34.9267563Z ----------------------------------------------------------------------
2021-12-08T23:07:34.9268204Z RuntimeError: normal expects std >= 0.0, but found std -1
2021-12-08T23:07:34.9268505Z 
2021-12-08T23:07:34.9268946Z During handling of the above exception, another exception occurred:
2021-12-08T23:07:34.9269316Z 
2021-12-08T23:07:34.9269624Z Traceback (most recent call last):
2021-12-08T23:07:34.9270525Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
2021-12-08T23:07:34.9271201Z     result = test(self, **param_kwargs)
2021-12-08T23:07:34.9271682Z   File "test_tensor_creation_ops.py", line 3336, in test_normal_std_error
2021-12-08T23:07:34.9272252Z     torch.normal(input, -1, (10,))

See GitHub Actions build linux-bionic-py3.6-clang9 / test (default, 2, 2, linux.2xlarge) (10/13)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-08T21:35:18.4147586Z FAIL [0.006s]: tes...d_error_cpu (__main__.TestRandomTensorCreationCPU)
2021-12-08T21:35:18.4142079Z     raise rte
2021-12-08T21:35:18.4142851Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
2021-12-08T21:35:18.4143531Z     result = test(self, **param_kwargs)
2021-12-08T21:35:18.4143994Z   File "test_tensor_creation_ops.py", line 3327, in test_normal
2021-12-08T21:35:18.4144564Z     helper(self, device, dtype, lambda x: x, lambda t: t, lambda mean: mean)
2021-12-08T21:35:18.4145107Z   File "test_tensor_creation_ops.py", line 3267, in helper
2021-12-08T21:35:18.4145663Z     out = torch.normal(mean=torch.empty((0, 2)), std=torch.empty((0, 1)))
2021-12-08T21:35:18.4146326Z RuntimeError: inconsistent tensor, output size ([0, 2]) is not the same as input size ([0, 1])
2021-12-08T21:35:18.4146730Z 
2021-12-08T21:35:18.4147008Z ======================================================================
2021-12-08T21:35:18.4147586Z FAIL [0.006s]: test_normal_std_error_cpu (__main__.TestRandomTensorCreationCPU)
2021-12-08T21:35:18.4148355Z ----------------------------------------------------------------------
2021-12-08T21:35:18.4148989Z RuntimeError: normal expects std >= 0.0, but found std -1
2021-12-08T21:35:18.4149310Z 
2021-12-08T21:35:18.4149747Z During handling of the above exception, another exception occurred:
2021-12-08T21:35:18.4150125Z 
2021-12-08T21:35:18.4150443Z Traceback (most recent call last):
2021-12-08T21:35:18.4151260Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
2021-12-08T21:35:18.4151936Z     result = test(self, **param_kwargs)
2021-12-08T21:35:18.4152422Z   File "test_tensor_creation_ops.py", line 3336, in test_normal_std_error
2021-12-08T21:35:18.4152983Z     torch.normal(input, -1, (10,))

See GitHub Actions build linux-xenial-py3.6-gcc7 / test (default, 2, 2, linux.2xlarge) (11/13)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-08T21:31:18.4469982Z FAIL [0.006s]: tes...d_error_cpu (__main__.TestRandomTensorCreationCPU)
2021-12-08T21:31:18.4464566Z     raise rte
2021-12-08T21:31:18.4465318Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
2021-12-08T21:31:18.4465969Z     result = test(self, **param_kwargs)
2021-12-08T21:31:18.4466437Z   File "test_tensor_creation_ops.py", line 3327, in test_normal
2021-12-08T21:31:18.4466980Z     helper(self, device, dtype, lambda x: x, lambda t: t, lambda mean: mean)
2021-12-08T21:31:18.4467530Z   File "test_tensor_creation_ops.py", line 3267, in helper
2021-12-08T21:31:18.4468076Z     out = torch.normal(mean=torch.empty((0, 2)), std=torch.empty((0, 1)))
2021-12-08T21:31:18.4468711Z RuntimeError: inconsistent tensor, output size ([0, 2]) is not the same as input size ([0, 1])
2021-12-08T21:31:18.4469116Z 
2021-12-08T21:31:18.4469397Z ======================================================================
2021-12-08T21:31:18.4469982Z FAIL [0.006s]: test_normal_std_error_cpu (__main__.TestRandomTensorCreationCPU)
2021-12-08T21:31:18.4470751Z ----------------------------------------------------------------------
2021-12-08T21:31:18.4471356Z RuntimeError: normal expects std >= 0.0, but found std -1
2021-12-08T21:31:18.4471670Z 
2021-12-08T21:31:18.4472114Z During handling of the above exception, another exception occurred:
2021-12-08T21:31:18.4472484Z 
2021-12-08T21:31:18.4472792Z Traceback (most recent call last):
2021-12-08T21:31:18.4473612Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
2021-12-08T21:31:18.4474260Z     result = test(self, **param_kwargs)
2021-12-08T21:31:18.4474753Z   File "test_tensor_creation_ops.py", line 3336, in test_normal_std_error
2021-12-08T21:31:18.4475313Z     torch.normal(input, -1, (10,))

See GitHub Actions build linux-xenial-cuda11.3-py3.6-gcc7 / test (default, 1, 2, linux.4xlarge.nvidia.gpu) (12/13)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-08T22:24:52.5469594Z RuntimeError: test_torch failed!
2021-12-08T22:24:51.9924035Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestTorchDeviceTypeCUDA-20211208222317.xml
2021-12-08T22:24:51.9926904Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestVitalSignsCudaCUDA-20211208222317.xml
2021-12-08T22:24:52.3973936Z [TORCH_VITAL] Dataloader.enabled		 True
2021-12-08T22:24:52.3974838Z [TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
2021-12-08T22:24:52.3975484Z [TORCH_VITAL] CUDA.used		 true
2021-12-08T22:24:52.5461848Z Traceback (most recent call last):
2021-12-08T22:24:52.5462550Z   File "test/run_test.py", line 1058, in <module>
2021-12-08T22:24:52.5465491Z     main()
2021-12-08T22:24:52.5466007Z   File "test/run_test.py", line 1036, in main
2021-12-08T22:24:52.5468980Z     raise RuntimeError(err_message)
2021-12-08T22:24:52.5469594Z RuntimeError: test_torch failed!
2021-12-08T22:24:53.0224763Z + cleanup
2021-12-08T22:24:53.0225286Z + retcode=1
2021-12-08T22:24:53.0225655Z + set +x
2021-12-08T22:24:53.0276019Z ##[error]Process completed with exit code 1.
2021-12-08T22:24:53.0343377Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2021-12-08T22:24:53.0344427Z # Ensure the working directory gets chowned back to the current user
2021-12-08T22:24:53.0345233Z docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
2021-12-08T22:24:53.0358458Z shell: /usr/bin/bash -e {0}
2021-12-08T22:24:53.0358867Z env:
2021-12-08T22:24:53.0359563Z   BUILD_ENVIRONMENT: linux-xenial-cuda11.3-py3.6-gcc7

See GitHub Actions build win-vs2019-cpu-py3 / test (default, 2, 2, windows.4xlarge) (13/13)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-08T22:45:04.4829896Z RuntimeError: test_torch failed!
2021-12-08T22:45:04.2445263Z Generated XML report: test-reports\dist-gloo\test_torch\TEST-TestTorchDeviceTypeCPU-20211208224451.xml
2021-12-08T22:45:04.2446531Z Generated XML report: test-reports\dist-gloo\test_torch\TEST-TestVitalSignsCudaCPU-20211208224451.xml
2021-12-08T22:45:04.4597603Z [TORCH_VITAL] CUDA.used		 False
2021-12-08T22:45:04.4598068Z [TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
2021-12-08T22:45:04.4598596Z [TORCH_VITAL] Dataloader.enabled		 True
2021-12-08T22:45:04.4827459Z Traceback (most recent call last):
2021-12-08T22:45:04.4828326Z   File "run_test.py", line 1058, in <module>
2021-12-08T22:45:04.4828668Z     main()
2021-12-08T22:45:04.4829076Z   File "run_test.py", line 1036, in main
2021-12-08T22:45:04.4829480Z     raise RuntimeError(err_message)
2021-12-08T22:45:04.4829896Z RuntimeError: test_torch failed!
2021-12-08T22:45:04.7231797Z 
2021-12-08T22:45:04.7232536Z (base) C:\actions-runner\_work\pytorch\pytorch\test>popd
2021-12-08T22:45:04.7236565Z 
2021-12-08T22:45:04.7237012Z (base) C:\actions-runner\_work\pytorch\pytorch>if ERRORLEVEL 1 exit /b 1 
2021-12-08T22:45:04.7274310Z + cleanup
2021-12-08T22:45:04.7274668Z + retcode=1
2021-12-08T22:45:04.7274932Z + set +x
2021-12-08T22:45:04.7494913Z ##[error]Process completed with exit code 1.
2021-12-08T22:45:04.7795534Z ##[group]Run # -ir => recursive include all files in pattern
2021-12-08T22:45:04.7796194Z # -ir => recursive include all files in pattern

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@nkaretnikov
Collaborator Author

To avoid confusion, I will open a new stack to address issues related to BC (backward compatibility) and broadcasting.

@facebook-github-bot deleted the gh/nkaretnikov/3/head branch January 15, 2022 15:16