13:43:03 ======================================================================
13:43:03 FAIL [0.031s]: test_caching_pinned_memory_multi_gpu (__main__.TestCuda)
13:43:03 ----------------------------------------------------------------------
13:43:03 Traceback (most recent call last):
13:43:03 File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1566, in wrapper
13:43:03 method(*args, **kwargs)
13:43:03 File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1566, in wrapper
13:43:03 method(*args, **kwargs)
13:43:03 File "test_cuda.py", line 1394, in test_caching_pinned_memory_multi_gpu
13:43:03 self.assertNotEqual(t.data_ptr(), ptr, msg='allocation re-used too soon')
13:43:03 File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2177, in assertNotEqual
13:43:03 self.assertEqual(x, y, msg, atol=atol, rtol=rtol, **kwargs)
13:43:03 AssertionError: AssertionError not raised : allocation re-used too soon
13:43:03
13:43:03 ----------------------------------------------------------------------
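For context on what the failing assertion guards: the test frees a pinned host buffer while an async copy to a second GPU is still in flight, then asserts the caching allocator does not hand that buffer out again until the recorded CUDA event completes. The following is a toy model of that invariant only; it is a hypothetical sketch, not PyTorch's actual `CachingHostAllocator`, and all names in it are invented for illustration.

```python
# Toy model of a caching pinned-memory allocator. Hypothetical sketch:
# it mirrors the invariant checked by test_caching_pinned_memory_multi_gpu
# ("allocation re-used too soon"), not PyTorch's real implementation.

class CachingAllocator:
    def __init__(self):
        self.free_blocks = []   # cached (ptr, pending_event) pairs
        self.next_ptr = 0       # stand-in for fresh pinned allocations

    def alloc(self, event_query):
        # Reuse a cached block only if the event recorded at free time
        # has completed; otherwise allocate a fresh pointer.
        for i, (ptr, event) in enumerate(self.free_blocks):
            if event_query(event):
                self.free_blocks.pop(i)
                return ptr
        ptr = self.next_ptr
        self.next_ptr += 1
        return ptr

    def free(self, ptr, event):
        # Freeing records the last async use; the block stays cached
        # until that event is queried as complete.
        self.free_blocks.append((ptr, event))


completed = set()                       # events the "GPU" has finished
query = lambda ev: ev in completed

alloc = CachingAllocator()
p0 = alloc.alloc(query)                 # first pinned block
alloc.free(p0, "copy_to_gpu1")          # freed while the copy is in flight
p1 = alloc.alloc(query)                 # event pending -> must be a new block
assert p1 != p0, "allocation re-used too soon"

completed.add("copy_to_gpu1")           # the async copy finishes
p2 = alloc.alloc(query)                 # cached block may now be reused
assert p2 == p0
```

The reported ROCm failure is the first assertion's real-world counterpart: the pointer was reused while the copy should still have been outstanding.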
Platforms: rocm
https://ci.pytorch.org/jenkins/job/pytorch-builds/job/pytorch-linux-bionic-rocm4.5-py3.7-test2/19//console
cc @ngimel @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @KyleCZH