
DISABLED test_caching_pinned_memory_multi_gpu (__main__.TestCuda) #70875

Description

Platforms: rocm

https://ci.pytorch.org/jenkins/job/pytorch-builds/job/pytorch-linux-bionic-rocm4.5-py3.7-test2/19//console

13:43:03 ======================================================================
13:43:03 FAIL [0.031s]: test_caching_pinned_memory_multi_gpu (__main__.TestCuda)
13:43:03 ----------------------------------------------------------------------
13:43:03 Traceback (most recent call last):
13:43:03   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1566, in wrapper
13:43:03     method(*args, **kwargs)
13:43:03   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1566, in wrapper
13:43:03     method(*args, **kwargs)
13:43:03   File "test_cuda.py", line 1394, in test_caching_pinned_memory_multi_gpu
13:43:03     self.assertNotEqual(t.data_ptr(), ptr, msg='allocation re-used too soon')
13:43:03   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2177, in assertNotEqual
13:43:03     self.assertEqual(x, y, msg, atol=atol, rtol=rtol, **kwargs)
13:43:03 AssertionError: AssertionError not raised : allocation re-used too soon
13:43:03 
13:43:03 ----------------------------------------------------------------------
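
For context, the failing assertion checks that PyTorch's caching host ("pinned") memory allocator does not recycle a pinned block while a non-blocking copy from it to a second GPU may still be in flight. The sketch below illustrates the pattern the test exercises; it is not the verbatim test body (torch.cuda._sleep is an internal helper, the cycle count is arbitrary, and at least two CUDA/ROCm devices are required):

    import torch

    # Pin a host buffer and record its address before freeing it.
    t = torch.ones(1).pin_memory()
    ptr = t.data_ptr()
    gpu_tensor1 = torch.zeros(1, device="cuda:1")

    with torch.cuda.device(1):
        # Internal helper: busy-wait on device 1's stream so the
        # upcoming copy stays queued (cycle count is illustrative).
        torch.cuda._sleep(50_000_000)
        # Asynchronous host-to-device copy that reads from the pinned block.
        gpu_tensor1.copy_(t, non_blocking=True)

    # Free the pinned tensor and immediately request a new pinned block.
    del t
    t = torch.full((1,), 2.0).pin_memory()

    # The allocator should wait on the event recorded on device 1 before
    # handing the old block back; if it does not, the new tensor aliases
    # memory that the pending copy may still be reading.
    assert t.data_ptr() != ptr, "allocation re-used too soon"

In the failing run the two pointers compare equal, i.e. the old block was handed out again before the test expected it to be. Since the test relies on a _sleep-based delay to keep the copy in flight, this may be a platform timing flake on ROCm rather than an allocator bug, which would be consistent with the test being skipped as flaky.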

cc @ngimel @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @KyleCZH

Labels

- module: cuda (Related to torch.cuda, and CUDA support in general)
- module: rocm (AMD GPU support for PyTorch)
- skipped (Denotes a (flaky) test currently skipped in CI.)
