🐛 Bug
When I set pin_memory and persistent_workers to True and then start iterating the data loader a second time, it crashes:
Traceback (most recent call last):
File "C:\Users\denis\anaconda3\envs\fcdd\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\denis\anaconda3\envs\fcdd\lib\site-packages\torch\utils\data_utils\pin_memory.py", line 28, in _pin_memory_loop
idx, data = r
ValueError: not enough values to unpack (expected 2, got 0)
python-BaseException
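From the message, the pin-memory thread seems to unpack something other than an (idx, data) pair from its queue; as a minimal stand-alone illustration of that ValueError (not the actual loader code, just what "expected 2, got 0" means):

r = ()          # hypothetical stand-in for whatever the thread received
idx, data = r   # ValueError: not enough values to unpack (expected 2, got 0)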
To Reproduce
Steps to reproduce the behavior:
from torchvision.datasets import MNIST
from torchvision.transforms import Compose, ToTensor, Normalize
from torch.utils.data import DataLoader

ds = MNIST('test',
           download=True,
           transform=Compose([
               ToTensor(),
               Normalize((0.1307,), (0.3081,)),
           ]))
dl = DataLoader(ds,
                batch_size=4,
                shuffle=True,
                num_workers=4,
                pin_memory=True,
                persistent_workers=True)
print(next(iter(dl)))  # first iteration works
print(next(iter(dl)))  # second iteration crashes in the pin_memory thread
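For anyone reproducing on Windows: DataLoader workers are spawned as separate processes there, so when the snippet above is saved as a .py file it needs a main guard. The same reproduction as a standalone script:

from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision.transforms import Compose, Normalize, ToTensor


def main():
    ds = MNIST('test', download=True,
               transform=Compose([ToTensor(), Normalize((0.1307,), (0.3081,))]))
    dl = DataLoader(ds, batch_size=4, shuffle=True, num_workers=4,
                    pin_memory=True, persistent_workers=True)
    print(next(iter(dl)))  # first iterator works
    print(next(iter(dl)))  # second iterator raises the ValueError above


if __name__ == '__main__':
    # The guard is required on Windows because worker processes are
    # spawned and re-import this module.
    main()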
Expected behavior
The data loader should start a new iteration without crashing.
Environment
Collecting environment information...
PyTorch version: 1.7.0
Is debug build: True
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro
GCC version: (Rev5, Built by MSYS2 project) 5.3.0
Clang version: Could not collect
CMake version: version 3.17.2
Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 10.2.89
GPU models and configuration: GPU 0: GeForce 940MX
Nvidia driver version: 445.75
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\bin\cudnn64_7.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] torch==1.7.0
[pip3] torchaudio==0.7.0
[pip3] torchvision==0.8.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 h74a9793_1
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py38hb782905_0
[conda] mkl_fft 1.2.0 py38h45dec08_0
[conda] mkl_random 1.1.1 py38h47e9c7a_0
[conda] numpy 1.19.2 py38hadc3359_0
[conda] numpy-base 1.19.2 py38ha3acd2a_0
[conda] pytorch 1.7.0 py3.8_cuda102_cudnn7_0 pytorch
[conda] torchaudio 0.7.0 py38 pytorch
[conda] torchvision 0.8.1 py38_cu102 pytorch
cc @ssnl @VitalyFedyunin @ejguan @peterjc123 @maxluk @nbcsm @guyang3532 @gunandrose4u @mszhanyi @skyline75489