python : Error in cpuinfo: operating system is not supported in cpuinfo
At line:1 char:1
+ python test\test_torch.py *> _test_results.log
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (Error in cpuinf...rted in cpuinfo:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
 
...sssssssssssssssss.............C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since 
it had shape [1, 23, 12], which does not match the required output shape [1, 23, 0].This behavior is deprecated, and in a future PyTorch release outputs will not be 
resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at 
 C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since it had shape [1, 23, 12], which 
does not match the required output shape [1, 0, 12].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero 
elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since it had shape [1, 23, 12], which 
does not match the required output shape [1, 0, 0].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero 
elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since it had shape [1, 23, 12], which 
does not match the required output shape [0, 23, 12].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero 
elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since it had shape [1, 23, 12], which 
does not match the required output shape [0, 23, 0].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero 
elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since it had shape [1, 23, 12], which 
does not match the required output shape [0, 0, 12].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero 
elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since it had shape [1, 23, 12], which 
does not match the required output shape [0, 0, 0].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero 
elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which 
does not match the required output shape [10, 23, 0].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero 
elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which 
does not match the required output shape [10, 0, 12].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero 
elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which 
does not match the required output shape [10, 0, 0].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero 
elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which 
does not match the required output shape [0, 23, 12].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero 
elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which 
does not match the required output shape [0, 23, 0].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero 
elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which 
does not match the required output shape [0, 0, 12].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero 
elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:8178: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which 
does not match the required output shape [0, 0, 0].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero 
elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.bmm(b1, b2, out=res2)
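Every resize warning in the run above has the same cause: a pre-sized `out` tensor whose shape no longer matches the result. A minimal sketch of the fix the warning text itself recommends — the shapes are illustrative, taken from the warning messages, not from the actual test fixtures:

```python
import torch

# Batched matmul with an empty inner dimension, mirroring the warned shapes.
b1 = torch.randn(1, 23, 0)
b2 = torch.randn(1, 0, 12)
res2 = torch.randn(1, 23, 12)  # stale out tensor with a mismatched shape

# Per the warning: resize the out tensor in place to zero elements first,
# so bmm may legally reallocate it to the required output shape.
res2.resize_(0)
torch.bmm(b1, b2, out=res2)
```

With the zero-element `out`, the deprecation warning is not emitted and `res2` comes back as a `[1, 23, 12]` tensor of zeros, since the empty inner dimension sums no terms.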
[TORCH_VITAL] CUDA.used		 False
[TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
[TORCH_VITAL] CUDA.used		 False
[TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
[TORCH_VITAL] CUDA.used		 False
[TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
[TORCH_VITAL] Dataloader.enabled		 True
..s..............[W C:\gaborkertesz\repos\pytorch_win\pytorch\torch\csrc\autograd\python_variable.cpp:96] Warning: Deallocating Tensor that still has live PyObject 
references.  This probably happened because you took out a weak reference to Tensor and didn't call _fix_weakref() after dereferencing it.  Subsequent accesses to 
this tensor via the PyObject will now fail. (function concrete_decref_fn)
..............................................s............C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:6259: DeprecationWarning: `np.float` is a 
deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically 
wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  self.assertRaises(TypeError, lambda: torch.ones((np.float(3.), torch.tensor(4))))
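The NumPy deprecation above offers two replacements; a quick sketch of both:

```python
import numpy as np

# `np.float` was only an alias for the builtin `float`; it is deprecated
# since NumPy 1.20 (and removed in 1.24+). Use the builtin, or the explicit
# scalar type if the 64-bit NumPy scalar was specifically intended.
x = float(3.)        # drop-in replacement for np.float(3.)
y = np.float64(3.)   # explicit NumPy scalar type
```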
...................s....s.........................................................................ss........................sssssss...s...........s......C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:1739: UserWarning: cov(): degrees of freedom is <= 0 (Triggered internally at C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Correlation.cpp:99.)
  res = torch.corrcoef(x)
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:380: RuntimeWarning: Mean of empty slice.
  avg = a.mean(axis)
C:\gaborkertesz\venv\lib\site-packages\numpy\core\_methods.py:181: RuntimeWarning: invalid value encountered in true_divide
  ret = um.true_divide(
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:2683: RuntimeWarning: Degrees of freedom <= 0 for slice
  c = cov(x, y, rowvar, dtype=dtype)
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:2542: RuntimeWarning: divide by zero encountered in true_divide
  c *= np.true_divide(1, fact)
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:2542: RuntimeWarning: invalid value encountered in multiply
  c *= np.true_divide(1, fact)
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:2689: RuntimeWarning: invalid value encountered in true_divide
  return c / c
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:2691: RuntimeWarning: invalid value encountered in true_divide
  c /= stddev[:, None]
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:2692: RuntimeWarning: invalid value encountered in true_divide
  c /= stddev[None, :]
F..C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:1746: UserWarning: cov(): degrees of freedom is <= 0 (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Correlation.cpp:99.)
  res = torch.cov(t, correction=correction, fweights=fweights, aweights=aweights)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:1750: RuntimeWarning: Degrees of freedom <= 0 for slice
  ref = np.cov(t, ddof=correction, fweights=fweights, aweights=aweights)
F....s.........s....................................sssssssssssssssssssssss.....................................ssssss..C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:4090: UserWarning: An output with one or more elements was resized since it had shape [6], which does not match the required output shape [3].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.gather(src, 0, ind, out=src)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:4092: UserWarning: An output with one or more elements was resized since it had shape [1], which does not 
match the required output shape [2].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can 
explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.gather(ind.clone(), 0, ind[1:], out=ind[:1])
..........sssssssss..C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1084: RuntimeWarning: divide by zero encountered in true_divide
  a = -(dx2)/(dx1 * (dx1 + dx2))
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1085: RuntimeWarning: divide by zero encountered in true_divide
  b = (dx2 - dx1) / (dx1 * dx2)
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1086: RuntimeWarning: divide by zero encountered in true_divide
  c = dx1 / (dx2 * (dx1 + dx2))
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1092: RuntimeWarning: invalid value encountered in add
  out[tuple(slice1)] = a * f[tuple(slice2)] + b * f[tuple(slice3)] + c * f[tuple(slice4)]
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1140: RuntimeWarning: divide by zero encountered in double_scalars
  a = (dx2) / (dx1 * (dx1 + dx2))
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1141: RuntimeWarning: divide by zero encountered in double_scalars
  b = - (dx2 + dx1) / (dx1 * dx2)
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1144: RuntimeWarning: invalid value encountered in double_scalars
  out[tuple(slice1)] = a * f[tuple(slice2)] + b * f[tuple(slice3)] + c * f[tuple(slice4)]
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1123: RuntimeWarning: divide by zero encountered in double_scalars
  a = -(2. * dx1 + dx2)/(dx1 * (dx1 + dx2))
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1124: RuntimeWarning: divide by zero encountered in double_scalars
  b = (dx1 + dx2) / (dx1 * dx2)
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1127: RuntimeWarning: invalid value encountered in add
  out[tuple(slice1)] = a * f[tuple(slice2)] + b * f[tuple(slice3)] + c * f[tuple(slice4)]
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1144: RuntimeWarning: invalid value encountered in add
  out[tuple(slice1)] = a * f[tuple(slice2)] + b * f[tuple(slice3)] + c * f[tuple(slice4)]
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1101: RuntimeWarning: divide by zero encountered in true_divide
  out[tuple(slice1)] = (f[tuple(slice2)] - f[tuple(slice3)]) / dx_0
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1101: RuntimeWarning: invalid value encountered in true_divide
  out[tuple(slice1)] = (f[tuple(slice2)] - f[tuple(slice3)]) / dx_0
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1092: RuntimeWarning: invalid value encountered in multiply
  out[tuple(slice1)] = a * f[tuple(slice2)] + b * f[tuple(slice3)] + c * f[tuple(slice4)]
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1125: RuntimeWarning: divide by zero encountered in double_scalars
  c = - dx1 / (dx2 * (dx1 + dx2))
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1127: RuntimeWarning: invalid value encountered in multiply
  out[tuple(slice1)] = a * f[tuple(slice2)] + b * f[tuple(slice3)] + c * f[tuple(slice4)]
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1144: RuntimeWarning: invalid value encountered in multiply
  out[tuple(slice1)] = a * f[tuple(slice2)] + b * f[tuple(slice3)] + c * f[tuple(slice4)]
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1142: RuntimeWarning: divide by zero encountered in double_scalars
  c = (2. * dx2 + dx1) / (dx2 * (dx1 + dx2))
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1108: RuntimeWarning: divide by zero encountered in true_divide
  out[tuple(slice1)] = (f[tuple(slice2)] - f[tuple(slice3)]) / dx_n
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1108: RuntimeWarning: invalid value encountered in true_divide
  out[tuple(slice1)] = (f[tuple(slice2)] - f[tuple(slice3)]) / dx_n
....C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1080: ComplexWarning: Casting complex values to real discards the imaginary part
  out[tuple(slice1)] = (f[tuple(slice4)] - f[tuple(slice2)]) / (2. * ax_dx)
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1101: ComplexWarning: Casting complex values to real discards the imaginary part
  out[tuple(slice1)] = (f[tuple(slice2)] - f[tuple(slice3)]) / dx_0
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1108: ComplexWarning: Casting complex values to real discards the imaginary part
  out[tuple(slice1)] = (f[tuple(slice2)] - f[tuple(slice3)]) / dx_n
C:\gaborkertesz\venv\lib\site-packages\torch\testing\_comparison.py:590: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable 
tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting 
it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\torch\csrc\utils\tensor_numpy.cpp:178.)
  return torch.as_tensor(tensor_like)
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1127: ComplexWarning: Casting complex values to real discards the imaginary part
  out[tuple(slice1)] = a * f[tuple(slice2)] + b * f[tuple(slice3)] + c * f[tuple(slice4)]
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1144: ComplexWarning: Casting complex values to real discards the imaginary part
  out[tuple(slice1)] = a * f[tuple(slice2)] + b * f[tuple(slice3)] + c * f[tuple(slice4)]
C:\gaborkertesz\venv\lib\site-packages\numpy\lib\function_base.py:1092: ComplexWarning: Casting complex values to real discards the imaginary part
  out[tuple(slice1)] = a * f[tuple(slice2)] + b * f[tuple(slice3)] + c * f[tuple(slice4)]
............................................................ss.....ssss.......................................s.C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:3442: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\TensorAdvancedIndexing.cpp:1480.)
  torch.masked_select(src, mask, out=dst3)

.C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:3442: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now 
deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\TensorAdvancedIndexing.cpp:1480.)
  torch.masked_select(src, mask, out=dst3)
.C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:3442: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now 
deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\TensorAdvancedIndexing.cpp:1480.)
  torch.masked_select(src, mask, out=dst3)
.C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:3442: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now 
deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\TensorAdvancedIndexing.cpp:1480.)
  torch.masked_select(src, mask, out=dst3)
.C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:3442: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now 
deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\TensorAdvancedIndexing.cpp:1480.)
  torch.masked_select(src, mask, out=dst3)
.C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:3442: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now 
deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\TensorAdvancedIndexing.cpp:1480.)
  torch.masked_select(src, mask, out=dst3)
.C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:3442: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now 
deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\TensorAdvancedIndexing.cpp:1480.)
  torch.masked_select(src, mask, out=dst3)
.C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:3442: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now 
deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\TensorAdvancedIndexing.cpp:1480.)
  torch.masked_select(src, mask, out=dst3)
.C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:3442: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now 
deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\TensorAdvancedIndexing.cpp:1480.)
  torch.masked_select(src, mask, out=dst3)
.C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:3442: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now 
deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\TensorAdvancedIndexing.cpp:1480.)
  torch.masked_select(src, mask, out=dst3)
.C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:3442: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now 
deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\TensorAdvancedIndexing.cpp:1480.)
  torch.masked_select(src, mask, out=dst3)
.C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:3442: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now 
deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\TensorAdvancedIndexing.cpp:1480.)
  torch.masked_select(src, mask, out=dst3)
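Each masked_select warning in the block above has the same trigger: a mask with dtype `torch.uint8`. A hedged sketch of the recommended conversion (tensor values are illustrative, not from the test):

```python
import torch

src = torch.tensor([1.0, 2.0, 3.0, 4.0])
mask_u8 = torch.tensor([1, 0, 1, 0], dtype=torch.uint8)  # deprecated mask dtype

# Convert the mask to torch.bool, as the warning asks, before selecting.
mask = mask_u8.bool()
selected = torch.masked_select(src, mask)  # picks elements where mask is True
```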
.....s...........ssss..s..s..........................C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:1368: UserWarning: An output with one or more 
elements was resized since it had shape [10], which does not match the required output shape [1].This behavior is deprecated, and in a future PyTorch release outputs 
will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered 
internally at  C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.kthvalue(a, k, out=(values, indices))
.C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:1459: UserWarning: An output with one or more elements was resized since it had shape [10], which does 
not match the required output shape [].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You 
can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.median(a, 0, out=(result, indices))
....sss...ss.C:\gaborkertesz\venv\lib\site-packages\torch\cuda\amp\grad_scaler.py:115: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available.  
Disabling.
  warnings.warn("torch.cuda.amp.GradScaler is enabled, but CUDA is not available.  Disabling.")
.s.........................C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:473: UserWarning: torch.lstsq is deprecated in favor of torch.linalg.lstsq and 
will be removed in a future PyTorch release.
torch.linalg.lstsq has reversed arguments and does not return the QR decomposition in the returned tuple (although it returns other information about the problem).
To get the qr decomposition consider using torch.linalg.qr.
The returned solution in torch.lstsq stored the residuals of the solution in the last m - n columns of the returned value whenever m > n. In torch.linalg.lstsq, the 
residuals in the field 'residuals' of the returned named tuple.
The unpacking of the solution, as in
X, _ = torch.lstsq(B, A).solution[:A.size(1)]
should be replaced with
X = torch.linalg.lstsq(A, B).solution (Triggered internally at  C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\BatchLinearAlgebra.cpp:3648.)
  self.assertRaises(RuntimeError, lambda: torch.lstsq(zero_d, zero_d))
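The deprecation message above spells out the migration; as a runnable sketch (matrix sizes chosen for illustration):

```python
import torch

torch.manual_seed(0)
A = torch.randn(5, 3)  # m > n: an overdetermined system
B = torch.randn(5, 2)

# Old: X, _ = torch.lstsq(B, A). The new API reverses the argument order and
# returns the least-squares solution in the 'solution' field of a named tuple.
X = torch.linalg.lstsq(A, B).solution
```

`X` has shape `(3, 2)` and minimizes `||A @ X - B||` in the least-squares sense, so it satisfies the normal equations `A.T @ (A @ X - B) = 0` up to floating-point error.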
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:476: UserWarning: torch.eig is deprecated in favor of torch.linalg.eig and will be removed in a future 
PyTorch release.
torch.linalg.eig returns complex tensors of dtype cfloat or cdouble rather than real tensors mimicking complex tensors.
L, _ = torch.eig(A)
should be replaced with
L_complex = torch.linalg.eigvals(A)
and
L, V = torch.eig(A, eigenvectors=True)
should be replaced with
L_complex, V_complex = torch.linalg.eig(A) (Triggered internally at  C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\BatchLinearAlgebra.cpp:2909.)
  self.assertRaises(RuntimeError, lambda: torch.eig(zero_d, False))
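Likewise for torch.eig, the warning gives the replacements directly; a short sketch with an illustrative matrix:

```python
import torch

torch.manual_seed(0)
A = torch.randn(4, 4)

# Old: L, V = torch.eig(A, eigenvectors=True). The new API returns complex
# tensors (cfloat/cdouble) directly, rather than real tensors packing
# complex conjugate pairs.
L_complex = torch.linalg.eigvals(A)      # eigenvalues only
L_full, V_complex = torch.linalg.eig(A)  # eigenvalues and eigenvectors
```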
.......sssssss......................................s..C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:266: UserWarning: Casting complex values to real 
discards the imaginary part (Triggered internally at  C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Copy.cpp:239.)
  error_storage = a.to(error_dtype).storage()
............s.........................s..........................s....ssss.s.......................ssss.....sssssssss.ss.ssssssssssssssssssssssssssssssssss..sssss.ssssss.sss.ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss....s.ssssssssssssC:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:4090: UserWarning: An output with one or more elements was resized since it had shape [6], which does not match the required output shape [3].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.gather(src, 0, ind, out=src)
C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py:4092: UserWarning: An output with one or more elements was resized since it had shape [1], which does not 
match the required output shape [2].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can 
explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  
C:\gaborkertesz\repos\pytorch_win\pytorch\aten\src\ATen\native\Resize.cpp:24.)
  torch.gather(ind.clone(), 0, ind[1:], out=ind[:1])
.sssssssss.........sssssssss.s.ssssssssssssssssssssssss.ssssssssssss.sssssssssssssss.ss....ssssss.........................ss.s.sssssssssssssssssssssssss.ss.s..ssssssssssssssss.ss..ss.sss.s..sss.sss....ss...sss.ssss.C:\gaborkertesz\venv\lib\site-packages\torch\cuda\amp\grad_scaler.py:115: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available.  Disabling.
  warnings.warn("torch.cuda.amp.GradScaler is enabled, but CUDA is not available.  Disabling.")
.sssssssssssss.ssssssssssssssssss.ssssssssssssssssssss.sssssssssssss..sssssssssss...............sssssssssssssssssssssssssssss.ssssssssssssssssssss.ss..s..s..ss
======================================================================
FAIL: test_corrcoef_cpu_complex64 (__main__.TestTorchDeviceTypeCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\gaborkertesz\venv\lib\site-packages\torch\testing\_internal\common_device_type.py", line 376, in instantiated_test
    result = test(self, **param_kwargs)
  File "C:\gaborkertesz\venv\lib\site-packages\torch\testing\_internal\common_device_type.py", line 943, in only_fn
    return fn(self, *args, **kwargs)
  File "C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py", line 1741, in test_corrcoef
    self.assertEqual(res, ref, exact_dtype=False)
  File "C:\gaborkertesz\venv\lib\site-packages\torch\testing\_internal\common_utils.py", line 2106, in assertEqual
    assert_equal(
  File "C:\gaborkertesz\venv\lib\site-packages\torch\testing\_comparison.py", line 1081, in assert_equal
    raise error_metas[0].to_error()
AssertionError: Tensor-likes are not close!

Mismatched elements: 4 / 4 (100.0%)
Greatest absolute difference: nan at index (0, 1) (up to 1e-05 allowed)
Greatest relative difference: nan at index (0, 1) (up to 1.3e-06 allowed)


======================================================================
FAIL: test_cov_cpu_complex64 (__main__.TestTorchDeviceTypeCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\gaborkertesz\venv\lib\site-packages\torch\testing\_internal\common_device_type.py", line 376, in instantiated_test
    result = test(self, **param_kwargs)
  File "C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py", line 1754, in test_cov
    check(x)
  File "C:\gaborkertesz\repos\pytorch_win\pytorch\test\test_torch.py", line 1751, in check
    self.assertEqual(res, ref, atol=1e-05, rtol=1e-05, exact_dtype=False)
  File "C:\gaborkertesz\venv\lib\site-packages\torch\testing\_internal\common_utils.py", line 2106, in assertEqual
    assert_equal(
  File "C:\gaborkertesz\venv\lib\site-packages\torch\testing\_comparison.py", line 1081, in assert_equal
    raise error_metas[0].to_error()
AssertionError: Tensor-likes are not close!

Mismatched elements: 4 / 4 (100.0%)
Greatest absolute difference: 83.38657060851698 at index (1, 1) (up to 1e-05 allowed)
Greatest relative difference: 1.6077072050069752 at index (0, 0) (up to 1e-05 allowed)


----------------------------------------------------------------------
Ran 1344 tests in 21.851s

FAILED (failures=2, skipped=558)
[TORCH_VITAL] CUDA.used		 False
[TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
[TORCH_VITAL] Dataloader.enabled		 True
