HolyWu
Are you using the 32-bit toolchain or the 64-bit one? I tried your sample and found that it crashes only with the 32-bit toolchain, not the 64-bit.
There is something wrong with your boost library installation, because those files/folders are indeed there. If in doubt, download from https://www.boost.org/users/download/ and check again.
I am afraid that I don't know how to fix that if the issue already existed in https://github.com/lltcggie/waifu2x-caffe since this filter is simply a wrapper over the upstream library.
Use a professional tool like GPU-Z rather than Task Manager to see the **_true_** GPU load.
The Depan filter in the MVTools plugin does not support the `range` parameter. I guess it's affected by that. You can try https://github.com/theChaosCoder/lostfunc/blob/0a6816844b8f08ecac0e7a5bb32aacae2f4a3b93/lostfunc.py#L58, which does support the `range` parameter, but I don't know where...
> The following packages have also been ported and updated in these *funcs (that I know of):
>
> It may be worth it to consider either updating them...
Actually `cudnn_adv_train64_8.dll` exists in that directory and isn't missing. The problem is that `tensorrt` and `torch` each load their own cuDNN DLLs, and the versions conflict. Until the PyTorch devs upgrade...
Relevant issue upstream: https://github.com/pytorch/pytorch/issues/116684
The same for [aten.leaky_relu](https://github.com/pytorch/TensorRT/blob/9a100b6414bee175040bcaa275ecb71df54836e4/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py#L447-L462).

```py
import torch
import torch.nn as nn
import torch_tensorrt


class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.m = nn.LeakyReLU()

    def forward(self, x):
        return self.m(x)


model = MyModule().eval().cuda().half()
inputs...
```
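For context, `nn.LeakyReLU` (default `negative_slope=0.01`) is just an elementwise function; a minimal pure-Python sketch of what the op computes, not the converter itself:

```python
def leaky_relu(x, negative_slope=0.01):
    """Elementwise LeakyReLU: identity for v >= 0, scaled by negative_slope for v < 0."""
    return [v if v >= 0 else negative_slope * v for v in x]

print(leaky_relu([-2.0, -0.5, 0.0, 1.5]))  # -> [-0.02, -0.005, 0.0, 1.5]
```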
[aten.upsample_bilinear2d](https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py#L2515-L2533)

```py
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch_tensorrt


class MyModule(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)


model = MyModule().eval().cuda().half()...
```
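For reference, `align_corners=True` maps output index `i` to source coordinate `i * (in_size - 1) / (out_size - 1)`, so the first and last samples of input and output coincide. A simplified 1-D sketch of that index mapping (the actual op does this along both spatial axes):

```python
def linear_upsample_1d(src, out_size):
    """Linear interpolation with align_corners=True semantics:
    output index i maps to source coordinate i * (len(src) - 1) / (out_size - 1)."""
    in_size = len(src)
    scale = (in_size - 1) / (out_size - 1)
    out = []
    for i in range(out_size):
        pos = i * scale
        lo = int(pos)                     # left neighbour index
        hi = min(lo + 1, in_size - 1)     # right neighbour, clamped at the edge
        frac = pos - lo                   # fractional position between the two
        out.append(src[lo] * (1 - frac) + src[hi] * frac)
    return out

print(linear_upsample_1d([0.0, 1.0, 2.0], 5))  # -> [0.0, 0.5, 1.0, 1.5, 2.0]
```

Note that the endpoints are reproduced exactly, which is the defining property of `align_corners=True`.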