Description
I am running TRTorch v0.0.2 and trying to compile, in C++, the resnet18 variant of the model presented here: https://github.com/NVIDIA-AI-IOT/trt_pose. When I try to compile the graph in C++ I get the following error (note that if I try to compile in Python, there is no error message but the kernel dies):
ERROR: [TRTorch Conversion Context] - %input.8 : Tensor = aten::_convolution(%218, %self.1.cmap_up.0.weight, %self.1.cmap_up.0.bias, %7, %5, %5, %11, %4, %8, %10, %10, %11) # /home/michael/anaconda3/envs/nvidia_dl/lib/python3.7/site-packages/torch/nn/modules/conv.py:790:0: kernel weights has count 2097152 but 4194304 was expected
ERROR: [TRTorch Conversion Context] - %input.8 : Tensor = aten::_convolution(%218, %self.1.cmap_up.0.weight, %self.1.cmap_up.0.bias, %7, %5, %5, %11, %4, %8, %10, %10, %11) # /home/michael/anaconda3/envs/nvidia_dl/lib/python3.7/site-packages/torch/nn/modules/conv.py:790:0: count of 2097152 weights in kernel, but kernel dimensions (4,4) with 512 input channels, 512 output channels and 1 groups were specified. Expected Weights count is 512 * 4*4 * 512 / 1 = 4194304
If I add the line trtorch::CheckMethodOperatorSupport(script_model, "forward"); just before compiling, I get the following error:
terminate called after throwing an instance of 'trtorch::Error'
what(): [enforce fail at core/conversion/conversion.cpp:240] Expected schema to be true but got false
Unable to get schema for Node %245 : (Tensor, Tensor) = prim::TupleConstruct(%242, %244) (conversion.VerifyCoverterSupportForBlock
So I think this is the expected behavior when a converter is not found, but I am not entirely sure. If a converter is indeed missing, it looks like the problem is handling prim::TupleConstruct, since this model returns a tuple; another model I have tested, which returns just a single tensor, works fine. Are there any plans to implement support for this, and is it as straightforward as adding another converter?
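In case it helps others hitting the same error, a possible workaround (a sketch I have been considering, not something TRTorch itself provides) is to wrap the model so that the traced graph ends in a single tensor rather than a prim::TupleConstruct, then split the output again after inference. The TupleToTensorWrapper and ToyTupleModel names below are hypothetical; the real trt_pose model would take the place of the toy module:

```python
import torch


class TupleToTensorWrapper(torch.nn.Module):
    """Hypothetical workaround: wrap a model whose forward() returns a
    tuple of same-sized tensors so the traced graph returns one tensor
    (avoiding prim::TupleConstruct at the graph output)."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        a, b = self.model(x)
        # Concatenate along the channel dimension; the caller splits the
        # result back into the two heads after inference.
        return torch.cat([a, b], dim=1)


class ToyTupleModel(torch.nn.Module):
    # Stand-in for the real two-headed model; returns a (cmap, paf)-style tuple.
    def forward(self, x):
        return x * 2, x + 1


wrapped = TupleToTensorWrapper(ToyTupleModel())
scripted = torch.jit.trace(wrapped, torch.zeros(1, 2, 4, 4))
out = scripted(torch.ones(1, 2, 4, 4))
print(out.shape)  # torch.Size([1, 4, 4, 4])
cmap, paf = torch.split(out, 2, dim=1)  # recover the two heads
```

This only works when the tuple elements can be concatenated (matching batch and spatial dims), which is the case for the cmap/paf outputs here, and it sidesteps rather than fixes the missing converter.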