Description
Thanks for your bug report. We appreciate it a lot.
Checklist
- I have searched related issues but cannot get the expected help.
- I have read the FAQ documentation but cannot get the expected help.
- The bug has not been fixed in the latest version.
Describe the bug
A clear and concise description of what the bug is.
Reproduction
- What command or script did you run?
A placeholder for the command.
- Did you make any modifications to the code or config? Do you understand what you modified?
Environment
- Please run `python tools/check_env.py` to collect necessary environment information and paste it here.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
2022-07-06 10:00:02,645 - mmdeploy - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
2022-07-06 10:00:23,286 - mmdeploy - INFO - Start pipeline mmdeploy.backend.tensorrt.onnx2tensorrt.onnx2tensorrt in subprocess
2022-07-06 10:00:24,096 - mmdeploy - INFO - Successfully loaded tensorrt plugins from d:\workspace\aiwork\ai-desktop\mmdeploy\mmdeploy\lib\mmdeploy_tensorrt_ops.dll
[07/06/2022-10:00:29] [TRT] [I] [MemUsageChange] Init CUDA: CPU +525, GPU +0, now: CPU 4637, GPU 1273 (MiB)
[07/06/2022-10:00:32] [TRT] [I] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 4683 MiB, GPU 1273 MiB
[07/06/2022-10:00:32] [TRT] [I] [MemUsageSnapshot] End constructing builder kernel library: CPU 4866 MiB, GPU 1317 MiB
[07/06/2022-10:00:33] [TRT] [W] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/06/2022-10:00:33] [TRT] [W] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/06/2022-10:00:34] [TRT] [E] If_371_OutputLayer: IIfConditionalOutputLayer inputs must have the same shape.
Process Process-3:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\mmDetecttionlab\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\ProgramData\Anaconda3\envs\mmDetecttionlab\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "d:\workspace\aiwork\ai-desktop\mmdeploy\mmdeploy\apis\core\pipeline_manager.py", line 105, in call
ret = func(*args, **kwargs)
File "d:\workspace\aiwork\ai-desktop\mmdeploy\mmdeploy\backend\tensorrt\onnx2tensorrt.py", line 79, in onnx2tensorrt
from_onnx(
File "d:\workspace\aiwork\ai-desktop\mmdeploy\mmdeploy\backend\tensorrt\utils.py", line 113, in from_onnx
raise RuntimeError(f'Failed to parse onnx, {error_msgs}')
RuntimeError: Failed to parse onnx, In node 371 (parseGraph): INVALID_NODE: Invalid Node - If_371
If_371_OutputLayer: IIfConditionalOutputLayer inputs must have the same shape.
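The parser error above comes from a TensorRT constraint: both branches of an ONNX `If` node must produce outputs of identical shape, which data-dependent control flow in the exported model (for example, a "no detections" branch versus an "N detections" branch) can violate. A rough pure-Python sketch of the check the parser performs — not mmdeploy or TensorRT code, all names here are hypothetical:

```python
def shape_of(x):
    """Return the nested-list shape of x, e.g. [[1, 2], [3, 4]] -> (2, 2)."""
    shape = ()
    while isinstance(x, list) and x:
        shape += (len(x),)
        x = x[0]
    return shape


def if_conditional_output(cond, then_out, else_out):
    """Mimic TensorRT's IIfConditionalOutputLayer: both branch outputs
    must share one static shape, or the conditional cannot be built."""
    if shape_of(then_out) != shape_of(else_out):
        raise RuntimeError(
            'IIfConditionalOutputLayer inputs must have the same shape.')
    return then_out if cond else else_out
```

Under this reading, fixing the export so both branches of node `If_371` yield the same output shape (or eliminating the conditional during ONNX export) would be the direction to investigate.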
Traceback (most recent call last):
File "./tools/deploy.py", line 364, in <module>
main()
File "./tools/deploy.py", line 208, in main
onnx2tensorrt(
File "d:\workspace\aiwork\ai-desktop\mmdeploy\mmdeploy\apis\core\pipeline_manager.py", line 354, in wrap
return self.call_function(func_name, *args, **kwargs)
File "d:\workspace\aiwork\ai-desktop\mmdeploy\mmdeploy\apis\core\pipeline_manager.py", line 322, in call_function
return self.get_result_sync(call_id)
File "d:\workspace\aiwork\ai-desktop\mmdeploy\mmdeploy\apis\core\pipeline_manager.py", line 303, in get_result_sync
ret = self.get_caller(func_name).pop_mp_output(call_id)
File "d:\workspace\aiwork\ai-desktop\mmdeploy\mmdeploy\apis\core\pipeline_manager.py", line 79, in pop_mp_output
assert call_id in self._mp_dict,
AssertionError: mmdeploy.backend.tensorrt.onnx2tensorrt.onnx2tensorrt with Call id: 1 failed.
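For context on the two `[TRT] [W]` warnings earlier in the log: TensorRT does not natively support INT64, so the ONNX parser casts INT64 weights down to INT32 and clamps any value outside the INT32 range. A minimal sketch of that clamping behavior (illustrative only, not the actual onnx2trt implementation):

```python
# INT32 value range that TensorRT casts INT64 weights into.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1


def cast_int64_to_int32(values):
    """Clamp each INT64 weight into the INT32 range, as the
    'weights outside the range of INT32 was clamped' warning describes."""
    return [max(INT32_MIN, min(INT32_MAX, v)) for v in values]
```

These warnings are usually benign for detection models; the actual failure here is the `If_371` shape mismatch, not the INT64 cast.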