Description
Thanks for your bug report. We appreciate it a lot.
Checklist
- I have searched related issues but cannot get the expected help.
- I have read the FAQ documentation but cannot get the expected help.
- The bug has not been fixed in the latest version.
Describe the bug
Running the test script to evaluate COCO bbox metrics raises: TypeError: show_result() got an unexpected keyword argument 'bbox_color'
Reproduction
(mm) ubuntu@y9000p:/work/COCO/mmdeploy$ python tools/deploy.py configs/mmdet/detection/detection_tensorrt_dynamic-416x416-864x864.py ../mmdetection/configs/yolox/yolox_s_8x8_300e_coco.py ../checkpoints/yolox_s_8x8_300e_coco_20211121_095711-4592a793.pth demo/demo.jpg --work-dir work_dir_yolox-s_fp32 --device cuda:0 --dump-info
(mm) ubuntu@y9000p:/work/COCO/mmdeploy$ python tools/test.py configs/mmdet/detection/detection_tensorrt_dynamic-416x416-864x864.py ../mmdetection/configs/yolox/yolox_s_8x8_300e_coco.py --model work_dir_yolox-s_fp32/end2end.engine --metrics bbox --device cuda:0 --show
2022-02-23 17:12:34,522 - mmdeploy - INFO - torch2onnx start.
load checkpoint from local path: ../checkpoints/yolox_s_8x8_300e_coco_20211121_095711-4592a793.pth
/home/ubuntu/miniconda3/envs/mm/lib/python3.8/site-packages/torch/onnx/utils.py:97: UserWarning: `strip_doc_string` is deprecated and ignored. Will be removed in next PyTorch release. It's combined with `verbose` argument now.
warnings.warn("`strip_doc_string' is deprecated and ignored. Will be removed in "
/home/ubuntu/work/COCO/mmdeploy/mmdeploy/core/optimizers/function_marker.py:158: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
ys_shape = tuple(int(s) for s in ys.shape)
/home/ubuntu/miniconda3/envs/mm/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
/home/ubuntu/work/COCO/mmdeploy/mmdeploy/codebase/mmdet/core/post_processing/bbox_nms.py:260: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
dets, labels = TRTBatchedNMSop.apply(boxes, scores, int(scores.shape[-1]),
/home/ubuntu/work/COCO/mmdeploy/mmdeploy/mmcv/ops/nms.py:177: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
out_boxes = min(num_boxes, after_topk)
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
[the warning above is repeated 6 times in total]
2022-02-23 17:12:44,897 - mmdeploy - INFO - torch2onnx success.
2022-02-23 17:12:45,012 - mmdeploy - INFO - onnx2tensorrt of work_dir_yolox-s_fp32/end2end.onnx start.
2022-02-23 17:12:45,993 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /home/ubuntu/work/COCO/mmdeploy/build/lib/libmmdeploy_tensorrt_ops.so
[02/23/2022-17:12:46] [TRT] [I] [MemUsageChange] Init CUDA: CPU +357, GPU +0, now: CPU 436, GPU 1910 (MiB)
[02/23/2022-17:12:46] [TRT] [I] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 456 MiB, GPU 1910 MiB
[02/23/2022-17:12:47] [TRT] [I] [MemUsageSnapshot] End constructing builder kernel library: CPU 831 MiB, GPU 2032 MiB
[02/23/2022-17:12:47] [TRT] [W] onnx2trt_utils.cpp:365: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/23/2022-17:12:47] [TRT] [W] onnx2trt_utils.cpp:391: One or more weights outside the range of INT32 was clamped
[the warning above is repeated 11 times in total]
[02/23/2022-17:12:47] [TRT] [I] No importer registered for op: TRTBatchedNMS. Attempting to import as plugin.
[02/23/2022-17:12:47] [TRT] [I] Searching for plugin: TRTBatchedNMS, plugin_version: 1, plugin_namespace:
[02/23/2022-17:12:47] [TRT] [I] Successfully created plugin: TRTBatchedNMS
[02/23/2022-17:12:50] [TRT] [W] TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 11.6.5
[02/23/2022-17:12:50] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +773, GPU +342, now: CPU 5713, GPU 4252 (MiB)
[02/23/2022-17:12:51] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +189, GPU +344, now: CPU 5902, GPU 4596 (MiB)
[02/23/2022-17:12:51] [TRT] [W] TensorRT was linked against cuDNN 8.3.2 but loaded cuDNN 8.2.4
[02/23/2022-17:12:51] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[02/23/2022-17:13:04] [TRT] [I] Some tactics do not have sufficient workspace memory to run. Increasing workspace size will enable more tactics, please check verbose output for requested sizes.
[02/23/2022-17:13:40] [TRT] [I] Detected 1 inputs and 2 output network tensors.
[02/23/2022-17:13:40] [TRT] [W] Max value of this profile is not valid
[02/23/2022-17:13:40] [TRT] [W] Min value of this profile is not valid
[02/23/2022-17:13:40] [TRT] [I] Total Host Persistent Memory: 170160
[02/23/2022-17:13:40] [TRT] [I] Total Device Persistent Memory: 2591232
[02/23/2022-17:13:40] [TRT] [I] Total Scratch Memory: 19463168
[02/23/2022-17:13:40] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 11 MiB, GPU 572 MiB
[02/23/2022-17:13:40] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 90.5823ms to assign 23 blocks to 212 nodes requiring 65445400 bytes.
[02/23/2022-17:13:40] [TRT] [I] Total Activation Memory: 65445400
[02/23/2022-17:13:40] [TRT] [W] TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 11.6.5
[02/23/2022-17:13:40] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 6830, GPU 5120 (MiB)
[02/23/2022-17:13:40] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 6830, GPU 5128 (MiB)
[02/23/2022-17:13:40] [TRT] [W] TensorRT was linked against cuDNN 8.3.2 but loaded cuDNN 8.2.4
[02/23/2022-17:13:40] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +6, GPU +43, now: CPU 6, GPU 43 (MiB)
2022-02-23 17:13:41,633 - mmdeploy - INFO - onnx2tensorrt of work_dir_yolox-s_fp32/end2end.onnx success.
2022-02-23 17:13:41,633 - mmdeploy - INFO - visualize tensorrt model start.
2022-02-23 17:13:46,027 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /home/ubuntu/work/COCO/mmdeploy/build/lib/libmmdeploy_tensorrt_ops.so
2022-02-23 17:13:46,027 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /home/ubuntu/work/COCO/mmdeploy/build/lib/libmmdeploy_tensorrt_ops.so
[02/23/2022-17:13:46] [TRT] [W] TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 11.6.5
[02/23/2022-17:13:47] [TRT] [W] TensorRT was linked against cuDNN 8.3.2 but loaded cuDNN 8.2.4
[02/23/2022-17:13:47] [TRT] [W] TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 11.6.5
[02/23/2022-17:13:47] [TRT] [W] TensorRT was linked against cuDNN 8.3.2 but loaded cuDNN 8.2.4
2022-02-23 17:13:51,509 - mmdeploy - INFO - visualize tensorrt model success.
2022-02-23 17:13:51,509 - mmdeploy - INFO - visualize pytorch model start.
load checkpoint from local path: ../checkpoints/yolox_s_8x8_300e_coco_20211121_095711-4592a793.pth
/home/ubuntu/miniconda3/envs/mm/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
/home/ubuntu/work/COCO/mmdetection/mmdet/models/dense_heads/yolox_head.py:284: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
flatten_bboxes[..., :4] /= flatten_bboxes.new_tensor(
2022-02-23 17:13:59,175 - mmdeploy - INFO - visualize pytorch model success.
2022-02-23 17:13:59,175 - mmdeploy - INFO - All process success.
(mm) ubuntu@y9000p:/work/COCO/mmdeploy$ python tools/test.py configs/mmdet/detection/detection_tensorrt_dynamic-416x416-864x864.py ../mmdetection/configs/yolox/yolox_s_8x8_300e_coco.py --model work_dir_yolox-s_fp32/end2end.engine --metrics bbox --device cuda:0 --show
loading annotations into memory...
Done (t=0.34s)
creating index...
index created!
2022-02-23 17:14:33,044 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /home/ubuntu/work/COCO/mmdeploy/build/lib/libmmdeploy_tensorrt_ops.so
2022-02-23 17:14:33,044 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /home/ubuntu/work/COCO/mmdeploy/build/lib/libmmdeploy_tensorrt_ops.so
[02/23/2022-17:14:33] [TRT] [W] TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 11.6.5
[02/23/2022-17:14:34] [TRT] [W] TensorRT was linked against cuDNN 8.3.2 but loaded cuDNN 8.2.4
[02/23/2022-17:14:34] [TRT] [W] TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 11.6.5
[02/23/2022-17:14:34] [TRT] [W] TensorRT was linked against cuDNN 8.3.2 but loaded cuDNN 8.2.4
[ ] 0/5000, elapsed: 0s, ETA:Traceback (most recent call last):
File "tools/test.py", line 137, in <module>
main()
File "tools/test.py", line 129, in main
outputs = task_processor.single_gpu_test(model, data_loader, args.show,
File "/home/ubuntu/work/COCO/mmdeploy/mmdeploy/codebase/base/task.py", line 137, in single_gpu_test
return self.codebase_class.single_gpu_test(model, data_loader, show,
File "/home/ubuntu/work/COCO/mmdeploy/mmdeploy/codebase/mmdet/deploy/mmdetection.py", line 142, in single_gpu_test
outputs = single_gpu_test(model, data_loader, show, out_dir, **kwargs)
File "/home/ubuntu/work/COCO/mmdetection/mmdet/apis/test.py", line 53, in single_gpu_test
model.module.show_result(
TypeError: show_result() got an unexpected keyword argument 'bbox_color'
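The traceback suggests a signature mismatch: mmdet's `single_gpu_test` forwards drawing kwargs such as `bbox_color` to `model.module.show_result`, but the deployed model's `show_result` does not accept them. As a minimal, hypothetical workaround (not the official mmdeploy fix; all names below are illustrative), one could filter the kwargs down to what the callee's signature actually supports before calling it:

```python
import inspect


def call_with_supported_kwargs(func, *args, **kwargs):
    """Call func, silently dropping keyword arguments it does not accept."""
    params = inspect.signature(func).parameters
    # If func already takes **kwargs, pass everything through unchanged.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return func(*args, **kwargs)
    supported = {k: v for k, v in kwargs.items() if k in params}
    return func(*args, **supported)


# A stand-in show_result without draw-style kwargs, mimicking the mismatch:
def show_result(img, result, score_thr=0.3):
    return (img, result, score_thr)


# Passing bbox_color directly would raise TypeError; here it is dropped.
out = call_with_supported_kwargs(show_result, 'demo.jpg', [], bbox_color='green')
# → ('demo.jpg', [], 0.3)
```

The real fix is more likely to land in mmdeploy's wrapped model (accepting and ignoring `**kwargs` in its `show_result`), but the filtering approach above avoids patching either library.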
Environment
mmcv, mmdeploy, and mmdetection are all at their latest versions.
Error traceback
See the full traceback in the Reproduction section above.
Bug fix
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!