When exporting to TensorRT, number of detections is capped at 200 #53
Description
Hello,
Thanks for this nice toolkit - so far it is working as expected, apart from a few minor issues.
After deploying my mmdetection model using the TensorRT backend, I noticed that the number of detections (bbox_count) is capped at 200, even though max_per_img in my model config has been set to 1000.
It looks like the number of detections is capped by mmdeploy: the base_static.py config is used as the base for the TensorRT deploy_cfg, and it contains
```python
keep_top_k=100,
```
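For reference, this is roughly how I work around it at the moment: a custom deploy config that inherits a stock mmdeploy TensorRT config and overrides only the post-processing caps. The base filename and the exact `post_processing` keys are assumptions from my installed version and may differ between mmdeploy releases:

```python
# custom_detection_tensorrt.py -- hypothetical file name
# Inherit a stock TensorRT deploy config (path assumed; adjust to your setup)
# and raise the detection caps to match max_per_img=1000 in the model config.
_base_ = ['./detection_tensorrt_static-800x1344.py']

codebase_config = dict(
    post_processing=dict(
        max_output_boxes_per_class=1000,  # per-class NMS output cap
        keep_top_k=1000,                  # total boxes kept per image
        pre_top_k=5000,                   # candidates fed into NMS
    ))
```

With mmengine/mmcv-style config inheritance, the nested dict is merged into the base, so the remaining `post_processing` fields keep their defaults.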
Increasing max_output_boxes_per_class solves the problem. Naturally, though, other parameters in the model cfg also differ from those set in the deploy_cfg. Shouldn't max_output_boxes_per_class and the other relevant variables be set from the model config automatically?
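To illustrate what I mean by "set automatically", here is a minimal sketch of the kind of sync step I have in mind. Plain dicts stand in for the mmengine/mmcv Config objects, and `sync_detection_caps` is a hypothetical helper, not an existing mmdeploy API; the `test_cfg.rcnn.max_per_img` path assumes a two-stage detector config:

```python
def sync_detection_caps(model_cfg: dict, deploy_cfg: dict) -> dict:
    """Copy max_per_img from the model's test_cfg into the deploy config's
    post_processing block, so the exported engine can emit the same number
    of detections as the original model."""
    max_per_img = model_cfg['test_cfg']['rcnn']['max_per_img']
    post = deploy_cfg['codebase_config']['post_processing']
    post['max_output_boxes_per_class'] = max_per_img
    post['keep_top_k'] = max_per_img
    return deploy_cfg

# Toy stand-ins for the real config objects:
model_cfg = {'test_cfg': {'rcnn': {'max_per_img': 1000}}}
deploy_cfg = {'codebase_config': {'post_processing': {
    'max_output_boxes_per_class': 200, 'keep_top_k': 100}}}

deploy_cfg = sync_detection_caps(model_cfg, deploy_cfg)
```

After the call, both caps match the model's max_per_img instead of the hard-coded defaults.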
What is the correct approach? According to the documentation, the generic deploy configs shipped with mmdeploy can be used without modification (e.g. configs/mmdet/instance-seg/instance-seg_tensorrt-fp16_dynamic-320x320-1344x1344.py, which I have been using up to this point). Should I write my own deploy_cfg instead?