How to invoke inference_model(model_cfg, deploy_cfg, backend_models, img=frame, device=device)? #200

@Thevakumar-Luheerathan

Description

I converted the ssd_mobilenet model with the following command, which produced several files in the work directory:

python "$MMDEPLOY_DIR"/tools/deploy.py "$MMDEPLOY_DIR"/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py "$MMPose_Det"/configs/ssd/ssdlite_mobilenetv2_scratch_600e_coco.py "$CHECKPOINT_DIR"/ssdlite_mobilenetv2_scratch_600e_coco_20210629_110627-974d9307.pth demo.jpg --work-dir ./tensorrt --device cuda:0 --dump-info

Then I tried to run inference with `inference_model` in a Python script. The code is as follows:

import mmcv
from mmdeploy.apis import inference_model

video_path='resources/test.mp4'

video = mmcv.VideoReader(video_path)
assert video.opened, f'Failed to load video file {video_path}'


det_checkpoint='checkpoints/ssdlite_mobilenetv2_scratch_600e_coco_20210629_110627-974d9307.pth'
model_cfg='configs/ssd/ssdlite_mobilenetv2_scratch_600e_coco.py'
deploy_cfg='configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py'
backend_models='checkpoints/tensorrt/end2end.engine'
device='cuda:0'

for frame in video:
    result = inference_model(model_cfg, deploy_cfg, backend_models, img=frame, device=device)
    print(result)

I got the following error:

2022-03-02 04:18:04,221 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /media/luhee/MY FILES/FYP/MMDeploy/build/lib/libmmdeploy_tensorrt_ops.so
2022-03-02 04:18:04,221 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /media/luhee/MY FILES/FYP/MMDeploy/build/lib/libmmdeploy_tensorrt_ops.so
Traceback (most recent call last):
  File "deploy_test.py", line 17, in <module>
    result = inference_model(model_cfg, deploy_cfg, backend_models, img=frame, device=device)
  File "/media/luhee/MY FILES/FYP/MMDeploy/mmdeploy/apis/inference.py", line 33, in inference_model
    model = task_processor.init_backend_model(backend_files)
  File "/media/luhee/MY FILES/FYP/MMDeploy/mmdeploy/codebase/mmdet/deploy/object_detection.py", line 74, in init_backend_model
    model_files, self.model_cfg, self.deploy_cfg, device=self.device)
  File "/media/luhee/MY FILES/FYP/MMDeploy/mmdeploy/codebase/mmdet/deploy/object_detection_model.py", line 737, in build_object_detection_model
    **kwargs)
  File "/home/luhee/anaconda3/envs/fyp3/lib/python3.7/site-packages/mmcv/utils/registry.py", line 212, in build
    return self.build_func(*args, **kwargs, registry=self)
  File "/media/luhee/MY FILES/FYP/MMDeploy/mmdeploy/codebase/mmdet/deploy/object_detection_model.py", line 34, in __build_backend_model
    **kwargs)
  File "/media/luhee/MY FILES/FYP/MMDeploy/mmdeploy/codebase/mmdet/deploy/object_detection_model.py", line 64, in __init__
    backend=backend, backend_files=backend_files, device=device)
  File "/media/luhee/MY FILES/FYP/MMDeploy/mmdeploy/codebase/mmdet/deploy/object_detection_model.py", line 82, in _init_wrapper
    deploy_cfg=self.deploy_cfg)
  File "/media/luhee/MY FILES/FYP/MMDeploy/mmdeploy/codebase/base/backend_model.py", line 61, in _build_wrapper
    engine=backend_files[0], output_names=output_names)
  File "/media/luhee/MY FILES/FYP/MMDeploy/mmdeploy/backend/tensorrt/wrapper.py", line 44, in __init__
    self.engine = load_trt_engine(engine)
  File "/media/luhee/MY FILES/FYP/MMDeploy/mmdeploy/backend/tensorrt/utils.py", line 142, in load_trt_engine
    with open(path, mode='rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'c'

Please clarify where I went wrong.
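Not part of the original question, but the traceback hints at the cause: the failing frame is `engine=backend_files[0]`, and indexing a plain string returns its first character, `'c'`, which `load_trt_engine` then tries to open as a file path. This suggests `inference_model` expects `backend_models` to be a list of file paths rather than a single string. A minimal sketch of the pitfall, with a hypothetical helper `first_backend_file` standing in for mmdeploy's `backend_files[0]` access:

```python
def first_backend_file(backend_files):
    # Mirrors mmdeploy's `backend_files[0]` access in _build_wrapper:
    # on a string this yields a single character, on a list the full path.
    return backend_files[0]

# Passing a bare string (as in the snippet above) gives the path 'c',
# which explains FileNotFoundError: [Errno 2] ... 'c'.
wrong = first_backend_file('checkpoints/tensorrt/end2end.engine')
print(wrong)  # c

# Wrapping the path in a list yields the intended engine file.
right = first_backend_file(['checkpoints/tensorrt/end2end.engine'])
print(right)  # checkpoints/tensorrt/end2end.engine
```

So, assuming the list-of-paths interpretation is right, changing the script to `backend_models = ['checkpoints/tensorrt/end2end.engine']` should get past this `FileNotFoundError`.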
