CoreML: Broadcastable tensor index not supported. #1038
Closed
Description
I'm trying to convert the pre-trained version of the faster_rcnn_regnetx-3.2GF_fpn_1x_coco model to CoreML. The conversion raises an exception inside coremltools, which mmdeploy uses internally:
Traceback (most recent call last):
File "/opt/homebrew/Caskroom/miniconda/base/envs/mmlabs/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/opt/homebrew/Caskroom/miniconda/base/envs/mmlabs/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/Users/typically/Workspace/vbti-plant-morphology/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
ret = func(*args, **kwargs)
File "/Users/typically/Workspace/vbti-plant-morphology/mmdeploy/mmdeploy/apis/pytorch2torchscript.py", line 78, in torch2torchscript
trace(
File "/Users/typically/Workspace/vbti-plant-morphology/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
return self.call_function(func_name_, *args, **kwargs)
File "/Users/typically/Workspace/vbti-plant-morphology/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
return self.call_function_local(func_name, *args, **kwargs)
File "/Users/typically/Workspace/vbti-plant-morphology/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
return pipe_caller(*args, **kwargs)
File "/Users/typically/Workspace/vbti-plant-morphology/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
ret = func(*args, **kwargs)
File "/Users/typically/Workspace/vbti-plant-morphology/mmdeploy/mmdeploy/apis/torch_jit/trace.py", line 137, in trace
model = ct.convert(
File "/opt/homebrew/Caskroom/miniconda/base/envs/mmlabs/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 451, in convert
mlmodel = mil_convert(
File "/opt/homebrew/Caskroom/miniconda/base/envs/mmlabs/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 193, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/opt/homebrew/Caskroom/miniconda/base/envs/mmlabs/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 220, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "/opt/homebrew/Caskroom/miniconda/base/envs/mmlabs/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 283, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/opt/homebrew/Caskroom/miniconda/base/envs/mmlabs/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 115, in __call__
return load(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniconda/base/envs/mmlabs/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 53, in load
return _perform_torch_convert(converter, debug)
File "/opt/homebrew/Caskroom/miniconda/base/envs/mmlabs/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 100, in _perform_torch_convert
raise e
File "/opt/homebrew/Caskroom/miniconda/base/envs/mmlabs/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 92, in _perform_torch_convert
prog = converter.convert()
File "/opt/homebrew/Caskroom/miniconda/base/envs/mmlabs/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 269, in convert
convert_nodes(self.context, self.graph)
File "/opt/homebrew/Caskroom/miniconda/base/envs/mmlabs/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 92, in convert_nodes
add_op(context, node)
File "/opt/homebrew/Caskroom/miniconda/base/envs/mmlabs/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 3207, in index
raise NotImplementedError("Broadcasable tensor index not supported.")
NotImplementedError: Broadcasable tensor index not supported.

I'm not fully aware of why this exception is raised, but I believe it has something to do with multi-class non-maximum suppression. The exception is raised because the two inputs to the index operation do not match in shape; see screenshot:
for indice in valid_indices:
    if not is_compatible_symbolic_vector(indice.shape, valid_indices[0].shape):
        raise NotImplementedError("Broadcasable tensor index not supported.")

I'm not certain why the prior and topk operations are used together within the index operation. Perhaps someone could provide me with a lead.
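For context, a "broadcastable tensor index" is an advanced-indexing pattern where the index tensors have different but broadcast-compatible shapes. A minimal NumPy sketch of the pattern (NumPy used here only as an analogue of the PyTorch `aten::index` semantics that coremltools is translating):

```python
import numpy as np

x = np.arange(12).reshape(3, 4)

# Two index arrays with *different* shapes that broadcast together:
rows = np.array([[0], [2]])  # shape (2, 1)
cols = np.array([0, 3])      # shape (2,)

# NumPy/PyTorch broadcast the indices to a common shape (2, 2)
# before gathering; this is the case coremltools rejects.
out = x[rows, cols]
print(out)  # [[ 0  3]
            #  [ 8 11]]
```

If the traced graph produces index tensors like `rows` and `cols` above (e.g. from topk output combined with prior indices), the converter hits this unsupported path.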
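The guard quoted above can be paraphrased as follows; this is a hypothetical re-implementation for illustration (`check_indices` is my name, not a coremltools API), assuming `is_compatible_symbolic_vector` amounts to a shape-equality test:

```python
import numpy as np

def check_indices(valid_indices):
    # Hypothetical sketch: every index tensor must match the shape of the
    # first one; indices that would require broadcasting are rejected,
    # mirroring the coremltools guard quoted above.
    first_shape = valid_indices[0].shape
    for indice in valid_indices:
        if indice.shape != first_shape:
            raise NotImplementedError("Broadcasable tensor index not supported.")

# Identical shapes pass the check ...
check_indices([np.zeros((2, 1)), np.zeros((2, 1))])
# ... while broadcast-compatible but unequal shapes raise NotImplementedError.
```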