Closed as not planned
Labels
converter (related to ONNX converters), stale (issues that have not been addressed in a while; categorized by a bot)
Description
Describe the issue
I've got a ~3GB model produced using tf2onnx.convert.from_function(..., large_model=True, opset=16).
Here's what the large_model option does:
large_model: When set, creates a zip file containing the ONNX protobuf model and large tensor values stored externally. This allows for converting models whose size exceeds the 2 GB.
The file that is produced can't be converted with onnxruntime.tools.convert_onnx_models_to_ort; it fails with a protobuf parsing error - I think the tool expects a 'normal' single-file onnx model rather than the zip archive.
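For context, a possible workaround is to unpack the archive first, so the model protobuf and its external tensor files sit side by side in one directory, which is the layout onnxruntime can load. A minimal sketch using only the standard library (the __MODEL_PROTO.onnx entry name is an assumption based on the tf2onnx large_model zip layout, so the code falls back to any .onnx entry it finds):

```python
import zipfile
from pathlib import Path


def unpack_large_model(zip_path: str, out_dir: str) -> Path:
    """Extract a tf2onnx large_model zip so the .onnx model and its
    external tensor files end up in the same directory, and return
    the path to the extracted .onnx file."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out)
    # "__MODEL_PROTO.onnx" is the entry name tf2onnx is believed to use;
    # fall back to any .onnx entry found in the archive.
    preferred = out / "__MODEL_PROTO.onnx"
    if preferred.exists():
        return preferred
    candidates = sorted(out.glob("*.onnx"))
    if not candidates:
        raise FileNotFoundError("no .onnx entry found in archive")
    return candidates[0]
```

The extracted .onnx file could then be passed to convert_onnx_models_to_ort, assuming the external data files are resolvable relative to it.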
To reproduce
- Download this: https://drive.google.com/file/d/1--6efTeiLoCCeTifDzABpHySmkJ4Bzkg/view?usp=sharing
- Run this:
python -m onnxruntime.tools.convert_onnx_models_to_ort scheduler_loop_body.onnx
And you'll get this error:
Converting models with optimization style 'Fixed' and level 'all'
Converting optimized ONNX model /content/scheduler_loop_body.onnx to ORT format model /content/scheduler_loop_body.ort
Error converting /content/scheduler_loop_body.onnx: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /content/scheduler_loop_body.onnx failed:Protobuf parsing failed.
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/tools/convert_onnx_models_to_ort.py", line 363, in <module>
convert_onnx_models_to_ort()
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/tools/convert_onnx_models_to_ort.py", line 302, in convert_onnx_models_to_ort
converted_models = _convert(
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/tools/convert_onnx_models_to_ort.py", line 159, in _convert
_ = ort.InferenceSession(
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 360, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 397, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /content/scheduler_loop_body.onnx failed:Protobuf parsing failed.
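The INVALID_PROTOBUF error is consistent with the tool being handed the zip archive itself rather than a serialized ModelProto. A quick way to check for that case up front (a sketch, not part of onnxruntime) is to test for the zip magic before attempting protobuf parsing:

```python
import zipfile


def looks_like_large_model_archive(path: str) -> bool:
    """Return True if the file is a zip archive (e.g. tf2onnx
    large_model output) rather than a bare serialized ONNX
    ModelProto, which would explain a protobuf parsing failure."""
    return zipfile.is_zipfile(path)
```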
Urgency
Blocks conversion of all models >2GB.
Platform
Linux
OS Version
Ubuntu 18.04
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.14.0
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response