
Export to ONNX of nop-squeeze errors out in ONNXRT #36796

@vadimkantorov

Description


Calling squeeze(1) on a tensor whose dimension 1 has size greater than 1 is a no-op, since that dimension is not squeezable. However, when the model is exported to ONNX, running it errors out in ONNXRT. The problem may be in the export of the no-op squeeze(1), or in ONNXRT itself.
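To illustrate the no-op semantics in eager mode (a minimal sketch, independent of the export path): squeeze(dim) only removes the dimension when its size is exactly 1, and otherwise returns the tensor unchanged.

```python
import torch

x = torch.rand(16, 8000)
y = x.squeeze(1)   # dim 1 has size 8000, not 1, so this is a no-op
print(y.shape)     # torch.Size([16, 8000])

z = torch.rand(16, 1, 8000).squeeze(1)  # here dim 1 has size 1 and is removed
print(z.shape)     # torch.Size([16, 8000])
```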

import torch
import onnxruntime

bug = True

class Preemphasis(torch.nn.Module):
    def forward(self, signal):
        if bug:
            signal = signal.squeeze(1)  # no-op: dim 1 has size 8000, not 1
        signal = torch.cat([signal[..., :1], signal[..., 1:] - 0.97 * signal[..., :-1]], dim = -1)
        return signal

frontend = Preemphasis()
input = torch.rand(16, 8000)
torch.onnx.export(frontend, (input,), 'model.onnx', opset_version = 10, export_params = True, do_constant_folding = True, input_names = ['signal'], output_names = ['output'], dynamic_axes = dict(signal = {0 : 'B', 1 : 'T'}, output = {0 : 'B', 1: 'T'}))
onnxrt_session = onnxruntime.InferenceSession('model.onnx')
(output_, ) = onnxrt_session.run(None, dict(signal = input.cpu().numpy()))

output = frontend(input)
assert torch.allclose(output.cpu(), torch.from_numpy(output_), rtol = 1e-02, atol = 1e-03)

produces

Traceback (most recent call last):
  File "bug.py", line 16, in <module>
    onnxrt_session = onnxruntime.InferenceSession('model.onnx')
  File "/miniconda/lib/python3.7/site-packages/onnxruntime/capi/session.py", line 25, in __init__
    self._load_model(providers)
  File "/miniconda/lib/python3.7/site-packages/onnxruntime/capi/session.py", line 43, in _load_model
    self._sess.load_model(providers)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (Slice_15) Op (Slice) [ShapeInferenceError] Input axes has invalid data

For Netron: model_bug.onnx.gz

With bug = False, it produces:

2020-04-17 11:20:09.740165610 [I:onnxruntime:Default, bfc_arena.cc:238 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:10 rounded_bytes:512000
2020-04-17 11:20:09.740276922 [I:onnxruntime:Default, bfc_arena.cc:122 Extend] Extended allocation by 2097152 bytes.
2020-04-17 11:20:09.740352834 [I:onnxruntime:Default, bfc_arena.cc:126 Extend] Total allocated bytes: 3145728
2020-04-17 11:20:09.740391982 [I:onnxruntime:Default, bfc_arena.cc:129 Extend] Allocated memory at 0x7eff98315040 to 0x7eff98515040
2020-04-17 11:20:09.741222106 [W:onnxruntime:, execution_frame.cc:343 AllocateMLValueTensorPreAllocateBuffer] Shape mismatch attempting to re-use buffer. {16,7999} != {16,1}. Validate usage of dim_value (values should be > 0) and dim_param (all values with the same string should equate to the same size) in shapes in the model.

(the last message is probably due to an ONNXRT bug)

For Netron: model_nobug.onnx.gz
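As a hypothetical workaround until the export is fixed (an assumption on my part, not a confirmed fix): guard the squeeze so it only runs when dimension 1 actually has size 1. Since tracing bakes the Python branch into the graph, a (16, 8000) example input means no Squeeze node is emitted at all.

```python
import torch

class Preemphasis(torch.nn.Module):
    def forward(self, signal):
        # workaround sketch: squeeze only when dim 1 really has size 1,
        # so the traced graph never contains a no-op Squeeze node
        if signal.dim() > 2 and signal.size(1) == 1:
            signal = signal.squeeze(1)
        return torch.cat([signal[..., :1], signal[..., 1:] - 0.97 * signal[..., :-1]], dim = -1)
```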

>>> import onnxruntime
>>> onnxruntime.__version__
'1.2.0'
>>> import torch
>>> torch.__version__
'1.6.0.dev20200417'

cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof


Labels: module: onnx (Related to torch.onnx), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
