
dnn: inputs.size() in function 'getMemoryShapes' #16660

@berak

Description

  • OpenCV => 4.1.2 / 4.2.0
  • Operating System / Platform => google colab
  • Compiler => python3

related: #14290 #14713 #16323

trying to export a pytorch unet model via onnx.
when importing it into cv2.dnn, I get:

```
error: OpenCV(4.1.2) /io/opencv/modules/dnn/src/dnn.cpp:3873: error: (-215:Assertion failed) inputs.size() in function 'getMemoryShapes'
```

the problem seems to come from this line:

https://github.com/karoly-hars/DE_resnet_unet_hyb/blob/8883e2f50322fc6680eca2454f0cb72a0cb5d154/network.py#L61

code to reproduce:

```python
import torch
import torch.nn as nn
import cv2

def get_incoming_shape(incoming):
    size = incoming.size()
    # returns the incoming data shape as a list
    return [size[0], size[1], size[2], size[3]]

def interleave(tensors, axis):
    old_shape = get_incoming_shape(tensors[0])
    # change the first element (batch_size) to 1
    new_shape = [1] + old_shape[1:]
    # double the interleaved dimension
    new_shape[axis] *= len(tensors)  #### PROBLEM HERE
    
    # pack the tensors on top of each other
    stacked = torch.stack(tensors, axis+1)
    # reshape and return
    reshaped = stacked.reshape(new_shape)
    return reshaped

class Problem(nn.Module):
    def __init__(self):
        super(Problem, self).__init__()

    def forward(self, x):
        return interleave([x, x], 2)

def convert_to_onnx(net, output_name):
    input = torch.randn(1, 3, 256, 320)
    input_names = ['data']
    output_names = ['output']
    net.eval()
    torch.onnx.export(net, input, output_name, verbose=True, input_names=input_names, output_names=output_names)

model = Problem()
convert_to_onnx(model, "problem.onnx")
net = cv2.dnn.readNet("problem.onnx")

```

onnx output:

```
graph(%data : Float(1, 3, 256, 320)):
%1 : Tensor = onnx::Shape(%data)
%2 : Tensor = onnx::Constant[value={1}]()
%3 : Long() = onnx::Gather[axis=0](%1, %2) # :6:0
%4 : Tensor = onnx::Shape(%data)
%5 : Tensor = onnx::Constant[value={2}]()
%6 : Long() = onnx::Gather[axis=0](%4, %5) # :6:0
%7 : Tensor = onnx::Shape(%data)
%8 : Tensor = onnx::Constant[value={3}]()
%9 : Long() = onnx::Gather[axis=0](%7, %8) # :6:0
%10 : Long() = onnx::Constant[value={2}]()
%11 : Long() = onnx::Mul(%6, %10)
%12 : Tensor = onnx::Unsqueeze[axes=[3]](%data)
%13 : Tensor = onnx::Unsqueeze[axes=[3]](%data)
%14 : Float(1, 3, 256, 2, 320) = onnx::Concat[axis=3](%12, %13) # :18:0
%15 : Long() = onnx::Constant[value={1}]()
%16 : Tensor = onnx::Unsqueeze[axes=[0]](%15)
%17 : Tensor = onnx::Unsqueeze[axes=[0]](%3)
%18 : Tensor = onnx::Unsqueeze[axes=[0]](%11)
%19 : Tensor = onnx::Unsqueeze[axes=[0]](%9)
%20 : Tensor = onnx::Concat[axis=0](%16, %17, %18, %19)
%output : Float(1, 3, 512, 320) = onnx::Reshape(%14, %20) # :20:0
return (%output)

```

`%11 : Long() = onnx::Mul(%6, %10)`

is the node with the missing inputs here. are "plain maths ops" in a pytorch graph a problem?

you can check the exported model here: problem.onnx

p.s. I was able to solve it by letting pytorch infer the reshape param, like:

```python
new_shape = [1, old_shape[1], -1, old_shape[3]]
y = x.reshape(new_shape)
```

thus avoiding the multiplication.
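For reference, the shape logic of that workaround can be checked without torch; this is a minimal numpy stand-in (the function name `interleave_fixed` and the numpy usage are my own illustration, not part of the model code — numpy's `stack`/`reshape` semantics match torch's here):

```python
import numpy as np

def interleave_fixed(tensors, axis):
    # let reshape infer the interleaved axis with -1, instead of
    # multiplying a traced size (which produced the onnx::Mul node
    # that cv2.dnn could not resolve)
    old_shape = list(tensors[0].shape)
    new_shape = old_shape[:axis] + [-1] + old_shape[axis + 1:]
    # pack the tensors on a new axis right after `axis`, then collapse
    stacked = np.stack(tensors, axis + 1)
    return stacked.reshape(new_shape)

x = np.zeros((1, 3, 256, 320), dtype=np.float32)
y = interleave_fixed([x, x], 2)
print(y.shape)  # (1, 3, 512, 320)
```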
