
Keras model included, Dense Layer MemoryShape error #15296

@Julia702

Description

  • OpenCV >= 4.1.0

I have a problem using the dnn module.

I trained my model and generated a .pb file. I loaded the model with readNetFromTensorflow and everything worked well.

Then I tried to extend my model with the SE block from the squeeze-and-excitation network. As before, I generated my .pb file successfully and loaded it with readNetFromTensorflow, but when I run net.forward() the following error occurs:

```
error: OpenCV(4.1.0) /io/opencv/modules/dnn/src/layers/eltwise_layer.cpp:116: error: (-215:Assertion failed) inputs[0] == inputs[i] in function 'getMemoryShapes'
```

```python
net = cv2.dnn.readNetFromTensorflow(modelPath)
img = cv2.imread('image.bmp', 0)
img_blob = cv2.dnn.blobFromImage(img, size=(256, 256), swapRB=True, crop=False)
net.setInput(img_blob)
net.forward()
```

The problem seems to lie in the Keras Dense layer / OpenCV eltwise layer. My input to the Dense layer is a 2-D tensor with shape (None, 32).
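For context, the failing assertion comes from OpenCV's eltwise layer, which (in 4.1.0) requires all of its inputs to have exactly the same shape; unlike Keras's Multiply, it does not broadcast. A small pure-Python sketch (illustrative only, not OpenCV's actual code) mirrors that check:

```python
# Illustrative mirror of the shape check in OpenCV 4.1.0's eltwise layer
# (modules/dnn/src/layers/eltwise_layer.cpp, getMemoryShapes): every input
# blob must have EXACTLY the same shape -- no NumPy/Keras-style broadcasting.
def eltwise_get_memory_shapes(input_shapes):
    first = input_shapes[0]
    for other in input_shapes[1:]:
        # Mirrors the failing assertion: inputs[0] == inputs[i]
        assert other == first, "(-215:Assertion failed) inputs[0] == inputs[i]"
    return [first]

# Equal NCHW shapes pass:
eltwise_get_memory_shapes([[1, 32, 256, 256], [1, 32, 256, 256]])

# A not-yet-upsampled (1, 32, 1, 1) excitation multiplied against the full
# feature map would trip the same assertion:
try:
    eltwise_get_memory_shapes([[1, 32, 256, 256], [1, 32, 1, 1]])
except AssertionError as e:
    print(e)
```

So if the exported graph feeds the Multiply two tensors whose static shapes differ (e.g. the excitation before spatial upsampling), this assertion fires exactly as reported.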

Defined SE-Block:

```python
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import GlobalAveragePooling2D

def squeeze_excite_block(input_x, ratio=16):
    if K.image_data_format() == 'channels_first':
        channel_axis = 1
        dim_axis = -1
    else:
        channel_axis = -1
        dim_axis = 1
    filters = int(input_x.get_shape()[channel_axis])
    dim = int(input_x.get_shape()[dim_axis])

    squeeze = GlobalAveragePooling2D()(input_x)
    excitation = tf.keras.layers.Dense(units=filters // ratio, activation='relu')(squeeze)
    excitation = tf.keras.layers.Dense(units=filters, activation='sigmoid')(excitation)

    excitation = tf.keras.layers.Reshape((1, 1, filters))(excitation)
    excitation = tf.keras.layers.UpSampling2D(size=(dim, dim))(excitation)

    if K.image_data_format() == 'channels_first':
        excitation = tf.keras.layers.Permute((3, 1, 2))(excitation)
    scale = tf.keras.layers.Multiply()([input_x, excitation])

    return scale
```
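As a sanity check on the block above, the intended shape flow can be traced in pure Python (assuming channels_last and a 256×256 feature map with 32 channels, matching the merge9 tensor in the model below; names are illustrative). The final Multiply is only a legal element-wise op for OpenCV if the upsampled excitation matches input_x exactly:

```python
# Hypothetical shape trace of the SE block, channels_last, input (H, W, C).
# Shapes are derived from the layer definitions, not read from the .pb file.
def se_block_shapes(h, w, c, ratio=16):
    shapes = {}
    shapes['input_x'] = (h, w, c)
    shapes['squeeze'] = (c,)            # GlobalAveragePooling2D
    shapes['dense_1'] = (c // ratio,)   # Dense(filters // ratio)
    shapes['dense_2'] = (c,)            # Dense(filters)
    shapes['reshape'] = (1, 1, c)       # Reshape((1, 1, filters))
    shapes['upsample'] = (h, w, c)      # UpSampling2D(size=(dim, dim)), dim == h
    return shapes

shapes = se_block_shapes(256, 256, 32)
# Multiply is a strict element-wise op in cv2.dnn, so both inputs must match:
assert shapes['upsample'] == shapes['input_x']
```

If these static shapes survive export to the .pb graph, the Multiply should satisfy the eltwise check; the error suggests OpenCV sees a different (smaller) shape on the excitation branch.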

Used model:

```python
def unet(pretrained_weights=None, input_size=(256, 256, 1)):

    reduction_ratio = 16

    inputs = Input(input_size, name='Input')

    conv1 = Conv2D(16, 2, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
    conv1 = Conv2D(16, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)

    conv2 = Conv2D(24, 2, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
    conv2 = Conv2D(24, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)

    conv3 = Conv2D(32, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
    conv3 = Conv2D(32, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)

    conv4 = Conv2D(48, 4, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
    conv4 = Conv2D(48, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
    drop4 = Dropout(0.5)(conv4)

    up7 = Conv2D(32, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(drop4))
    # merge7 = concatenate([conv3, up7], axis=3)
    conv7 = Conv2D(32, 4, activation='relu', padding='same', kernel_initializer='he_normal')(up7)
    conv7 = Conv2D(32, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)

    up8 = Conv2D(24, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv7))
    merge8 = concatenate([conv2, up8], axis=3)
    conv8 = Conv2D(24, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
    conv8 = Conv2D(24, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)

    up9 = Conv2D(16, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv8))
    merge9 = concatenate([conv1, up9], axis=3)
    print(K.dtype(merge9))
    merge9 = squeeze_excite_block(merge9, ratio=reduction_ratio)
    print(K.dtype(merge9))
    conv9 = Conv2D(12, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
    conv9 = Conv2D(12, 2, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv9 = Conv2D(6, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv10 = Conv2D(3, 1, activation='softmax', name='Output')(conv9)

    model = Model(inputs, conv10)
    model.compile(optimizer=Adam(lr=1e-4), loss=dice_coef_multilabel, metrics=['accuracy', my_iou_metric])

    return model
```

I also tried adding a Reshape or Flatten layer before the Dense layers, but the error still occurs.

Thanks for your help.

.pb file:
unet_SE.zip
