
LeakyReLU support in Tensorflow importer in OpenCV DNN module #18247

@rytisss

Description

  • OpenCV => 4.4.0
  • Operating System / Platform => Windows 64 Bit or Ubuntu 18.04
  • Compiler => Visual Studio 2019 or GCC
Detailed description

LeakyReLU is not supported by the TensorFlow model importer in the OpenCV DNN module. While exploring the source code of tf_importer.cpp, I found something that looks like a partial LeakyReLU activation import (starting around line 1575):

else if (type == "Mul" || type == "RealDiv")
{
    int constId = -1;
    for (int ii = 0; ii < layer.input_size(); ++ii)
    {
        Pin input = parsePin(layer.input(ii));
        if (value_id.find(input.name) != value_id.end())
        {
            constId = ii;
            break;
        }
    }
    CV_Assert((constId != -1) || (layer.input_size() == 2));
    if (constId != -1)
    {
        // Multiplication by constant.
        CV_Assert(layer.input_size() == 2);
        Mat scaleMat = getTensorContent(getConstBlob(layer, value_id));
        CV_Assert(scaleMat.type() == CV_32FC1);
        if (type == "RealDiv")
        {
            if (constId == 0)
                CV_Error(Error::StsNotImplemented, "Division of constant over variable");
            scaleMat = 1.0f / scaleMat;
        }
        int id;
        if (scaleMat.total() == 1)  // is a scalar.
        {
            // Try to match with a LeakyRelu:
            // node {
            //   name: "LeakyRelu/mul"
            //   op: "Mul"
            //   input: "LeakyRelu/alpha"
            //   input: "input"
            // }
            // node {
            //   name: "LeakyRelu/Maximum"
            //   op: "Maximum"
            //   input: "LeakyRelu/mul"
            //   input: "input"
            // }
            StrIntVector next_layers = getNextLayers(net, name, "Maximum");
            if (!next_layers.empty())
            {
                int maximumLayerIdx = next_layers[0].second;
                CV_Assert(net.node(maximumLayerIdx).input_size() == 2);
                // The input from the Mul layer can also be at index 1.
                int mulInputIdx = (net.node(maximumLayerIdx).input(0) == name) ? 0 : 1;
                ExcludeLayer(net, maximumLayerIdx, mulInputIdx, false);
                layers_to_ignore.insert(next_layers[0].first);
                layerParams.set("negative_slope", scaleMat.at<float>(0));
                id = dstNet.addLayer(name, "ReLU", layerParams);
            }
            else
            {
                // Just a multiplication.
                layerParams.set("scale", scaleMat.at<float>(0));
                id = dstNet.addLayer(name, "Power", layerParams);
            }
        }

However, this code is not located near the actual activation-function section (not that LeakyReLU necessarily has to live there :-) ):

else if (type == "Abs" || type == "Tanh" || type == "Sigmoid" ||
         type == "Relu" || type == "Elu" ||
         type == "Identity" || type == "Relu6")
{
    std::string dnnType = type;
    if (type == "Abs") dnnType = "AbsVal";
    else if (type == "Tanh") dnnType = "TanH";
    else if (type == "Relu") dnnType = "ReLU";
    else if (type == "Relu6") dnnType = "ReLU6";
    else if (type == "Elu") dnnType = "ELU";
    int id = dstNet.addLayer(name, dnnType, layerParams);
    layer_id[name] = id;
    connectToAllBlobs(layer_id, dstNet, parsePin(layer.input(0)), id, layer.input_size());
}

Would it be possible to move/copy that chunk of code into the activation-function if/else section, so that the TensorFlow importer in the OpenCV DNN module gains LeakyReLU support?

If it could reasonably live in the mentioned place (the activation-function if/else section), I would be happy to work on the implementation :)
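One possible shape for such a branch (a sketch only, untested; it assumes the graph carries TensorFlow's native LeakyRelu op, which stores its slope in an "alpha" attribute, and reuses the importer's existing hasLayerAttr/getLayerAttr helpers). Since OpenCV's ReLU layer already supports a negative slope (the fused path above sets "negative_slope"), LeakyRelu could map onto it directly:

```cpp
// Hypothetical extra branch for the activation if/else chain in tf_importer.cpp.
else if (type == "LeakyRelu")
{
    // TF's LeakyRelu op keeps the slope in its "alpha" attribute.
    if (hasLayerAttr(layer, "alpha"))
        layerParams.set("negative_slope", getLayerAttr(layer, "alpha").f());
    int id = dstNet.addLayer(name, "ReLU", layerParams);
    layer_id[name] = id;
    connectToAllBlobs(layer_id, dstNet, parsePin(layer.input(0)), id, layer.input_size());
}
```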
