Conversation
This patch should go into the 3.4 branch first. We will merge changes from 3.4 into master regularly (weekly/bi-weekly). The same applies to the opencv_extra PR. So, please:
Note: no need to re-open the PR; apply the changes "in place".
Hey @alalek, it's done, thanks for your clear instructions! :) I assume this procedure must be done every time I open a PR, in case the 3.4 branch has changed, yes?
Only if conflicts are observed and an automatic merge is not possible. The main problem here is selecting the "target branch for the patch" initially.
@dvd42, I was able to use the following code to pass your test case:

```cpp
else if (layer_type == "InstanceNormalization")
{
    // InstanceNormalization = MVN + BatchNorm
    {
        LayerParams mvnLP;
        mvnLP.name = layerParams.name + "/mvn";
        mvnLP.type = "MVN";
        mvnLP.set("eps", layerParams.get<float>("epsilon"));
        layerParams.erase("epsilon");
        int id = dstNet.addLayer(mvnLP.name, mvnLP.type, mvnLP);
        // Connect to input
        layerId = layer_id.find(node_proto.input(0));
        CV_Assert(layerId != layer_id.end());
        dstNet.connect(layerId->second.layerId, layerId->second.outputId, id, 0);
        // Add shape
        layer_id.insert(std::make_pair(mvnLP.name, LayerInfo(id, 0)));
        outShapes[mvnLP.name] = outShapes[node_proto.input(0)];
        // Replace BatchNorm's input with the MVN layer's output
        node_proto.set_input(0, mvnLP.name);
    }
    layerParams.type = "BatchNorm";
    Mat mean(3, 1, CV_32F, Scalar(0));
    Mat std(3, 1, CV_32F, Scalar(1));
    Mat weights = getBlob(node_proto, constBlobs, 1);
    Mat bias = getBlob(node_proto, constBlobs, 2);
    layerParams.blobs.resize(4);
    layerParams.blobs[0] = mean;
    layerParams.blobs[1] = std;
    layerParams.blobs[2] = weights;
    layerParams.blobs[3] = bias;
    layerParams.set("has_bias", true);
    layerParams.set("has_weight", true);
}
```

Please check it and try to find the difference with your implementation.
`layerParams.blobs.resize(2);`
`if (!node_proto.input(1).empty()) {`
If `node_proto.input_size() == 3`, can we have an empty input name?
In my experience, no. However, the same thing happens for the BatchNorm layer, and I was just trying to follow the same convention; it also felt like an extra sanity check wouldn't hurt. Let me know if I should remove it.
Let's assume that they aren't empty.
```cpp
layerId = layer_id.find(node_proto.input(0));
CV_Assert(layerId != layer_id.end());
dstNet.connect(layerId->second.layerId, layerId->second.outputId, id, 0);
shapeIt = outShapes.find(node_proto.input(0));
node_proto.set_input(0, mvnParams.name);
```
`layerParams.type = "BatchNorm";`
Please avoid extra changes here and below (one empty line added and one removed).
Force-pushed from b518adf to c98f1da.
Force-pushed from bd43273 to 7661f80.
Instancenorm onnx (opencv#14858)
* Onnx unsupported operation handling
* instance norm implementation
* Revert "Onnx unsupported operation handling"
* instance norm layer test
* onnx instancenorm layer
Merge with extra: opencv/opencv_extra#627
This pull request is an implementation of the InstanceNorm operation for the ONNX importer. I am opening this draft PR in the hope of getting some reviewers to look at the issues I've been having with it.
The first issue is that I use the `cv::meanStdDev` function from OpenCV inside the layer; this operation messes up the std computation, making it differ from the one computed by my test model (implemented in PyTorch) and causing it to fail the `normAssert` check in my test case.

The second issue is that this implementation seems to break the MVN layer test for some Caffe models, which is very strange given that I have not touched that layer's implementation.
The implementation seems to be correct except for this precision issue, so I am looking for some help with both of these problems.
PS: The only reason I am opening a PR rather than a separate issue is that this way the reviewers can easily see all the files I have changed.