System information (version)
- OpenCV => 4.3
- Operating System / Platform => Windows
- Compiler => MSVC 16 (VS 2019 Community)
Detailed description
I am trying to run DeepLab with a MobileNet backbone in the dnn module on CPU.
I got good results in Python with TensorFlow.
At first I got errors because some of the final layers are not supported, so I removed them (basically the final argmax). The network now outputs a 256x256x2 image; after computing the argmax by hand between the two channels I should be able to obtain a binary mask.
However, while I get a cv::Mat of the correct size, the float values I read from it make no sense,
e.g. 361197504.0 where the baseline implementation gives values like 2.7776315 (min: -5, max: 5).
What's more, the values barely change when I change the input image, which makes me think OpenCV returns junk memory without raising an exception.
The final result comes from a layer of type ResizeBilinear.
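For reference, the "argmax by hand" step is simple once the output is interpreted as a 2xHxW (channel-first) float buffer, which is the layout dnn returns. A minimal sketch, independent of OpenCV (the function name is mine, not from the issue):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Per-pixel argmax over a 2-channel CHW float buffer (channel, height, width).
// Returns a binary mask: 1 where channel 1 scores higher than channel 0.
std::vector<int> argmaxMask(const std::vector<float>& chw,
                            std::size_t h, std::size_t w) {
    assert(chw.size() == 2 * h * w);
    std::vector<int> mask(h * w);
    for (std::size_t i = 0; i < h * w; ++i)
        mask[i] = chw[h * w + i] > chw[i] ? 1 : 0; // channel 1 vs channel 0
    return mask;
}
```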
Steps to reproduce
I am attaching the network, intended for person semantic segmentation (input is 256x256x3, colors normalized to the range (-1, 1)).
dl256.zip
The code used to test it:

```cpp
auto input_image = cv::imread("a.png");
auto net = cv::dnn::readNetFromTensorflow("dl256.pb");
net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
auto output_names = GetOutputsNames(net); // helper returning the network's output layer names
cv::Mat blob;
cv::dnn::blobFromImage(input_image, blob, 2 / 255.0, cv::Size(256, 256), cv::Scalar(127.5, 127.5, 127.5), true, false);
net.setInput(blob);
auto invalid_values = net.forward(output_names.front()); // there is one output layer
```
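As a sanity check on the preprocessing: blobFromImage first subtracts the mean and then multiplies by the scale factor, so with scale 2/255 and mean 127.5 the pixel values 0, 127.5 and 255 should map to -1, 0 and 1. A sketch of just that arithmetic (not an OpenCV call):

```cpp
#include <cassert>
#include <cmath>

// Reproduces blobFromImage's per-pixel transform for scale = 2/255, mean = 127.5:
// out = (pixel - mean) * scale, mapping [0, 255] onto [-1, 1].
double normalizePixel(double pixel) {
    const double mean = 127.5;
    const double scale = 2.0 / 255.0;
    return (pixel - mean) * scale;
}
```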
Expected: values in the range (-5, 5) that change for different images, or at least an exception that inference failed.
Example image: 
Issue submission checklist
I checked answers.opencv.org, Stack Overflow, etc. and have not found a solution.