
Get NaN when running inference on an ONNX model #20280

@wswday

Description

System information (version)
  • OpenCV => 4.4.0
  • Operating System / Platform => Windows 10 64 Bit
  • Compiler => Visual Studio 2015
Detailed description

I guess it is related to the fusing of BatchNormalization layers.
Places to check: #16887

Steps to reproduce
#include <opencv2/core.hpp>
#include <opencv2/dnn.hpp>
#include <cstdio>

int main(int argc, char** argv)
{
  cv::dnn::Net net = cv::dnn::readNetFromONNX("model.onnx");

  // 512x512 solid-gray test image
  cv::Mat img(512, 512, CV_8UC3, cv::Scalar(128, 128, 128));
  cv::Mat inputBlob = cv::dnn::blobFromImage(img, 1.0, cv::Size(), cv::Scalar(123.675, 116.28, 103.53), true);
  net.setInput(inputBlob);

  cv::Mat target = net.forward("output0");

  // Print the first 10 output values; all of them come out as NaN
  float* data = (float*)target.data;
  for (int i = 0; i < 10; ++i) {
      printf("%f\n", data[i]);
  }
  return 0;
}
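If fusion is really the trigger, a quick experiment is to turn it off before the forward pass; `cv::dnn::Net::enableFusion(false)` is part of the public dnn API. This sketch is not standalone, since it needs model.onnx and the input setup from the repro above:

```cpp
#include <opencv2/dnn.hpp>

// Re-run the repro with layer fusion disabled; if the NaNs
// disappear, the BatchNormalization-fusion hypothesis holds.
int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromONNX("model.onnx");
    net.enableFusion(false);  // skip conv+BN (and other) layer fusions
    // ... same blobFromImage / setInput / forward("output0") as above ...
    return 0;
}
```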
ONNX model

model.zip
