
tf_text_graph_ssd inappropriately removing preprocessing nodes in text graph #19354

@LupusSanctus

Description

System information (version)
  • OpenCV => 4.5.1-dev
  • Operating System / Platform => Ubuntu 18.04.5 64 Bit
Detailed description

OpenCV's tf_text_graph_ssd.py generates an incorrect text graph: it inappropriately removes the preprocessing nodes (del graph_def.node[i]), so the subsequent graph_def.node[1].input.append(graph_def.node[0].name) has no effect on the text graph. As a result, the second node looks like this (without preprocessing and without the input: "image_tensor" line):

node {
  name: "FeatureExtractor/.../Conv/Conv2D"
  op: "Conv2D"
  input: "FeatureExtractor/.../Conv/weights"
  attr {
...

instead of:

node {
  name: "Preprocessor/mul"
  op: "Mul"
  input: "image_tensor"
  input: "Preprocessor/mul/x"
}

It causes an error in tf_importer.cpp: Input layer not found: FeatureExtractor/.../Conv/weights in function 'connect'.
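A quick way to confirm the symptom is to scan the generated graph.pbtxt for the input: "image_tensor" edge that the importer expects. A minimal sketch using plain string processing (the embedded fragment and node names are illustrative, taken from the broken output above):

```python
import re

def first_nodes(pbtxt_text, count=2):
    """Return the name: values of the first `count` node blocks."""
    return re.findall(r'name:\s*"([^"]+)"', pbtxt_text)[:count]

def has_image_tensor_input(pbtxt_text):
    """True if any node consumes the graph input 'image_tensor'."""
    return 'input: "image_tensor"' in pbtxt_text

# Fragment shaped like the broken graph from this report:
broken = '''
node {
  name: "FeatureExtractor/Conv/Conv2D"
  op: "Conv2D"
  input: "FeatureExtractor/Conv/weights"
}
'''
print(first_nodes(broken))             # ['FeatureExtractor/Conv/Conv2D']
print(has_image_tensor_input(broken))  # False -> preprocessing was dropped
```

If has_image_tensor_input returns False for a real graph.pbtxt, the preprocessing nodes were stripped as described above.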

The problem doesn't occur when the preprocessing blocks precede the feature extractor (for example, the ready-to-use TF detection SSD Mobilenet models from http://download.tensorflow.org/models/object_detection/<model.tar.gz>):
(screenshot: ready_ssd_mobilenet)

It happens when the preprocessing blocks follow directly after the model input (for example, TF detection SSD Mobilenet models trained with model_main<_tf2> and exported with exporter_main_v2.py or export_inference_graph.py):
(screenshot: trained_exported_ssd_mobilenet)

Steps to reproduce

Steps for SSD Mobilenet V2 example:
ssd_mobilenet_v2.zip

  1. Customize pipeline.config (num_classes, num_steps, label_map_path, input_path), which can be found in the archive, and run the following command (using TF1.5):
python object_detection/model_main.py --model_dir="<YOUR_DIR_PATH>/training" --pipeline_config_path="<YOUR_PATH_TO_CONFIG>/pipeline.config"
  2. After model_main.py finishes successfully, export the inference graph by running the following script:
import os
import re
import numpy as np

model_dir = '<YOUR_DIR_PATH>/training'
# this directory for the model export should be created beforehand
output_directory = '<YOUR_DIR_PATH>/training/exported_model'

# pick the checkpoint with the highest step number
lst = os.listdir(model_dir)
lst = [l for l in lst if 'model.ckpt-' in l and '.meta' in l]
steps = np.array([int(re.findall(r'\d+', l)[0]) for l in lst])
last_model = lst[steps.argmax()].replace('.meta', '')
pipeline_fname = '<YOUR_PATH_TO_CONFIG>/pipeline.config'
last_model_path = os.path.join(model_dir, last_model)

cmd = "python export_inference_graph.py --input_type=image_tensor --pipeline_config_path={} --output_directory={} --trained_checkpoint_prefix={}".format(
    pipeline_fname, output_directory, last_model_path)

os.system(cmd)
  3. As a result, frozen_inference_graph.pb will be generated in <YOUR_DIR_PATH>/training/exported_model (it can also be found in the attached archive). Run the text graph generation:
python tf_text_graph_ssd.py --input <PATH_TO_FROZEN_GRAPH>/frozen_inference_graph.pb --config <PATH_TO_CONFIG>/pipeline.config --output <PATH_TO_GRAPH>/graph.pbtxt

The result can be compared with the SSD Mobilenet V2 text graph available in OpenCV.
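For the comparison, diffing the sequences of node names is usually enough to spot the missing Preprocessor nodes. A sketch with the standard library only (the inline fragments are placeholders for the generated and reference pbtxt contents):

```python
import re
import difflib

def node_names(pbtxt_text):
    """Extract node name: values from a text graph, in order."""
    return re.findall(r'name:\s*"([^"]+)"', pbtxt_text)

# Placeholders: read these from graph.pbtxt and OpenCV's reference
# ssd_mobilenet_v2 text graph in practice.
generated = 'node { name: "FeatureExtractor/Conv/Conv2D" }'
reference = ('node { name: "Preprocessor/mul" }\n'
             'node { name: "FeatureExtractor/Conv/Conv2D" }')

diff = list(difflib.unified_diff(node_names(generated),
                                 node_names(reference),
                                 lineterm=''))
print('\n'.join(diff))  # '+Preprocessor/mul' shows the dropped node
```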

Workaround: add an input: "image_tensor" line to the node:

node {
  name: "FeatureExtractor/.../Conv/Conv2D"
  op: "Conv2D"
  input: "image_tensor"
  input: "FeatureExtractor/.../Conv/weights"
  attr {
...

or (if built-in preprocessing is needed) add the missing preprocessor blocks before the "FeatureExtractor/.../Conv/Conv2D" node and add input: <predecessor_node_name> to the "FeatureExtractor/.../Conv/Conv2D" node.
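The first variant of the workaround can be applied textually to the generated graph.pbtxt. A minimal sketch that inserts the missing input line after the op: "Conv2D" field of the first Conv2D node (node names in the embedded fragment are illustrative; this assumes the pbtxt layout shown above):

```python
def add_image_tensor_input(pbtxt_text, op='"Conv2D"'):
    """Insert input: "image_tensor" after the first `op:` line matching op."""
    lines = pbtxt_text.splitlines(keepends=True)
    for i, line in enumerate(lines):
        if line.strip() == f'op: {op}':
            indent = line[:len(line) - len(line.lstrip())]
            # image_tensor must come before the weights input
            lines.insert(i + 1, f'{indent}input: "image_tensor"\n')
            break
    return ''.join(lines)

broken = '''node {
  name: "FeatureExtractor/Conv/Conv2D"
  op: "Conv2D"
  input: "FeatureExtractor/Conv/weights"
}
'''
patched = add_image_tensor_input(broken)
print(patched)
```

After patching, save the result back to graph.pbtxt and re-run the import.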

Issue submission checklist
  • I report the issue, it's not a question
  • I checked the problem with documentation, FAQ, open issues,
    answers.opencv.org, Stack Overflow, etc., and have not found a solution
  • I updated to latest OpenCV version and the issue is still there
  • There is reproducer code and related data files: videos, images, onnx, etc
