C++ API producing incorrect model metaparams #34277

@DocDriven

Description

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary):
  • TensorFlow version: 2.0 and 1.15
  • Python version:
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version:
  • GPU model and memory:

Describe the current behavior

An autoencoder model consisting only of standard Keras Dense layers is converted to a tflite model. This model can be loaded and inspected with the Python API, and the output there is consistent with the output of the visualize.py script.

Input detail:  {'name': 'input_1', 'index': 1, 'shape': array([ 1, 90], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}
Output detail:  {'name': 'Identity', 'index': 0, 'shape': array([ 1, 90], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}

When loading the very same model with the C++ API, I get ridiculously large values for the number of inputs, outputs, and nodes.

The C++ functions that were used to inspect the model are:

// `model` is the tflite::FlatBufferModel loaded from the converted file;
// BuildInterpreter is a helper in my test harness (see the linked repo).
std::unique_ptr<tflite::Interpreter> interpreter = BuildInterpreter(*model);

LOG(INFO) << "tensors size: " << interpreter->tensors_size() << std::endl;
LOG(INFO) << "nodes size: " << interpreter->nodes_size() << std::endl;
LOG(INFO) << "inputs: " << interpreter->inputs().size() << std::endl;
LOG(INFO) << "input(0) name: " << interpreter->GetInputName(0) << std::endl;
LOG(INFO) << "outputs: " << interpreter->outputs().size() << std::endl;
LOG(INFO) << "output(0) name: " << interpreter->GetOutputName(0) << std::endl;

// Dump name, byte size, type, and quantization params for every tensor.
int t_size = interpreter->tensors_size();
for (int i = 0; i < t_size; i++) {
  LOG(INFO) << i << ": " << interpreter->tensor(i)->name << ", " 
            << interpreter->tensor(i)->bytes << ", "
            << interpreter->tensor(i)->type << ", "
            << interpreter->tensor(i)->params.scale << ", "
            << interpreter->tensor(i)->params.zero_point << std::endl;
}
std::cout << "End of test" << std::endl;

This produces the following output:

tensors size: 21
nodes size: 11936128518282651046
inputs: 25344
input(0) name: Identity
outputs: 18446744073709501604
output(0) name: Identity
0: Identity, 360, 1, 0, 0
1: input_1, 360, 1, 0, 0
2: model/dense/MatMul/ReadVariableOp/transpose, 3600, 9, 0.00187181, 0
3: model/dense/MatMul_bias, 160, 1, 0, 0
4: model/dense/Relu, 160, 1, 0, 0
5: model/dense_1/MatMul/ReadVariableOp/transpose, 1600, 1, 0, 0
6: model/dense_1/MatMul_bias, 40, 1, 0, 0
7: model/dense_1/Relu, 40, 1, 0, 0
8: model/dense_2/MatMul/ReadVariableOp/transpose, 1600, 1, 0, 0
9: model/dense_2/MatMul_bias, 160, 1, 0, 0
10: model/dense_2/Relu, 160, 1, 0, 0
11: model/dense_3/MatMul/ReadVariableOp/transpose, 3600, 9, 0.00208381, 0
12: model/dense_3/MatMul_bias, 360, 1, 0, 0
13: End of test

The code used to create the tflite model under inspection can be found in my repo (https://github.com/DocDriven/tflite-cpp-api-tests); all relevant files are named simple_ae.*.

I suspect the C++ API is broken at some point, as the models themselves seem to be fine. The results are the same for TF 1.x and TF 2.0, and trying different models yields the exact same nonsensical values, independent of model size.
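For comparison, here is a minimal stand-alone loader following the stock TFLite C++ flow. This is only a sketch with a placeholder model path, not the exact harness code from the repo:

#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the flatbuffer from disk (placeholder path).
  auto model = tflite::FlatBufferModel::BuildFromFile("simple_ae.tflite");
  if (!model) {
    std::fprintf(stderr, "Failed to load model\n");
    return 1;
  }

  // Build an interpreter with the builtin op resolver.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk ||
      !interpreter) {
    std::fprintf(stderr, "Failed to build interpreter\n");
    return 1;
  }

  // Allocate tensors before reading any metadata.
  if (interpreter->AllocateTensors() != kTfLiteOk) {
    std::fprintf(stderr, "Failed to allocate tensors\n");
    return 1;
  }

  std::printf("tensors: %zu, nodes: %zu, inputs: %zu, outputs: %zu\n",
              interpreter->tensors_size(), interpreter->nodes_size(),
              interpreter->inputs().size(), interpreter->outputs().size());
  return 0;
}

If this stand-alone version prints sane values for the same .tflite file, the problem is likely somewhere in my harness or build setup; if it reproduces the garbage numbers, that would point at the library itself.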

Metadata

Labels

TF 2.0, comp:lite, stale, stat:awaiting response, type:bug
