TensorFlow model conversion considerations

Recently, many users have asked questions on the forum about converting TensorFlow models, with SSD as a typical example. Here are some points worth noting.

  1. About color channels
  • Most TensorFlow models are RGB models, i.e. the channel order is 0 1 2.
  • But a small number of models are BGR models, which require 2 1 0 in the parameter settings.
  • The yolo model in the demo is BGR, so when following the document, pay attention to the color channel order of your own model.
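The channel-order setting above can be sketched in a few lines. This is only an illustration of what 0 1 2 versus 2 1 0 means (the helper name is made up, not part of the conversion tool):

```python
def reorder_channels(pixel, order):
    """pixel: a 3-tuple of channel values; order: e.g. (0, 1, 2) or (2, 1, 0)."""
    return tuple(pixel[i] for i in order)

rgb_pixel = (255, 128, 0)                      # R, G, B
print(reorder_channels(rgb_pixel, (0, 1, 2)))  # (255, 128, 0) -- unchanged
print(reorder_channels(rgb_pixel, (2, 1, 0)))  # (0, 128, 255) -- reversed to BGR
```

So 0 1 2 leaves an RGB input untouched, while 2 1 0 swaps the first and last channels.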
  2. About the channel-mean-value parameter
  • This depends on the preprocessing the model was trained with.
  • If the input is normalized to [0,1], the value of channel-mean-value is 0 0 0 256, e.g. the yolo model in the demo.
  • If the input needs to be normalized to [-1,1], the value of channel-mean-value is 128 128 128 128, e.g. the MobileNet model.
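A minimal sketch of how channel-mean-value is commonly interpreted, assuming the converter applies (value - mean) / scale per channel, where the first three numbers are per-channel means and the fourth is the scale (the helper is illustrative, not the tool's actual code):

```python
def normalize(pixel, channel_mean_value):
    """pixel: (r, g, b) values in 0..255; channel_mean_value: four numbers
    (mean0, mean1, mean2, scale). Each channel becomes (value - mean) / scale."""
    m0, m1, m2, scale = channel_mean_value
    return tuple((v - m) / scale for v, m in zip(pixel, (m0, m1, m2)))

# 0 0 0 256 maps 0..255 into [0, 1)  (yolo demo style)
print(normalize((0, 128, 255), (0, 0, 0, 256)))
# 128 128 128 128 maps 0..255 into [-1, 1)  (MobileNet style)
print(normalize((0, 128, 255), (128, 128, 128, 128)))
```

This is why the two presets differ: with means of 0 and scale 256 the result stays in [0,1), while subtracting 128 and dividing by 128 centers the range around zero.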
  3. About the 0_import_model.sh script
    parameters: --input-size-list and --outputs
  • For these two parameters, you can use TensorFlow's summarize_graph tool. For how to install and use summarize_graph, refer to
    https://github.com/tensorflow/models/tree/master/research/slim
  • Inspect your model file with this tool and you will get both parameters. Set them according to the model, and you will not get errors when using it.
    Under normal circumstances, the output looks like this:
    $ bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=mobilenet_v1_1.0_224_frozen.pb
    Found 1 possible inputs: (name=input, type=float(1), shape=[?,224,224,3])
    No variables spotted.
    Found 1 possible outputs: (name=MobilenetV1/Predictions/Reshape_1, op=Reshape)
    Found 4254891 (4.25M) const parameters, 0 (0) variable parameters, and 0 control_edges
    Op types used: 138 Const, 138 Identity, 27 FusedBatchNorm, 27 Relu6, 15 Conv2D, 13 DepthwiseConv2dNative, 2 Reshape, 1 AvgPool, 1 BiasAdd, 1 Placeholder, 1 Shape, 1 Softmax, 1 Squeeze
    To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
    bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/home/khadas/tmp/mobilenet_v1_1.0_224_frozen.pb --show_flops --input_layer=input --input_layer_type=float --input_layer_shape=-1,224,224,3 --output_layer=MobilenetV1/Predictions/Reshape_1
    
  • But if your model file is incorrect, was generated as some other special kind of model, or is not frozen, you will see something like the following. Such a model is unusable.
    $ bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=frozen_inference_graph.pb
    Found 1 possible inputs: (name=image_tensor, type=uint8(4), shape=[?,?,?,3]) 
    No variables spotted.
    Found 4 possible outputs: (name=detection_boxes, op=Identity) (name=detection_scores, 
    op=Identity) (name=num_detections, op=Identity) (name=detection_classes, op=Identity) 
    Found 6818217 (6.82M) const parameters, 0 (0) variable parameters, and 1540 control_edges
    Op types used: 1856 Const, 549 Gather, 452 Minimum, 360 Maximum, 305 Reshape, 197 Sub, 185 
    Cast, 183 Greater, 180 Split, 180 Where, 140 Add, 135 Mul, 121 StridedSlice, 117 Shape, 115 Pack, 
    108 ConcatV2, 94 Unpack, 93 Slice, 92 ZerosLike, 92 Squeeze, 90 NonMaxSuppressionV2, 35 
    Relu6, 34 Conv2D, 29 Switch, 28 Identity, 26 Enter, 15 RealDiv, 14 Merge, 13 Tile, 13 
    DepthwiseConv2dNative, 12 Range, 12 BiasAdd, 11 TensorArrayV3, 9 ExpandDims, 8 NextIteration, 
    6 TensorArrayWriteV3, 6 TensorArraySizeV3, 6 TensorArrayGatherV3, 6 Exit, 5 
    TensorArrayScatterV3, 5 TensorArrayReadV3, 4 Fill, 3 Assert, 3 Transpose, 2 LoopCond, 2 Less, 2 
    Exp, 2 Equal, 1 Size, 1 Sigmoid, 1 ResizeBilinear, 1 Placeholder, 1 TopKV2
    To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
    bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/home/khadas/tmp/ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb --show_flops --input_layer=image_tensor --input_layer_type=uint8 --input_layer_shape=-1,-1,-1,3 --output_layer=detection_boxes,detection_scores,num_detections,detection_classes
    
  • If you see more than one output, or shape=[?,?,?,3], this is an unusable model file.
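The rule in the bullet above can be turned into a quick check on the summarize_graph output. This is a hypothetical helper, not part of the SDK; it simply encodes "more than one output, or a fully dynamic input shape, means unusable":

```python
import re

def looks_convertible(summary_text):
    """Return False if summarize_graph output reports more than one
    possible output, or a fully dynamic input shape like [?,?,?,3]."""
    m = re.search(r"Found (\d+) possible outputs", summary_text)
    if m and int(m.group(1)) > 1:
        return False
    if "shape=[?,?,?,3]" in summary_text:
        return False
    return True

good = ("Found 1 possible inputs: (name=input, type=float(1), shape=[?,224,224,3])\n"
        "Found 1 possible outputs: (name=MobilenetV1/Predictions/Reshape_1, op=Reshape)")
bad = ("Found 1 possible inputs: (name=image_tensor, type=uint8(4), shape=[?,?,?,3])\n"
       "Found 4 possible outputs: (name=detection_boxes, op=Identity)")
print(looks_convertible(good))  # True
print(looks_convertible(bad))   # False
```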

4. About replacing the models in the demo

  • If you are only swapping in a yolo model trained on different samples, modify it according to the steps in the document, generate the so file, and then replace it together with the nb file directly.
  • If it is another model, such as an SSD model, then when generating the so file, in addition to modifying according to the model file, you must also modify the process file according to the model's network.

Hi:
But if your model file is incorrect, was generated as some other special kind of model, or is not frozen, you will see something like the following. Such a model is unusable. …
Does that mean the TensorFlow implementation of ssd_mobilenet is simply not supported?

@Sword Regarding the SSD model: the tool does not support the last layer of this model. If you want to use SSD, you have to delete the last layer manually and implement that layer yourself in the code after conversion.

Hi Frank:
In the thread https://forum.khadas.com/t/npu-mobilenet-ssd-v2-demo-and-source-code/5989/35 I raised some questions, and I also saw you there. Before converting the model, the thread author runs this step: python3 /usr/local/lib/python3.6/dist-packages/object_detection-0.1-py3.6.egg/object_detection/export_tflite_ssd_graph.py --pipeline_config_path pipeline.config --trained_checkpoint_prefix model.ckpt --output_directory out --add_postprocessing_op=false. Is that exactly to remove the post-processing layer you mentioned?
Also, in 1_quantize_model.sh, is the parameter --channel-mean-value '128 128 128 128' also used for ssd mobilenet? In the TensorFlow implementation I have not found any mean subtraction or normalization. How should these parameters be set?

Can the --input-size-list parameter be set so that the input h, w dimensions are variable?

@mumumu It is set according to the model; it is not variable.

The model I use does not have a fixed input image size. Is there any way to work around this? :pray:

@mumumu There is no way; only a single fixed input size is supported.