Models other than Inception/YOLO/etc

I was hoping to use the VIM3 NPU to run a model of my own, which is not the same architecture as these. In particular, it’s a ResNet-18, except with 1D convolutions instead of 2D convolutions. It is not a model for image recognition, but for state recognition based on IMU data. I have not been able to find any documentation that discusses models other than those named, and in particular what kinds of arguments should go into the three example shell scripts.

Can someone tell me whether this is even possible, or point me toward some documentation?

I do have more specific questions but thought I should make sure it’s even possible first of all.

@colin-broderick The conversion scripts are valid for any model, but you need to know the relevant parameters. I don’t know which platform you used for training, so let’s use TF as an example. You can use the summarize_graph tool to get all the parameters you need. Then the second script completes a preprocessing step after the conversion; you can see the result in the log. The third script finishes the post-processing and generates the corresponding C interface code. If the model you use is not similar to our demo models, you will need to write your own C source code to call these interfaces.
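
If building TensorFlow’s summarize_graph tool is a hassle, the main parameters it reports (input and output node names) can also be recovered by walking the frozen graph yourself. Below is a minimal sketch of that logic, shown on stand-in node objects rather than a real GraphDef (`find_io_nodes` is a hypothetical helper, not part of the SDK; with TensorFlow you would pass `graph_def.node` instead):

```python
def find_io_nodes(nodes):
    """Heuristic: Placeholder ops are graph inputs; nodes that no other
    node consumes are graph outputs."""
    consumed = {inp.split(":")[0].lstrip("^")  # drop port/control markers
                for n in nodes for inp in n.input}
    inputs = [n.name for n in nodes if n.op == "Placeholder"]
    outputs = [n.name for n in nodes if n.name not in consumed]
    return inputs, outputs

# Stand-in nodes; a real run would use the .node list of a parsed GraphDef.
class Node:
    def __init__(self, name, op, input=()):
        self.name, self.op, self.input = name, op, list(input)

graph = [Node("data", "Placeholder"),
         Node("conv", "Conv2D", ["data"]),
         Node("prob", "Softmax", ["conv"])]
print(find_io_nodes(graph))  # → (['data'], ['prob'])
```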


Thanks Frank, I think I’ve managed to work out the relevant arguments. Another issue now, though: running 0_import_model.sh fails with

Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.

Note that I had to convert the model from Torch to TensorFlow via ONNX, but that seemed to work fine. I had to use tensorflow==1.14.0, onnx==1.5.0, and onnx_tf==1.3.0 to get it to work at all, though. Any tips?
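
For anyone following the same route, the conversion chain under those versions looks roughly like this. This is only a sketch: the toy Conv1d model, file names, and input shape are placeholders standing in for the real IMU network, and it assumes torch, onnx, and onnx_tf are installed at the versions above.

```python
import torch
import onnx
from onnx_tf.backend import prepare

# Toy 1D-conv model standing in for the real IMU network.
model = torch.nn.Sequential(torch.nn.Conv1d(6, 8, 3), torch.nn.ReLU()).eval()
dummy = torch.randn(1, 6, 128)  # (batch, IMU channels, window length) -- adjust

# 1. PyTorch -> ONNX
torch.onnx.export(model, dummy, "model.onnx")

# 2. ONNX -> TensorFlow graph, written out as a frozen .pb
tf_rep = prepare(onnx.load("model.onnx"))
tf_rep.export_graph("model.pb")
```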

Full output when calling 0_import_model.sh:

I Current TF Model producer version 38 min consumer version 0 bad consumer version []
I short-cut convolution_9/Squeeze:out0 - transpose_29:in0 skip concat_9
I short-cut convolution_13/Squeeze:out0 - transpose_41:in0 skip concat_13
I short-cut convolution_10/Squeeze:out0 - transpose_32:in0 skip concat_10
I short-cut convolution_7/Squeeze:out0 - transpose_23:in0 skip concat_7
I short-cut convolution_12/Squeeze:out0 - transpose_38:in0 skip concat_12
I short-cut convolution/Squeeze:out0 - transpose_2:in0 skip concat
I short-cut convolution_6/Squeeze:out0 - transpose_20:in0 skip concat_6
I short-cut convolution_4/Squeeze:out0 - transpose_14:in0 skip concat_4
I short-cut convolution_16/Squeeze:out0 - transpose_50:in0 skip concat_16
I short-cut convolution_8/Squeeze:out0 - transpose_26:in0 skip concat_8
I short-cut convolution_3/Squeeze:out0 - transpose_11:in0 skip concat_3
I short-cut convolution_1/Squeeze:out0 - transpose_5:in0 skip concat_1
I short-cut convolution_15/Squeeze:out0 - transpose_47:in0 skip concat_15
I short-cut convolution_2/Squeeze:out0 - transpose_8:in0 skip concat_2
I short-cut convolution_18/Squeeze:out0 - transpose_56:in0 skip concat_18
I short-cut convolution_11/Squeeze:out0 - transpose_35:in0 skip concat_11
I short-cut convolution_17/Squeeze:out0 - transpose_53:in0 skip concat_17
I short-cut convolution_14/Squeeze:out0 - transpose_44:in0 skip concat_14
I short-cut convolution_20/Squeeze:out0 - transpose_62:in0 skip concat_20
I short-cut convolution_5/Squeeze:out0 - transpose_17:in0 skip concat_5
I short-cut convolution_19/Squeeze:out0 - transpose_59:in0 skip concat_19
I Have 70 tensors convert to const tensor
['convolution_5/ExpandDims_1:out0', 'batchnorm_4/sub:out0', 'batchnorm_2/sub:out0', 'batchnorm_14/sub:out0', 'batchnorm_15/sub:out0', 'batchnorm_20/sub:out0', 'batchnorm_4/mul:out0', 'batchnorm_8/mul:out0', 'transpose_64:out0', 'convolution/ExpandDims_1:out0', 'batchnorm_18/mul:out0', 'convolution_4/ExpandDims_1:out0', 'batchnorm_16/mul:out0', 'batchnorm_1/mul:out0', 'batchnorm_9/mul:out0', 'batchnorm_3/sub:out0', 'convolution_17/ExpandDims_1:out0', 'batchnorm_3/mul:out0', 'batchnorm_15/mul:out0', 'mul_3:out0', 'batchnorm_11/sub:out0', 'batchnorm_6/sub:out0', 'transpose_65:out0', 'transpose_63:out0', 'batchnorm_19/sub:out0', 'batchnorm_14/mul:out0', 'convolution_2/ExpandDims_1:out0', 'add_8:out0', 'convolution_8/ExpandDims_1:out0', 'mul_1:out0', 'batchnorm_7/sub:out0', 'batchnorm_6/mul:out0', 'convolution_12/ExpandDims_1:out0', 'batchnorm_5/sub:out0', 'batchnorm_5/mul:out0', 'batchnorm/mul:out0', 'convolution_19/ExpandDims_1:out0', 'batchnorm_16/sub:out0', 'batchnorm_2/mul:out0', 'batchnorm_7/mul:out0', 'convolution_20/ExpandDims_1:out0', 'convolution_11/ExpandDims_1:out0', 'batchnorm_1/sub:out0', 'convolution_6/ExpandDims_1:out0', 'batchnorm_10/sub:out0', 'convolution_1/ExpandDims_1:out0', 'batchnorm_18/sub:out0', 'batchnorm_8/sub:out0', 'batchnorm_17/mul:out0', 'convolution_9/ExpandDims_1:out0', 'convolution_18/ExpandDims_1:out0', 'mul_5:out0', 'convolution_13/ExpandDims_1:out0', 'convolution_15/ExpandDims_1:out0', 'batchnorm_12/mul:out0', 'batchnorm_10/mul:out0', 'convolution_7/ExpandDims_1:out0', 'batchnorm_13/mul:out0', 'batchnorm_12/sub:out0', 'batchnorm_20/mul:out0', 'batchnorm_9/sub:out0', 'batchnorm/sub:out0', 'convolution_16/ExpandDims_1:out0', 'batchnorm_19/mul:out0', 'batchnorm_13/sub:out0', 'convolution_10/ExpandDims_1:out0', 'batchnorm_11/mul:out0', 'convolution_14/ExpandDims_1:out0', 'convolution_3/ExpandDims_1:out0', 'batchnorm_17/sub:out0']
Traceback (most recent call last):
  File "tensorflow/python/framework/importer.py", line 418, in import_graph_def
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'explicit_paddings' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]; attr=dilations:list(int),default=[1, 1, 1, 1]>; NodeDef: convolution = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 2, 1], use_cudnn_on_gpu=true](convolution/ExpandDims, convolution/ExpandDims_1). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "convertensorflow.py", line 62, in <module>
  File "convertensorflow.py", line 58, in main
  File "acuitylib/app/importer/import_tensorflow.py", line 82, in run
  File "acuitylib/converter/convert_tf.py", line 477, in pre_process
  File "acuitylib/converter/tensorflowloader.py", line 102, in pre_proces
  File "acuitylib/converter/tensorflowloader.py", line 627, in calc_2_const
  File "acuitylib/converter/tf_util.py", line 372, in query_tensor
  File "tensorflow/python/util/deprecation.py", line 454, in new_func
  File "tensorflow/python/framework/importer.py", line 422, in import_graph_def
ValueError: NodeDef mentions attr 'explicit_paddings' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]; attr=dilations:list(int),default=[1, 1, 1, 1]>; NodeDef: convolution = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 2, 1], use_cudnn_on_gpu=true](convolution/ExpandDims, convolution/ExpandDims_1). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[18969] Failed to execute script convertensorflow
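
If I understand the error, the graph was written by a TF version (1.14) whose Conv2D op gained an explicit_paddings attribute that the 1.10-based reader doesn’t know about. One possible workaround, which I haven’t verified against the Acuity tools, is to delete that attribute from every node before handing the .pb to the converter. Here is the logic on stand-in nodes, since parsing the real .pb needs TensorFlow installed:

```python
class Node:
    """Stand-in for a GraphDef NodeDef (only the attr map matters here)."""
    def __init__(self, name, attr):
        self.name, self.attr = name, attr

def strip_attr(nodes, key):
    """Delete `key` from each node's attr map; return how many nodes had it."""
    removed = 0
    for node in nodes:
        if key in node.attr:
            del node.attr[key]
            removed += 1
    return removed

nodes = [Node("convolution", {"explicit_paddings": [], "padding": "VALID"}),
         Node("Relu", {})]
print(strip_attr(nodes, "explicit_paddings"))  # → 1

# With TensorFlow 1.14 installed, the same logic applies to the real graph:
#   graph_def = tf.GraphDef()
#   with open("model.pb", "rb") as f:
#       graph_def.ParseFromString(f.read())
#   strip_attr(graph_def.node, "explicit_paddings")
#   with open("model_tf110.pb", "wb") as f:
#       f.write(graph_def.SerializeToString())
```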

@colin-broderick The conversion script only supports TensorFlow 1.10; the docs in the SDK have the relevant descriptions.

Thanks, I understand. But I had to use 1.14.0 to get the model conversion to work. When you say that only 1.10.0 is supported, does that mean only models produced by 1.10.0?

@colin-broderick You can use TF 1.14 to train, but when you run the conversion script, you must use TF 1.10.

Thanks Frank. I downgraded to tf 1.10.0 and tried to run the conversion script, but got the same error as quoted above.

@colin-broderick I think the next version of the SDK will support TF 1.14, but I’m not sure; it’s not something we can decide.

@colin-broderick OK, I know why: when you export the inference graph and freeze it, you already need to use TF 1.10.
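
A minimal sketch of that freeze step under TF 1.10, with a toy graph and node names standing in for the real model (this assumes tensorflow==1.10.0 is installed; the TF 2.x API is different):

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

# Toy graph standing in for the real trained model.
x = tf.placeholder(tf.float32, [None, 3], name="input")
w = tf.Variable(tf.ones([3, 2]), name="w")
y = tf.matmul(x, w, name="output")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Bake the variables into constants so the graph can be frozen to a .pb.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output"])

with open("frozen_model.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```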

I see, unfortunately that’s not something I’m able to do for this particular model. Is there any idea of a timeline for 1.14.0 support?

@colin-broderick No, it’s not up to us to decide. We need support from the chip manufacturers and the companies developing the conversion tools. We are also waiting.

Thanks for your help Frank. We’ll explore whether we can modify the model or rebuild it entirely.

@colin-broderick I don’t think you need to rebuild it all. My Inception model was trained with TF 1.14; I just exported and froze it using TF 1.10.

I’ll try that, but if I remember rightly my original problem was that tf1.10 didn’t support a particular layer (batch norm v9) used in the original torch model, so I couldn’t export it. Honestly, I’m a little fuzzy on the details now. Must get better at keeping notes! I’ll try it again and get back to you.

Thanks again, you’ve been extremely helpful.

@colin-broderick Me too. I forget now exactly how I exported it, but I am sure the training version was 1.14. You are welcome to discuss it with me at any time. This set of tools is a bit cumbersome and outdated.

Well, if you hear of any more recent tools, feel free to let me know :). I’m also trying to work with Intel’s Neural Compute Stick 1/2 and a gyrfalcontech piece, and having similar levels of success with similarly cumbersome tools.

@colin-broderick I will tell you when I hear of any more recent tools.