Failed to execute script convertonnx

Apologies, I understand that these binaries are not open source, but I can’t find anywhere else to look for support on them.

After struggling with, and eventually giving up on, converting the model from torch to tf and running the converter on the tf output, I'm now trying to convert torch to onnx and run the converter on the onnx model.

As far as I can tell, the model converts correctly. I've used the appropriate versions of all the Python packages. The onnx model checker reports nothing, and the model passes comparison tests when run with onnxruntime.

Trying to run 0_import_model.sh gives:

I Current ONNX Model use ir_version 4 opset_version 9
D This op Conv of Conv_0 not able get out tensor Conv_0:out0 shape
I build output layer attach_Gemm_80:out0
I Try match Gemm_80:out0
I Match r_gemm_2_fc_wb [['Initializer_11', 'Initializer_10', 'Gemm_80']] [['Gemm', 'Constant_0', 'Constant_1']] to [['fullconnect']]
Traceback (most recent call last):
  File "convertonnx.py", line 25, in <module>
  File "convertonnx.py", line 20, in main
  File "acuitylib/app/importer/import_onnx.py", line 44, in run
  File "acuitylib/converter/convert_onnx.py", line 985, in match_paragraph_and_param
  File "acuitylib/converter/convert_onnx.py", line 886, in _onnx_build_acu_layer
  File "acuitylib/converter/convert_onnx.py", line 857, in _onnx_acu_param_assign
  File "acuitylib/converter/convert_onnx.py", line 849, in _onnx_acu_blob_assign
  File "acuitylib/converter/convert_onnx.py", line 842, in _onnx_parase_execute
  File "<string>", line 1, in <module>
  File "acuitylib/converter/convert_onnx.py", line 604, in fc_weight
  File "acuitylib/converter/convert_onnx.py", line 570, in shape_pick
KeyError: 0
[26660] Failed to execute script convertonnx

Any help would be much appreciated.

@colin-broderick Maybe you can try to convert this onnx model? It's a model that is known to convert.

https://s3.amazonaws.com/onnx-model-zoo/mobilenet/mobilenetv2-1.0/mobilenetv2-1.0.onnx

This can help you check whether your environment is set up correctly. I think you should check the environment first.


Thanks, that model converts fine, so I guess the answer is no, mine is not a convertible model. Any tips on how I can identify where the trouble starts?

@colin-broderick Maybe you can give me a link to download your model and tell me which framework you used? I will try to convert it and try to debug. I am not sure what happened or how to solve it.

I’ll have to check whether I have the authority to share the model, but in the meantime I can say where I think some problems might lie, and you can tell me what you think?

The big one is that we're using 1D convolutions, because we're working with time series data. The supported-layers document says convolutions are supported, but gives no more detail than that. Does that seem like a likely blocker? Otherwise we're using very standard layers, or their 1D variants, such as batchnorm, relu, and maxpool.

Edit: I can give you this:


It's not the original, but it is a model successfully converted from torch to onnx using the supported version of onnx.

Thanks

Edit 2:

Chopped off some Gemm layers near the end, and the conversion does get further, but breaks elsewhere.

@colin-broderick Maybe Google has restricted downloads in some regions; I'm sorry, I can't download it. The link gives a download error rather than a failed download. From my point of view, here are some of my current opinions. I can't guarantee that they are all correct.

  1. I don't think the conversion tool has any problem supporting convolution.
  2. I think the tool itself supports TF better than many other platforms. You can see this from the environment required by the development tools; support for other platforms appears to be derived from TF.
  3. For onnx, I think common models, or at least common convolutions, are supported.
  4. The model export code assumes a fixed pattern, which causes some serious problems: if the output is multidimensional, or not a simple single-path output, the generated output code cannot be used directly.
  5. The top-5 output of the third conversion script is not a reliable reference. From the source code I can see that with a CV model I used recently, the top-5 results were all 0 even though the conversion was correct; the output at the second step was normal and correct. This is the most confusing part: most users assume that a top-5 of all 0 means the conversion failed, but that is not a good way to judge.

I also have a lot of doubts about this tool. It takes time to get feedback from the chip manufacturer, so I can't give a definite answer to many questions for the time being.

Thanks for the suggestions, Frank. I had tried tf but had trouble with it, hence retrying with onnx. I'll try exploring tensorflow again.

Can you access this file?

@colin-broderick I can access this file . I will try it later.