Apologies, I understand that these binaries are not open source, but I can't find anywhere else to ask for support with them.
After struggling with, and eventually giving up on, converting the model from torch to tf and then running the converter on the tf model, I'm now trying to convert torch to onnx and run the converter on the onnx model.
As far as I can tell, the export succeeds. I've used the appropriate versions of all the Python packages, the onnx model checker reports no problems, and the model passes output-comparison tests when run under onnxruntime.
Trying to run 0_import_model.sh gives:
I Current ONNX Model use ir_version 4 opset_version 9
D This op Conv of Conv_0 not able get out tensor Conv_0:out0 shape
I build output layer attach_Gemm_80:out0
I Try match Gemm_80:out0
I Match r_gemm_2_fc_wb [['Initializer_11', 'Initializer_10', 'Gemm_80']] [['Gemm', 'Constant_0', 'Constant_1']] to [['fullconnect']]
Traceback (most recent call last):
File "convertonnx.py", line 25, in <module>
File "convertonnx.py", line 20, in main
File "acuitylib/app/importer/import_onnx.py", line 44, in run
File "acuitylib/converter/convert_onnx.py", line 985, in match_paragraph_and_param
File "acuitylib/converter/convert_onnx.py", line 886, in _onnx_build_acu_layer
File "acuitylib/converter/convert_onnx.py", line 857, in _onnx_acu_param_assign
File "acuitylib/converter/convert_onnx.py", line 849, in _onnx_acu_blob_assign
File "acuitylib/converter/convert_onnx.py", line 842, in _onnx_parase_execute
File "<string>", line 1, in <module>
File "acuitylib/converter/convert_onnx.py", line 604, in fc_weight
File "acuitylib/converter/convert_onnx.py", line 570, in shape_pick
KeyError: 0
[26660] Failed to execute script convertonnx
Any help would be much appreciated.