I've gathered everything (at least I think it's everything) I need to convert my ONNX file to NB, but I'm getting an error I don't understand. I created a new 0_import_model.sh based on the examples in the Model Transcoding and Running User Guide. For now it only runs the first command, so my shell script looks like this:
#!/bin/bash
NAME=pose_densenet121_body
ACUITY_PATH=../bin/
pegasus=${ACUITY_PATH}pegasus
if [ ! -e "$pegasus" ]; then
    pegasus=${ACUITY_PATH}pegasus.py
fi

$pegasus import onnx --model ./model/${NAME}.onnx \
    --output-data ${NAME}.data --output-model ${NAME}.json \
    --inputs "input" --input-size-list "1, 3, 256, 256" \
    --outputs "cmap paf"
When I try to run it (via the convert-in-docker.sh script), I get the following output:
$ sudo ./convert-in-docker.sh
docker run -it --name npu-vim3 --rm -v /home/marc/src/khadas/workspace/aml_npu_sdk:/home/khadas/npu -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro -v /home/root:/home/root numbqq/npu-vim3
2025-02-20 21:18:46.017828: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/khadas/npu/acuity-toolkit/bin/acuitylib
2025-02-20 21:18:46.017855: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
I Namespace(import='onnx', input_dtype_list=None, input_size_list='1, 3, 256, 256', inputs='input', model='./model/pose_densenet121_body.onnx', output_data='pose_densenet121_body.data', output_model='pose_densenet121_body.json', outputs='cmap paf', size_with_batch=None, which='import')
I Start importing onnx...
WARNING: ONNX Optimizer has been moved to https://github.com/onnx/optimizer.
All further enhancements and fixes to optimizers will be done in this new repo.
The optimizer code in onnx/onnx repo will be removed in 1.9 release.
W Call onnx.optimizer.optimize fail, skip optimize
I Current ONNX Model use ir_version 6 opset_version 9
I Call acuity onnx optimize 'eliminate_option_const' success
W Call acuity onnx optimize 'froze_const_branch' fail, skip this optimize
I Call acuity onnx optimize 'froze_if' success
I Call acuity onnx optimize 'merge_sequence_construct_concat_from_sequence' success
I Call acuity onnx optimize 'merge_lrn_lowlevel_implement' success
Traceback (most recent call last):
File "pegasus.py", line 131, in <module>
File "pegasus.py", line 112, in main
File "acuitylib/app/importer/commands.py", line 245, in execute
File "acuitylib/vsi_nn.py", line 171, in load_onnx
File "acuitylib/app/importer/import_onnx.py", line 123, in run
File "acuitylib/converter/onnx/convert_onnx.py", line 61, in __init__
File "acuitylib/converter/onnx/convert_onnx.py", line 761, in _shape_inference
File "acuitylib/onnx_ir/onnx_numpy_backend/shape_inference.py", line 65, in infer_shape
File "acuitylib/onnx_ir/onnx_numpy_backend/smart_graph_engine.py", line 70, in smart_onnx_scanner
File "acuitylib/onnx_ir/onnx_numpy_backend/smart_node.py", line 48, in calc_and_assign_smart_info
File "acuitylib/onnx_ir/onnx_numpy_backend/smart_toolkit.py", line 1317, in conv_shape
File "acuitylib/onnx_ir/onnx_numpy_backend/smart_toolkit.py", line 1287, in _conv_shape
IndexError: list index out of range
[9] Failed to execute script pegasus
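Since the traceback dies in _conv_shape during shape inference, one cross-check I can think of (just a sketch, again assuming the onnx Python package is installed) is to see whether the stock ONNX checker and shape inference get through the same file:

import onnx
from onnx import checker, shape_inference

model = onnx.load("./model/pose_densenet121_body.onnx")
checker.check_model(model)                      # raises if the model itself is malformed
inferred = shape_inference.infer_shapes(model)  # standard ONNX shape inference
print("ok:", len(inferred.graph.value_info), "intermediate tensors annotated")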
The same error occurs when I try to use the KSNN Python converter:
$ ./convert --model-name torch-jit-export --platform onnx --model /home/marc/src/khadas/workspace/aml_npu_sdk/acuity-toolkit/demo/model/pose_densenet121_body.onnx --input-size-list '1,3,256,256' --inputs input --outputs "'cmap paf'" --mean-values "128 128 128 0.0078125" --quantized-dtype dynamic_fixed_point --qtype int8 --source-files /home/marc/src/khadas/workspace/aml_npu_sdk/acuity-toolkit/demo/mjdataset.txt --kboard VIM3 --print-level 0
--+ KSNN Convert tools v1.4 +--
Start import model ...
2025-02-20 21:32:36.782140: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/marc/src/khadas/workspace/aml_npu_sdk/acuity-toolkit/bin/acuitylib:/tmp/_MEIFLEoqn
2025-02-20 21:32:36.782166: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
I Namespace(import='onnx', input_dtype_list=None, input_size_list='1,3,256,256', inputs='input', model='/home/marc/src/khadas/workspace/aml_npu_sdk/acuity-toolkit/demo/model/pose_densenet121_body.onnx', output_data='Model.data', output_model='Model.json', outputs='cmap paf', size_with_batch=None, which='import')
I Start importing onnx...
WARNING: ONNX Optimizer has been moved to https://github.com/onnx/optimizer.
All further enhancements and fixes to optimizers will be done in this new repo.
The optimizer code in onnx/onnx repo will be removed in 1.9 release.
W Call onnx.optimizer.optimize fail, skip optimize
I Current ONNX Model use ir_version 6 opset_version 9
I Call acuity onnx optimize 'eliminate_option_const' success
W Call acuity onnx optimize 'froze_const_branch' fail, skip this optimize
I Call acuity onnx optimize 'froze_if' success
I Call acuity onnx optimize 'merge_sequence_construct_concat_from_sequence' success
I Call acuity onnx optimize 'merge_lrn_lowlevel_implement' success
[20200] Failed to execute script pegasus
Traceback (most recent call last):
File "pegasus.py", line 131, in <module>
File "pegasus.py", line 112, in main
File "acuitylib/app/importer/commands.py", line 245, in execute
File "acuitylib/vsi_nn.py", line 171, in load_onnx
File "acuitylib/app/importer/import_onnx.py", line 123, in run
File "acuitylib/converter/onnx/convert_onnx.py", line 61, in __init__
File "acuitylib/converter/onnx/convert_onnx.py", line 761, in _shape_inference
File "acuitylib/onnx_ir/onnx_numpy_backend/shape_inference.py", line 65, in infer_shape
File "acuitylib/onnx_ir/onnx_numpy_backend/smart_graph_engine.py", line 70, in smart_onnx_scanner
File "acuitylib/onnx_ir/onnx_numpy_backend/smart_node.py", line 48, in calc_and_assign_smart_info
File "acuitylib/onnx_ir/onnx_numpy_backend/smart_toolkit.py", line 1317, in conv_shape
File "acuitylib/onnx_ir/onnx_numpy_backend/smart_toolkit.py", line 1287, in _conv_shape
IndexError: list index out of range
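One more diagnostic idea (not from the guide, just a sketch that again assumes the onnx Python package): dumping the attributes of every Conv / ConvTranspose node, in case one of them has something unusual, e.g. a missing kernel_shape or odd pads, that the Acuity shape inference trips over:

import onnx
from onnx import helper

model = onnx.load("./model/pose_densenet121_body.onnx")
for node in model.graph.node:
    if node.op_type in ("Conv", "ConvTranspose"):
        # get_attribute_value converts each AttributeProto into a plain Python value
        attrs = {a.name: helper.get_attribute_value(a) for a in node.attribute}
        print(node.op_type, node.name or node.output[0], attrs)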
Beyond the tracebacks above, there's no further information given, so I'm unclear on what this error means or how to resolve it. Any advice would be greatly appreciated. Thanks in advance.