Error when converting yolov5s with the KSNN conversion tool

Which Khadas SBC do you use?

VIM3

Which system do you use? Android, Ubuntu, OOWOW or others?

Ubuntu

Which version of system do you use? Khadas official images, self built images, or others?

Khadas official images

Please describe your issue below:

I am trying to convert yolov5s.pt to yolov5s.nb with the following command:

```
./convert \
    --model-name yolov5s \
    --platform pytorch \
    --model /home/liudongbo/A311d/NPU/aml/aml_npu_sdk/acuity-toolkit/python/yolov5/yolov5s.pt \
    --input-size-list '3,640,640' \
    --mean-values '103.94 116.78 123.68 0.01700102' \
    --quantized-dtype asymmetric_affine \
    --source-files ./data/dataset/dataset0.txt \
    --kboard VIM3 --print-level 1
```

However, it failed and returned the output below:

```
--+ KSNN Convert tools v1.3 +--

Start import model ...
2023-04-11 15:23:20.510531: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/liudongbo/A311d/NPU/aml/aml_npu_sdk/acuity-toolkit/bin/acuitylib:/tmp/_MEIqyRsDl
2023-04-11 15:23:20.510558: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
I Namespace(config=None, import='pytorch', input_size_list='3,640,640', inputs=None, model='/home/liudongbo/A311d/NPU/aml/aml_npu_sdk/acuity-toolkit/python/yolov5/yolov5s.pt', output_data='Model.data', output_model='Model.json', outputs=None, size_with_batch=None, which='import')
I Start importing pytorch...
[3534633] Failed to execute script pegasus
Traceback (most recent call last):
  File "pegasus.py", line 131, in <module>
  File "pegasus.py", line 112, in main
  File "acuitylib/app/importer/commands.py", line 294, in execute
  File "acuitylib/vsi_nn.py", line 242, in load_pytorch_by_onnx_backend
  File "acuitylib/onnx_ir/frontend/pytorch_frontend/pytorch_frontend.py", line 45, in __init__
  File "torch/jit/__init__.py", line 228, in load
RuntimeError: [enforce fail at inline_container.cc:208] . file not found: archive/constants.pkl
```

What can I do next? Could anybody help me, please?


@liudongbo This is the same as the error reported for your previous model: the torch version used to save the model is not consistent with the torch version bundled with the SDK. It is suggested that you convert the model to ONNX first, and then use the SDK to convert the ONNX model.
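As background on the `file not found: archive/constants.pkl` error: `torch.jit.load` expects a TorchScript archive, which is a zip file containing `archive/constants.pkl`; a plain `torch.save` weights checkpoint (like a stock `yolov5s.pt`) lacks that entry. A quick standard-library check can tell the two apart before invoking the converter. This is an illustrative sketch, not part of the SDK; the helper name and the synthetic demo files are mine:

```python
import os
import tempfile
import zipfile

def looks_like_torchscript(path):
    """Return True if `path` looks like a torch.jit.save archive.

    TorchScript archives are zip files containing 'archive/constants.pkl'.
    A plain torch.save checkpoint either is not a zip at all (older torch)
    or is a zip without constants.pkl (torch >= 1.6) -- both cases make
    torch.jit.load fail with 'file not found: archive/constants.pkl'.
    """
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return any(name.endswith("constants.pkl") for name in zf.namelist())

# Demo with synthetic files standing in for real .pt checkpoints:
with tempfile.TemporaryDirectory() as d:
    scripted = os.path.join(d, "scripted.pt")
    with zipfile.ZipFile(scripted, "w") as zf:
        zf.writestr("archive/constants.pkl", b"")   # minimal TorchScript-like zip
    plain = os.path.join(d, "weights.pt")
    with open(plain, "wb") as f:
        f.write(b"\x80\x02not-a-zip")               # non-zip checkpoint stand-in
    print(looks_like_torchscript(scripted))  # True
    print(looks_like_torchscript(plain))     # False
```

If the check returns False for your `.pt` file, exporting to ONNX (as suggested above) sidesteps the TorchScript loader entirely.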

Hi,
I have already converted yolov5s.pt to yolov5s.onnx and run the command:
```
./convert \
    --model-name yolov5s.onnx \
    --platform onnx \
    --model /home/liudongbo/A311d/NPU/aml/aml_npu_sdk/acuity-toolkit/python/yolov5/yolov5s.onnx \
    --mean-values '123.675 116.28 103.53 0.01700102' \
    --quantized-dtype asymmetric_affine \
    --source-files ./data/dataset/dataset0.txt \
    --kboard VIM3 --print-level 1
```

However, it fails with the output below:

```
--+ KSNN Convert tools v1.3 +--

Start import model ...
2023-04-12 09:25:04.837169: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/liudongbo/A311d/NPU/aml/aml_npu_sdk/acuity-toolkit/bin/acuitylib:/tmp/_MEI85B8Tu
2023-04-12 09:25:04.837247: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
I Namespace(import='onnx', input_dtype_list=None, input_size_list=None, inputs=None, model='/home/liudongbo/A311d/NPU/aml/aml_npu_sdk/acuity-toolkit/python/yolov5/yolov5s.onnx', output_data='Model.data', output_model='Model.json', outputs=None, size_with_batch=None, which='import')
I Start importing onnx...
WARNING: ONNX Optimizer has been moved to https://github.com/onnx/optimizer.
All further enhancements and fixes to optimizers will be done in this new repo.
The optimizer code in onnx/onnx repo will be removed in 1.9 release.
W Call onnx.optimizer.optimize fail, skip optimize
I Current ONNX Model use ir_version 8 opset_version 17
I Call acuity onnx optimize 'eliminate_option_const' success
/home/liudongbo/A311d/NPU/aml/aml_npu_sdk/acuity-toolkit/bin/acuitylib/acuitylib/onnx_ir/onnx_numpy_backend/ops/split.py:15: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
  if inputs[1] == '':
W Call acuity onnx optimize 'froze_const_branch' fail, skip this optimize
I Call acuity onnx optimize 'froze_if' success
I Call acuity onnx optimize 'merge_sequence_construct_concat_from_sequence' success
I Call acuity onnx optimize 'merge_lrn_lowlevel_implement' success
[1057267] Failed to execute script pegasus
Traceback (most recent call last):
  File "pegasus.py", line 131, in <module>
  File "pegasus.py", line 112, in main
  File "acuitylib/app/importer/commands.py", line 245, in execute
  File "acuitylib/vsi_nn.py", line 171, in load_onnx
  File "acuitylib/app/importer/import_onnx.py", line 123, in run
  File "acuitylib/converter/onnx/convert_onnx.py", line 61, in __init__
  File "acuitylib/converter/onnx/convert_onnx.py", line 761, in _shape_inference
  File "acuitylib/onnx_ir/onnx_numpy_backend/shape_inference.py", line 65, in infer_shape
  File "acuitylib/onnx_ir/onnx_numpy_backend/smart_graph_engine.py", line 70, in smart_onnx_scanner
  File "acuitylib/onnx_ir/onnx_numpy_backend/smart_node.py", line 48, in calc_and_assign_smart_info
  File "acuitylib/onnx_ir/onnx_numpy_backend/smart_toolkit.py", line 636, in multi_direction_broadcast_shape
ValueError: operands could not be broadcast together with shapes (1,3,80,80,0) (1,3,80,80,2)
```

An operator conversion failed here. There are two possibilities: either the ONNX version (opset) does not match what the SDK supports, or the model contains unsupported operators. There is an operator support document in the SDK documentation.
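For what it's worth, the zero in the shape `(1,3,80,80,0)` means the converter's shape inference produced an empty tensor on one input of a broadcast operator, which is typical when an operator in the exported graph was not understood. The failure itself is ordinary NumPy-style broadcasting: two trailing axes of sizes 0 and 2 cannot broadcast because neither is 1. A minimal illustration (not the SDK's code, just the broadcasting rule):

```python
import numpy as np

a = np.zeros((1, 3, 80, 80, 0))  # empty tensor: last axis has size 0
b = np.zeros((1, 3, 80, 80, 2))

try:
    a + b  # sizes 0 and 2 are incompatible: neither equals 1, so no broadcast
except ValueError as e:
    print(e)  # same complaint as the converter's error message

# With a size-1 trailing axis the broadcast succeeds:
c = np.zeros((1, 3, 80, 80, 1))
print((c + b).shape)  # (1, 3, 80, 80, 2)
```

So the fix is not in the broadcast itself but upstream: re-exporting the ONNX model with a lower opset supported by the SDK, or removing the unsupported operator, should make the zero-sized shape disappear.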