Conversion and Inferencing using a Yolov5 model and KSNN

Which Khadas SBC do you use?

Khadas VIM3

Which system do you use? Android, Ubuntu, OOWOW or others?

Ubuntu 20.04

Which version of system do you use? Khadas official images, self built images, or others?

Khadas Official

Please describe your issue below:

I want to convert a standard pre-trained Yolov5s model (current release, v6.2) so that I can run inference on the A311D processor via the KSNN API on the Khadas VIM3. Below is a description of the methods I have tried.

The first and most obvious approach was to convert the model from its native PyTorch format to the format required by KSNN, using the method described in both the documentation and on the website. I downloaded the aml_npu_sdk (cloning recursively) into a WSL environment and ran the following to attempt the conversion:

$ ./convert \
    --model-name detector --platform pytorch \
    --model ./yolov5s.pt \
    --input-size-list '3,640,640' --inputs input \
    --mean-values '103.94 116.78 123.68 0.01700102' \
    --quantized-dtype asymmetric_affine \
    --source-files ./data/dataset/dataset0.txt \
    --kboard VIM3 --print-level 1

The environment has the current version of Yolov5 and its dependencies from requirements.txt installed. The conversion attempt results in the following error:

--+ KSNN Convert tools v1.3 +--
Start import model ...
2022-09-21 00:57:53.562743: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/user/aml_npu_sdk/acuity-toolkit/bin/acuitylib:/tmp/_MEImTbDtD
2022-09-21 00:57:53.562797: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
I Namespace(config=None, import='pytorch', input_size_list='3,640,640', inputs='input', model='yolov5s.pt', output_data='Model.data', output_model='Model.json', outputs=None, size_with_batch=None, which='import')
I Start importing pytorch...
[14650] Failed to execute script pegasus
Traceback (most recent call last):
  File "pegasus.py", line 131, in <module>
  File "pegasus.py", line 112, in main
  File "acuitylib/app/importer/commands.py", line 294, in execute
  File "acuitylib/vsi_nn.py", line 242, in load_pytorch_by_onnx_backend
  File "acuitylib/onnx_ir/frontend/pytorch_frontend/pytorch_frontend.py", line 45, in __init__
  File "torch/jit/__init__.py", line 228, in load
RuntimeError: [enforce fail at inline_container.cc:208] . file not found: archive/constants.pkl

The traceback shows the converter loading the model with torch.jit.load, which suggests it expects a TorchScript file rather than a raw PyTorch checkpoint, but the documentation surrounding this tool is too limited to confirm why the method fails.
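The missing archive/constants.pkl error is consistent with torch.jit.load being handed a plain torch.save checkpoint (which is what the released yolov5s.pt is) instead of a TorchScript archive. This is a sketch of what the traceback suggests, not a confirmed diagnosis; the toy module and file paths below are arbitrary stand-ins:

```python
import torch

# yolov5s.pt from the YOLOv5 releases is a plain torch.save checkpoint,
# while the traceback shows the converter calling torch.jit.load, which
# only accepts TorchScript archives. Reproduced with a toy module:
model = torch.nn.Linear(4, 2)
torch.save({"model": model.state_dict()}, "/tmp/ckpt.pt")     # checkpoint, like yolov5s.pt
torch.jit.trace(model, torch.randn(1, 4)).save("/tmp/ts.pt")  # TorchScript archive

scripted = torch.jit.load("/tmp/ts.pt")   # loads fine
try:
    torch.jit.load("/tmp/ckpt.pt")        # raises RuntimeError: no constants.pkl in the archive
    checkpoint_loaded = True
except RuntimeError:
    checkpoint_loaded = False
```

If this is the cause, pointing the converter at the yolov5s.torchscript file produced by YOLOv5's export.py (rather than yolov5s.pt) might get the pytorch platform path further.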

My second attempt (suggested on the Khadas forum) was to first convert the Yolov5s model to ONNX and then convert that to the format required by KSNN. I started with a clean WSL environment, installed Yolov5 and its dependencies, then downloaded the aml_npu_sdk. I converted the yolov5s.pt model to ONNX format using the following command from the Yolov5 documentation:

$ python export.py --weights yolov5s.pt --include torchscript onnx

I then converted the ONNX model to the format required by KSNN using the following as per the documentation:

$ ./convert \
    --model-name detector \
    --platform onnx \
    --model ./yolov5s.onnx \
    --mean-values '123.675 116.28 103.53 0.01700102' \
    --quantized-dtype asymmetric_affine \
    --source-files ./data/dataset/dataset0.txt \
    --kboard VIM3 --print-level 0

I get the following output from the conversion tool:

--+ KSNN Convert tools v1.3 +--
Start import model ...
Done.import model success !!!
Start to Generate inputmeta ...
Done.Gerate inputmeta success !!!
Start quantize ...
Done.Quantize success !!!
Start export model ...
Done.Export model success !!!
All Done.

This resulted in two files:

detector.nb (8,487 KB) and libnn_detector.so (160 KB)

Success! Or so I thought… The next step was to try to run inference on the Khadas VIM3 using the newly converted model. Starting from a clean Ubuntu 20.04 server environment (image from the Khadas website), I installed KSNN as described in the documentation. I first ran the provided ONNX inference example, which successfully detects a goldfish. I then tried to run inference with my own model using the following command (the input image is a 640x640x3 JPEG):

$ python3 resnet50.py --model ./detector.nb --library ./libnn_detector.so --picture ./002.jpg --level 2

The following is the last few lines of the output screen:

Run segment 1113. Type: 1, operations: [1114, 1114].
Segment 1113 ended.
Run segment 1114. Type: 1, operations: [1115, 1115].
Segment 1114 ended.
Run segment 1115. Type: 1, operations: [1116, 1116].
Segment 1115 ended.
layer_id: 0 layer name:network_binary_graph operation[0]:unkown operation type target:unkown operation target.
uid: 0
abs_op_id: 0
execution time: 309797 us
[ 1] TOTAL_READ_BANDWIDTH (MByte): 113.313650
[ 2] TOTAL_WRITE_BANDWIDTH (MByte): 202.045716
[ 3] AXI_READ_BANDWIDTH (MByte): 62.035364
[ 4] AXI_WRITE_BANDWIDTH (MByte): 48.191188
[ 5] DDR_READ_BANDWIDTH (MByte): 51.278287
[ 6] DDR_WRITE_BANDWIDTH (MByte): 153.854528
[ 7] GPUTOTALCYCLES: 242247682
[ 8] GPUIDLECYCLES: 157781362
VPC_ELAPSETIME: 310092
*********
Run the 1 time: 315.00ms or 315781.00us
vxProcessGraph execution time:
Total 315.00ms or 315851.00us
Average 315.85ms or 315851.00us
Done. inference time: 0.3375585079193115
resnet50.py:27: RuntimeWarning: overflow encountered in exp
  return np.exp(x)/sum(np.exp(x))
resnet50.py:27: RuntimeWarning: invalid value encountered in divide
  return np.exp(x)/sum(np.exp(x))
----Resnet50----
-----TOP 5-----
-1: 0.0
-1: 0.0
-1: 0.0
-1: 0.0
-1: 0.0

As can be seen from the output, no predictions are displayed, and the overflow in np.exp followed by the invalid value in the divide (effectively inf/inf producing NaN) shows that the softmax is being fed values that are not valid class scores. This points to an error somewhere in the model conversion process, but with the lack of documentation on the tools it is very difficult to find a solution.
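For what it is worth, the RuntimeWarning itself is just the demo's naive softmax overflowing; a numerically stable variant (subtracting the max before exponentiating) avoids the warning, though it will not fix zeroed or garbage model outputs. A minimal sketch:

```python
import numpy as np

def softmax_naive(x):
    # as in resnet50.py line 27: overflows for large logits
    return np.exp(x) / np.sum(np.exp(x))

def softmax_stable(x):
    # subtracting the max keeps every exponent <= 0, so exp cannot overflow
    z = np.exp(x - np.max(x))
    return z / np.sum(z)

logits = np.array([1000.0, 0.0, -1000.0])
with np.errstate(over="ignore", invalid="ignore"):
    naive = softmax_naive(logits)    # inf / inf -> nan
stable = softmax_stable(logits)      # ~[1.0, 0.0, 0.0]
```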

Ultimately, what I would like is a verified guide on how to convert a Yolov5 model and run inference with it via KSNN. Thank you in advance for your time.

@Ingeniero You can refer to the parameters of yolov3 for your conversion parameters, and the documents in the SDK are also applicable to KSNN.

I have already tried the --mean-values parameter used for the Darknet model ('0 0 0 0.00390625') and I obtain the same result.
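For reference, the fourth entry of --mean-values is a scale factor, and the values tried in this thread correspond to three different normalizations:

```python
# Scale factors appearing in this thread (4th --mean-values entry):
scales = {
    "1/255   (YOLOv5 [0,1] normalization)":          1 / 255,    # 0.00392157
    "1/256   (Darknet example value)":               1 / 256,    # 0.00390625
    "1/58.82 (Caffe MobileNet-style std scale)":     1 / 58.82,  # 0.01700102
}
for name, value in scales.items():
    print(f"{name}: {value:.8f}")
```

YOLOv5 itself normalizes pixels to [0, 1] with zero means, so '0 0 0 0.00392157' may be the value to try; this is an inference from YOLOv5's preprocessing, not something confirmed by the KSNN docs.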

@Ingeniero Have you checked the outputs of the model? Are they all 0?

Yes, all of the outputs are 0.
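For anyone following the thread, that check reduces to something like the sketch below; the 25200 x 85 output shape is the standard YOLOv5s COCO head at 640x640 input, and the zero-filled array here is only a stand-in for the real KSNN output:

```python
import numpy as np

# Stand-in for the raw KSNN output: a YOLOv5s head at 640x640 flattens to
# 25200 candidate boxes x (4 box coords + 1 objectness + 80 COCO classes).
raw_output = np.zeros(25200 * 85, dtype=np.float32)

predictions = raw_output.reshape(25200, 85)
all_zero = not np.any(predictions)
print("all outputs zero:", all_zero)
```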

@Ingeniero

I am not sure where the problem lies. If the results are all 0, there is probably a problem with the conversion parameters.