Convert ONNX model

Which Khadas SBC do you use?

VIM 3L

Which system do you use? Android, Ubuntu, OOWOW or others?

Ubuntu desktop

Which version of system do you use? Khadas official images, self built images, or others?

Official khadas image

Please describe your issue below:

I am trying to convert an ONNX model to run inference on my Khadas VIM3L device.

I’m trying the convert tool under aml_npu_sdk/acuity-toolkit/python/ running it like this:

./convert --platform onnx --model ~/Escritorio/khadas/20210203-embedding-model.onnx \
--model-name 20210203-embedding-model --input-size-list '112,112,3' \
--quantized-dtype asymmetric_affine --kboard VIM3L --print-level 1 \
--mean-values '128,128,128,0' --source-files ./dataset_wtf.txt --outputs '512'

Post a console log of your issue below:

I’m getting this error:


[28167] Failed to execute script pegasus
Traceback (most recent call last):
  File "pegasus.py", line 131, in <module>
  File "pegasus.py", line 112, in main
  File "acuitylib/app/importer/commands.py", line 245, in execute
  File "acuitylib/vsi_nn.py", line 171, in load_onnx
  File "acuitylib/app/importer/import_onnx.py", line 108, in run
TypeError: object of type 'NoneType' has no len()

I don’t understand the convert tool’s ONNX options very well; could you provide an example of how to use it? I’m trying to convert a modified ResNet-101 model.

@Inigo_Arribillaga Please make sure your model was exported with ONNX 1.4.1.
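A quick way to check this on the machine that exported the model (these are generic pip/python commands, not part of the SDK itself):

```shell
# Print the onnx version installed in the exporting environment;
# the SDK mentioned above expects 1.4.1.
python3 -c "import onnx; print(onnx.__version__)"

# If it differs, pin the version and re-export the model:
pip3 install onnx==1.4.1
```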

Thank you for your reply Frank, I have another issue.

I’m trying to execute 2_export_case_code.sh from acuity-toolkit/demo/, and I think I’m missing some packages on my PC, as I’m getting the following error:

gcc -Wall -std=c++0x -I. -I../bin/vcmdtools/vsimulator/include/ -I../bin/vcmdtools/vsimulator/include/CL -I../bin/vcmdtools/vsimulator/include/VX -I../bin/vcmdtools/vsimulator/include/ovxlib -I../bin/vcmdtools/vsimulator/include/jpeg -D__linux__ -DLINUX -O3 -c vnn_pre_process.c
cc1: warning: command line option ‘-std=c++11’ is valid for C++/ObjC++ but not for C
vnn_pre_process.c:15:10: fatal error: vsi_nn_pub.h: No such file or directory
 #include "vsi_nn_pub.h"
          ^~~~~~~~~~~~~~
compilation terminated.
/home/dasnano/Escritorio/khadas/makefile.linux:53: recipe for target 'vnn_pre_process.o' failed
make: *** [vnn_pre_process.o] Error 1
E Fatal model compilation error: 512
W ----------------Error(1),Warning(0)----------------
Traceback (most recent call last):
  File "pegasus.py", line 131, in <module>
  File "pegasus.py", line 116, in main
  File "acuitylib/app/exporter/commands.py", line 186, in execute
  File "acuitylib/vsi_nn.py", line 658, in export_ovxlib
  File "acuitylib/app/exporter/ovxlib_case/export_ovxlib.py", line 74, in run
  File "acuitylib/app/exporter/ovxlib_case/casegenerator.py", line 695, in generate
  File "acuitylib/app/exporter/ovxlib_case/casegenerator.py", line 654, in _gen_special_case
  File "acuitylib/app/exporter/ovxlib_case/casegenerator.py", line 541, in _gen_nb_file
  File "acuitylib/app/exporter/ovxlib_case/casegenerator.py", line 343, in _compile_linux
  File "acuitylib/acuitylog.py", line 263, in e
acuitylib.acuityerror.AcuityError: ('Fatal model compilation error: 512', 'nbg_compile')
[5358] Failed to execute script pegasus

@Inigo_Arribillaga Are you sure you are executing this script in the acuity-toolkit/demo directory?

Hello Frank, yes I am.

In the end I managed to get it to work by creating a Docker container from the Dockerfile provided in the SDK.
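For anyone hitting the same missing-header errors, a minimal sketch of that Docker route (the image tag and mount paths are placeholders, and the Dockerfile location may differ between SDK versions):

```shell
# Build an image from the Dockerfile shipped with the SDK
# (adjust the path to wherever the Dockerfile lives in your checkout).
cd aml_npu_sdk
docker build -t acuity-toolkit .

# Run the conversion scripts inside the container, mounting the SDK
# directory so the generated case code lands back on the host.
docker run -it --rm -v "$(pwd)":/workspace -w /workspace acuity-toolkit bash
```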

One more question: I need my model to work with the float16 data type. Does this mean I have to skip quantization (1_quantize_model.sh), or is there a setting to quantize the model to float16?

@Inigo_Arribillaga You can refer to the conversion documentation, which has an introduction to the floating-point types.
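As background while reading those docs, it helps to know what float16 can actually represent. This small check uses only NumPy, independent of the SDK:

```python
import numpy as np

# float16 has a 10-bit mantissa (~3 decimal digits of precision)
# and a maximum finite value of 65504.
info = np.finfo(np.float16)
print(info.bits)        # 16
print(float(info.max))  # 65504.0

# Values above that maximum overflow to infinity when cast down,
# which is worth checking for in embedding outputs.
x = np.float32(1e5).astype(np.float16)
print(np.isinf(x))      # True

# Small relative errors appear even for ordinary values.
y = np.float32(0.1).astype(np.float16)
print(abs(float(y) - 0.1) < 1e-3)  # True
```

If your embedding values stay well inside this range, float16 inference is usually safe; otherwise consider normalizing the outputs first.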