NPU Python API: ksnn v0.1 TEST Version Release (en)

Perfect - thank you! Got the demo working easily. Now to integrate it into my project.

@Frank One more question: is it possible to turn off printing these messages? It's filling my screen, so I'm losing the info I do want to see in the noise. I couldn't see the option in the API documentation. Thanks

“set input time : 0.004715919494628906
Start run graph [1] times…
Run the 1 time: 81.00ms or 81521.00us
vxProcessGraph execution time:
Total 81.00ms or 81582.00us
Average 81.58ms or 81582.00us
get ouput time: 0.0045964717864990234”
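In the meantime, a possible workaround is to redirect the process's standard output around the noisy call. This is only a sketch: it assumes the messages are written to stdout (fd 1), and the call shown in the usage comment is an assumption about the ksnn API rather than a confirmed signature.

```python
import os
import sys
from contextlib import contextmanager

@contextmanager
def suppress_stdout():
    """Temporarily send everything written to stdout (fd 1) to /dev/null."""
    sys.stdout.flush()
    saved_fd = os.dup(1)                     # keep a copy of the real stdout
    with open(os.devnull, "w") as devnull:
        os.dup2(devnull.fileno(), 1)         # point fd 1 at /dev/null
        try:
            yield
        finally:
            sys.stdout.flush()
            os.dup2(saved_fd, 1)             # restore the original stdout
            os.close(saved_fd)

# Illustrative usage (the method name and arguments are assumptions):
# with suppress_stdout():
#     outputs = model.nn_inference(img)
```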


@birty I will make printing this information optional in the next version


@Frank Thank you, much appreciated

@birty If you find any bugs or unreasonable behavior, or have suggestions while you test, please let me know as soon as possible


The code is much faster than the C++ version. Thanks for the Python edition.

I would like to add some suggestions.

  1. Drawing all the objects is unnecessary for most use cases. It would be nice if there were an option to choose which objects to draw based on the class number in the YOLO labels (see the sketch after this list).

  2. Supporting more NN models would definitely offer more flexibility.
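As a rough sketch of suggestion 1, detections could be filtered by class id after post-processing. The parallel-list result layout and the function names assumed here may not match the actual demo code:

```python
# Hypothetical filtering of YOLO detections by class id; names and result
# layout are assumptions, e.g. COCO ids where 0 = person and 2 = car.
WANTED_CLASSES = {0, 2}

def filter_detections(boxes, classes, scores, wanted=WANTED_CLASSES):
    """Keep only the detections whose class id is in `wanted`."""
    kept = [(b, c, s) for b, c, s in zip(boxes, classes, scores) if c in wanted]
    if not kept:
        return [], [], []
    kept_boxes, kept_classes, kept_scores = zip(*kept)
    return list(kept_boxes), list(kept_classes), list(kept_scores)

# Usage (names hypothetical):
# boxes, classes, scores = yolov3_post_process(outputs)
# boxes, classes, scores = filter_detections(boxes, classes, scores)
# draw(img, boxes, classes, scores)
```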

Since the code is open source, could we contribute back, for example by adding an object tracker or similar features?


@Vignesh_Raja I will try to do it.

My plan includes making different demos for different platforms.

After we release the official version, we will consider fully open-sourcing it so that more people can participate.


I've been working on getting object tracking functional; I just need some time to work on it! That will definitely be a very useful addition!


So does the current version work only for Inception and YOLO models?

It would be really great if a demo could be done for the MobileNet architecture, as it is very efficient for embedded edge devices.

@Akkisony It is just a demo. You can convert your own model.
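As a sketch, a MobileNet .tflite model could be converted with the same convert tool used elsewhere in this thread; the model file name and mean values below are placeholders that must match your own model:

./convert --model-name mobilenet --platform tflite --model mobilenet_v1_224.tflite --mean-values '127.5,127.5,127.5,127.5' --quantized-dtype asymmetric_affine --kboard VIM3 --print-level 1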

@Frank For some models the conversion from .tflite seems to work, including the model export, but it fails without an error message when trying to generate the library.

There is a single warning for one tensor for which the variables are all zero.

Is the conversion script hiding library generation messages?

@jdrew You can set the print level: --print-level 1

@jdrew

You can refer to the official v1.0 version

The print level flag is set to 1, and it seems to work, but I am not getting any output files for some models.
Keras Applications models converted to .tflite seem to work fine.
Using this command:

./convert --model-name $var --platform tflite --model $a --mean-values '127.5,127.5,127.5,127.5' --quantized-dtype asymmetric_affine --kboard VIM3 --print-level 1

Getting this as output in the final few lines:

[TRAINER]Quantization complete.
[TRAINER]Quantization complete.
End quantization…
Dump net quantize tensor table to test_model.quantize
Save net to test_model.data
W ----------------Warning(30)----------------
Done.Quantize success !!!
Start export model …
Done.Export model success !!!

Start generate library…

Afterwards the intermediate files are deleted and no output files are written. There are no error messages indicating what fails in “generate library”.
The warnings are due to some tensor outputs being zero.

Can you tell me what your model name is? There is currently a small bug in the naming: you can't use underscores. This problem will be fixed in the next version.


Can you please add this to the documentation until it is fixed? I have struggled a lot with this bug.

UPD: Removing underscores didn't help; it still fails after “Start generate library…”.
The model was named in this manner: ModelBackbone192x256.onnx

UPD2: It worked when I used lowercase letters without numbers :confused:
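Until the naming restriction is fixed, a possible workaround (a sketch; the helper below is not part of ksnn) is to copy the model to a lowercase, letters-only file name before running ./convert:

```python
import re
import shutil
from pathlib import Path

def copy_with_safe_name(model_path):
    """Copy the model to a lowercase, letters-only file name and return the new path."""
    p = Path(model_path)
    safe_stem = re.sub(r"[^a-z]", "", p.stem.lower()) or "model"
    safe_path = p.with_name(safe_stem + p.suffix)
    if safe_path != p:
        shutil.copy(p, safe_path)
    return safe_path

# Usage:
# copy_with_safe_name("ModelBackbone192x256.onnx")  # -> modelbackbonex.onnx
# then pass --model modelbackbonex.onnx --model-name modelbackbonex to ./convert
```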


@inarm I will fix it next week