Custom ONNX ResNet not working

Hello,
I'm using a VIM3 Pro with KSNN and a ResNet model.

I've moved to ResNet and decided to train my own custom model (with 4 classes), using the standard ResNet mean and std:
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
I exported it to ONNX (because .pt conversion is not working) and checked the model; it works the same as the .pt version.
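For reference, the export step was roughly the following (a minimal sketch, assuming a torchvision resnet18 with its final layer replaced for 4 classes; the weight file name and loading line are illustrative, not the exact code):

import torch
import torch.nn as nn
from torchvision import models

# resnet18 with the final fully-connected layer replaced for 4 classes
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 4)
# model.load_state_dict(torch.load('resnet18_4cls.pt'))  # hypothetical trained weights
model.eval()

# Export with a fixed 1x3x224x224 input (the standard ResNet input size)
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, 'resnet18.onnx',
                  input_names=['input'], output_names=['output'],
                  opset_version=11)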

Then I started converting the model:

~$ ./convert --model-name resnet-onnx-example \
--platform onnx \
--model /home/titan/Desktop/Work/ElCub/resnet18.onnx \
--mean-values '123.675 116.28 103.53 0.01700102' \
--quantized-dtype asymmetric_affine \
--source-files /home/titan/Desktop/Work/ElCub/resnet.txt \
--kboard VIM3 \
--print-level 0 \
--iterations 200

But after I moved it to the Khadas board and started inference, the results were very bad.
The model was making a lot of mistakes.

1) What did I do wrong, and why is the model's inference so bad?
2) Scale = 1/(x * 255), where x is my std, correct? (If yes, why are you using 0.01700102?)
(Or maybe Scale = x/255, where x is my std?)
3) Mean = my_model_mean * 255?
4) If I'm moving to uint16, how do I compute the scale and mean, and what else should I change?

Waiting for your response.

Hello @Agent_kapo,

For 2 and 3: in the convert tool, the preprocess is (image - mean) * scale, where the image is in the 0-255 range. The standard ResNet mean and std assume the image has first been divided by 255, so they have to be rescaled accordingly.
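To make that concrete, here is a quick NumPy check (a sketch; mean_values and scale are just illustrative names) showing that the convert tool's form matches the standard normalization once mean and std are rescaled by 255:

import numpy as np

# Standard ResNet normalization constants (defined for 0-1 images)
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# Convert-tool form: (image - mean_values) * scale, with image in 0-255
mean_values = mean * 255       # -> 123.675, 116.28, 103.53
scale = 1.0 / (std * 255)      # one scale value per channel

img = np.random.randint(0, 256, (224, 224, 3)).astype(np.float32)

a = (img / 255.0 - mean) / std      # PyTorch-style preprocess
b = (img - mean_values) * scale     # convert-tool preprocess
print(np.allclose(a, b))            # True: the two forms are equivalent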

For 1: KSNN has a big problem in its preprocess. That is why int8 and int16 were bad before; maybe your problem is caused by this too. The new version will be released this week, and then you can try the new ResNet demo.

For 4: I am sorry, but KSNN does not support uint16.

Okay, I've understood a few things, but I'm still uncertain about the scale.
As I understood it, in your basic example you were using the standard mean and std for ResNet, as you said.

That means I should divide my std (0.229, 0.224, 0.225) by 255, e.g. 0.226/255 = 0.00088627, but you have 0.01700102 and I can't understand how you got it.

If I want to use my ResNet version on Khadas without quantization, what should I do?

Hello @Agent_kapo,

The scale is 1/(255 * 0.229).
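Applying that formula per channel gives one scale value each (a minimal sketch; the variable names are illustrative):

# scale = 1 / (255 * std) for each channel
stds = [0.229, 0.224, 0.225]
scales = [1.0 / (255.0 * s) for s in stds]
print(scales)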

Sorry, KSNN does not support running without quantization. You can try to use C++:
NPU SDK Usage [Khadas Docs]
Application Source Code [Khadas Docs]

1_quantize_model.sh quantizes the model. Skip that step and run 2_export_case_code.sh directly; then you will get a model without quantization.

Also, about this:

I'm using a uint8 model (how is it connected with int8 and int16?), and why do int8 and int16 have such a big problem with preprocessing?
I'll be waiting for the new version. It will appear in the aml_npu_sdk repo, correct?

Hello @Agent_kapo,

The previous KSNN supporter misunderstood the preprocess, so the library file uses the wrong preprocess. Because of that, all the int8 and int16 demos are wrong. Actually, uint8 is wrong as well, but by coincidence it can still get the right result.

Both the convert tool and the KSNN demos will be upgraded. I will tell you when the new version is released.


Okay, I'll be waiting for the new release.
Thank you!

Hello @Louis-Cheng-Liu,

I see a new commit in the aml_npu_sdk repo:

[screenshot of the commit]

Is that all, or will something else appear soon?

Hello @Agent_kapo,

The new KSNN has been released.
