About model convert parameters

Which Khadas SBC do you use?

VIM3 A311D

Which system do you use? Android, Ubuntu, OOWOW or others?


Which version of system do you use? Khadas official images, self built images, or others?

Khadas official images

Please describe your issue below:

Sorry, I ran into trouble when converting the model.
Because my converted model's output is very different from the output of PyTorch and ONNX,
I am troubleshooting the problem step by step.

About the settings of mean & std :


In PyTorch, I use the function in the picture to do image pre-processing.

1. The mean is [0.485, 0.456, 0.406], so I multiply it by 255, which gives
[123.675, 116.28, 103.53]. The image array also needs to change from [0, 255] to [0, 1],
so my scale is 1/255, i.e. 0.00392156.
So the parameter should be as below, right?

--channel-mean-value "123.675 116.28 103.53 0.00392156"
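As a sanity check, the two formulations should be algebraically identical: scaling to [0, 1] and then subtracting the [0.485, 0.456, 0.406] mean is the same as subtracting 255 times the mean and then multiplying by 1/255. A small NumPy sketch (with made-up pixel values) to confirm:

```python
import numpy as np

# Hypothetical 8-bit RGB pixel values
img = np.array([[128.0, 64.0, 200.0]])

# PyTorch-style: scale to [0, 1], then subtract the mean
mean_01 = np.array([0.485, 0.456, 0.406])
pytorch_out = img / 255.0 - mean_01

# Tool-style (--channel-mean-value): subtract mean*255, then multiply by scale
mean_255 = mean_01 * 255.0          # [123.675, 116.28, 103.53]
scale = 1.0 / 255.0                 # 0.00392156...
tool_out = (img - mean_255) * scale

# The two forms agree exactly (std not applied yet)
assert np.allclose(pytorch_out, tool_out)
```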

Next is the std; I don't see any parameter for it in the documentation.
How should I deal with it?
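For reference, the full PyTorch-style normalization with std can be rewritten as a per-channel affine transform, which shows why a single scalar scale cannot express it when the std values differ per channel. A sketch using the standard ImageNet values (assumed here; your model's values may differ):

```python
import numpy as np

# Standard ImageNet normalization constants (assumption; check your model)
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

img = np.array([[128.0, 64.0, 200.0]])   # hypothetical RGB pixel values

# Full normalization: scale to [0, 1], subtract mean, divide by std
out = (img / 255.0 - mean) / std

# Equivalent affine form: (img - 255*mean) * (1 / (255*std)).
# Note the per-channel scale 1/(255*std) -- a single scalar only works
# when all three std values are equal.
out2 = (img - 255.0 * mean) * (1.0 / (255.0 * std))
assert np.allclose(out, out2)
```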

Thanks for the help!

This is the mean across the RGB color channels.

Please check this to see how you can find the mean values for your input examples;
you can get the mean values from your training set, for example.


std is the standard deviation. You can follow the same method used to find the mean of the input images, mentioned above, and apply it in a similar way to get the standard deviation as well.
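Computing both statistics over a training set can be done in a few lines. A sketch, assuming the images are already loaded as one NumPy array in channels-last layout (random data stands in for real images here):

```python
import numpy as np

# Hypothetical stack of training images, shape (N, H, W, 3), values in [0, 255]
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(16, 8, 8, 3)).astype(np.float64)

# Per-channel mean and std over every pixel of every image
mean = images.mean(axis=(0, 1, 2))   # shape (3,), one value per RGB channel
std = images.std(axis=(0, 1, 2))     # shape (3,)
print("mean:", mean, "std:", std)
```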

P.S. You have checked the transcode document, right?

Thanks for the reply.

Before that, please help me confirm whether this parameter is equivalent to the following function:

--mean-values '123.675, 116.28, 103.53 0.00392156'
mean=[123.675, 116.28, 103.53]
img = (img - mean) * 0.00392156

If the parameters converted by the tool are not the same as my own image pre-processing function, I will go back and study it again.

If it is the same, then I may have encountered a problem when running 0_import_model.sh,
because after I run inference.sh,
I get iter_0_attach_Concat_Concat_175_out0_0_out0_1_3_3.tensor,
and the inference results I checked are far different from the results of using ONNX.
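When comparing a dumped .tensor against the ONNX output, a single similarity number is easier to reason about than eyeballing values. A hedged sketch (the arrays here are placeholders; load your actual dumps instead) using cosine similarity, a common metric for this kind of comparison:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened output tensors."""
    a, b = np.ravel(a), np.ravel(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder outputs -- replace with the .tensor dump and the ONNX result
converted = np.array([0.1, 0.9, -0.2])
onnx_out = np.array([0.12, 0.88, -0.25])

# Values close to 1.0 mean the outputs point in the same direction
print(cosine_similarity(converted, onnx_out))
```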

This is my 0_import_model.sh

I have followed the documentation to confirm and directly specify the input and output nodes.
But I still can’t get similar output. Is there any way I can improve it?


Yes, it seems you are doing the centering pre-processing step.

Your dataset may not have been pre-processed before training. Have you trained your model with normalized images?

The model was not trained by me; it is the author Thohemp's 6DRepNet.
But I have studied the source code, and in both the training and inference code, the following functions are used to process the dataset.

Are you using the same method to process the inference sample image? Maybe your model just has poor performance post-quantization.