Converting my own YOLOv4 model

I have my own YOLOv4 model, and it works well when I run it on a PC (OpenCV + DNN framework).

I converted it for the NPU (following the docs), and the results are worse: it can't detect (or only seldom detects) some object classes.

I think I set the wrong parameters during conversion.

For example, “channel-mean-value”: in the docs I saw “channel-mean-value ‘0 0 0 256’”, but when I set this I got wrong recognition results. Then I set another value, “channel-mean-value ‘0 0 0 0.003906’”, and it works better, but still not perfectly.
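As far as I understand (my own reading of how these mean/scale parameters usually work, so treat it as an assumption), the first three numbers are per-channel means and the last one is a scale factor, applied to every input pixel roughly like this:

/* Minimal sketch of the normalization I assume "channel-mean-value"
 * describes: out = (pixel - mean) * scale.
 * With "0 0 0 0.003906" (0.003906 ~= 1/256) every byte lands in [0, 1). */
#include <stdio.h>

static float normalize(unsigned char pixel, float mean, float scale)
{
    return ((float)pixel - mean) * scale;
}

int main(void)
{
    printf("%f\n", normalize(255, 0.0f, 0.003906f)); /* ~0.996 */
    printf("%f\n", normalize(128, 0.0f, 0.003906f)); /* ~0.5   */
    return 0;
}

If that is right, the “256” in the docs may come from an older tool version that treated the last value as a divisor, while the current tool seems to multiply by it, which would explain why 0.003906 (= 1/256) behaves better.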

1. What value should I choose for the “channel-mean-value” param in 0_import_model.sh?
2. Can you give me a link to an up-to-date doc for the conversion process? The current docs are deprecated.

Right now I use this config:

#!/bin/bash

NAME=yolov4
ACUITY_PATH=../bin/

pegasus=${ACUITY_PATH}pegasus
if [ ! -e "$pegasus" ]; then
    pegasus=${ACUITY_PATH}pegasus.py
fi

#Tensorflow
#$pegasus import tensorflow  \
#               --model ./model/mobilenet_v1.pb \
#               --inputs input \
#               --outputs MobilenetV1/Predictions/Reshape_1 \
#               --input-size-list '224,224,3' \
#               --output-data ${NAME}.data \
#               --output-model ${NAME}.json

#Darknet
$pegasus import darknet \
    --model  /home/brain/Downloads/DNN-Object-Detection-YOLOv3/data/${NAME}.cfg \
    --weights /home/brain/Downloads/DNN-Object-Detection-YOLOv3/data/${NAME}.weights \
    --output-model ${NAME}.json \
    --output-data ${NAME}.data

#generate inputmeta --source-file dataset.txt
#runs after the import step, which produces ${NAME}.json
$pegasus generate inputmeta \
        --model ${NAME}.json \
        --input-meta-output ${NAME}_inputmeta.yml \
        --channel-mean-value "0 0 0 0.003906" \
        --source-file dataset.txt

But the model's results are bad. :(

My yolov4 cfg link
My yolov4 names link
My yolov4 weights link

P.S.
VIM3 Pro
Ubuntu

This is the correct parameter.

I found a strange thing in yolov4_process.c:

void yolov4_preprocess(input_image_t imageData, uint8_t *ptr)
{
    int nn_width, nn_height, channels, tmpdata;
    int offset = 0, i = 0, j = 0;
    uint8_t *src = (uint8_t *)imageData.data;

    model_getsize(&nn_width, &nn_height, &channels);
    memset(ptr, 0, nn_width * nn_height * channels * sizeof(uint8_t));

...

    /* Repack interleaved RGB (HWC) into planar BGR (CHW). */
    for (i = 0; i < channels; i++) {
        offset = nn_width * nn_height * (channels - 1 - i);  // prepare BGR input data
        for (j = 0; j < nn_width * nn_height; j++) {
            tmpdata = (src[j * channels + i] >> 1);  // halve each byte: 0..255 -> 0..127
            ptr[j + offset] = (uint8_t)((tmpdata > 127) ? 127 : (tmpdata < -128) ? -128 : tmpdata);
        }
    }
    return;
}

Why do we divide tmpdata by 2 (src[j * channels + i] >> 1)?

I’m still getting different model prediction results on different devices. This is very bad, so I’m trying to understand your code.

@Ribamuka The original data is 0~255, but the input data of the model is -128~127.
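Putting that together with the 0.003906 scale (my own reasoning, not something from the docs): the normalized float input is pixel/256, and if the int8 input tensor is quantized with a step of 1/128, the quantized value is pixel/256 × 128 = pixel/2, which is exactly the >>1 in the preprocessing:

/* Sketch: why >>1 is consistent with scale 1/256, ASSUMING the int8
 * input tensor uses a quantization step of 1/128 (my assumption). */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t pixel = 200;
    float normalized = pixel / 256.0f;              /* channel-mean-value scale 0.003906 */
    int8_t quantized = (int8_t)(normalized * 128);  /* assumed quantizer step 1/128 */
    printf("%d vs %d\n", quantized, pixel >> 1);    /* both print 100 */
    return 0;
}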