YOLOv4 transcoding

I have trained YOLOv4 on my own classes and launched it on the Khadas board. However, I get no detections at all in pictures that worked on the host before the network was converted. Please tell me how I can fix this.

@ilya71 Please show your conversion scripts here.

0_import_model.sh

#!/bin/bash

NAME=yolov4
ACUITY_PATH=../bin/acuitylib/

convert_caffe=${ACUITY_PATH}convertcaffe
convert_tf=${ACUITY_PATH}convertensorflow
convert_tflite=${ACUITY_PATH}convertflite
convert_darknet=${ACUITY_PATH}convertdarknet
convert_onnx=${ACUITY_PATH}convertonnx


#$convert_tf \
#    --tf-pb ./model/mobilenet_v1.pb \
#    --inputs input \
#    --input-size-list '224,224,3' \
#    --outputs MobilenetV1/Logits/SpatialSqueeze \
#    --net-output ${NAME}.json \
#    --data-output ${NAME}.data 
	
#$convert_caffe \
#    --caffe-model xx.prototxt   \
#	--caffe-blobs xx.caffemodel \
#    --net-output ${NAME}.json \
#    --data-output ${NAME}.data 
	
#$convert_tflite \
#    --tflite-mode  xxxx.tflite \
#    --net-output ${NAME}.json \
#    --data-output ${NAME}.data 

$convert_darknet \
--net-input yolo-obj.cfg \
--weight-input yolo-obj_2000.weights \
--net-output ${NAME}.json \
--data-output ${NAME}.data 
	
#$convert_onnx \
#    --onnx-model  xxx.onnx \
#    --net-output ${NAME}.json \
#    --data-output ${NAME}.data 
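A quick sanity check after running 0_import_model.sh can catch a silent conversion failure before quantization. This is only a sketch, not part of the original script; the file names follow the $convert_darknet call above, and the dummy outputs are generated here just so the snippet runs standalone:

```shell
NAME=yolov4

# Dummy outputs so this sketch runs standalone; in practice these
# files are produced by the $convert_darknet step above.
echo '{}' > "${NAME}.json"
printf 'dummy' > "${NAME}.data"

# Verify both Acuity outputs exist and are non-empty.
for f in "${NAME}.json" "${NAME}.data"; do
    if [ -s "$f" ]; then
        echo "ok: $f"
    else
        echo "missing or empty: $f" >&2
        exit 1
    fi
done
```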

1_quantize_model.sh (here I tried increasing the number of epochs, but it did not help. I also had to process the images in batches, because validation.txt lists about 800 images and they do not all fit in memory.)
#!/bin/bash

NAME=yolov4
ACUITY_PATH=../bin/acuitylib/

tensorzone=${ACUITY_PATH}tensorzonex

#dynamic_fixed_point-i8 asymmetric_affine-u8
$tensorzone \
    --action quantization \
    --epochs 10 \
    --dtype float32 \
    --source text \
    --source-file ./data/validation_tf.txt \
    --batch-size 16 \
    --channel-mean-value '0 0 0 256' \
    --model-input ${NAME}.json \
    --model-data ${NAME}.data \
    --model-quantize ${NAME}.quantize \
    --quantized-dtype dynamic_fixed_point-i8 \
    --quantized-rebuild
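If the full validation list does not fit in memory during quantization, calibration usually only needs a representative sample rather than all 800 images. A sketch of subsetting the list (the dummy list is generated here only so the example runs standalone; in practice you would start from ./data/validation_tf.txt and point --source-file at the subset):

```shell
# Stand-in for ./data/validation_tf.txt: 800 image paths.
seq -f './data/img_%03g.jpg' 1 800 > validation_tf.txt

# Take a random 100-line subset to use as the calibration set,
# then pass it to tensorzonex via --source-file.
shuf validation_tf.txt | head -n 100 > validation_subset.txt
wc -l < validation_subset.txt
```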

2_export_case_code.sh
#!/bin/bash

NAME=yolov4
ACUITY_PATH=../bin/acuitylib/

export_ovxlib=${ACUITY_PATH}ovxgenerator

$export_ovxlib \
    --model-input ${NAME}.json \
    --data-input ${NAME}.data \
    --reorder-channel '2 1 0' \
    --channel-mean-value '0 0 0 256' \
    --export-dtype quantized \
    --model-quantize ${NAME}.quantize \
    --optimize VIPNANOQI_PID0X88  \
    --viv-sdk ${ACUITY_PATH}vcmdtools \
    --pack-nbg-unify 

rm -rf *.h *.c .project .cproject *.vcxproj *.lib BUILD *.linux *.export.data

rm -rf nbg_unify_${NAME}

mv ../*_nbg_unify nbg_unify_${NAME}

cd nbg_unify_${NAME}

mv network_binary.nb ${NAME}.nb
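Both the quantize and export scripts pass `--channel-mean-value '0 0 0 256'`. The first three values are per-channel means and the last is a scale, so each input pixel is preprocessed as (p - mean) / scale. A quick arithmetic check of what a mid-gray pixel maps to (a sketch, not part of the original scripts):

```shell
# (pixel - mean) / scale with mean=0, scale=256:
# a pixel value of 128 becomes 128 / 256 = 0.5.
awk 'BEGIN { p = 128; printf "%.4f\n", (p - 0) / 256 }'
# prints: 0.5000
```

With these values the network expects inputs roughly in [0, 1), so a mismatch here between training-time and conversion-time preprocessing is another thing worth ruling out.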

@ilya71 Where did you get this YOLO cfg and weights file? Do you know its input image size?

Maybe you can try my cfg and weights file


Thank you, I checked with your model and everything works well, but then I don’t understand what is wrong with my model. I even lowered the threshold to 0.1 in the yolov4_process.c file, expecting to see a lot of false detections, but there still wasn’t a single one in any frame. I trained my model from the AlexeyAB cfg. Do I need to retrain with your configuration file, or what else could be the problem?

@ilya71 I am not sure. My model’s input size is 416x416. Is yours? Maybe you can check the differences between the cfgs. If you need, you can use my cfg to train your model.
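A mismatch between the input size declared in the training cfg and the one assumed at conversion/inference time is a common cause of empty detections. One way to check what a darknet cfg declares (a sketch; the sample cfg is generated here only so the snippet runs standalone — point the grep at your real yolo-obj.cfg instead):

```shell
# Minimal stand-in for a darknet cfg; replace with your yolo-obj.cfg.
printf '[net]\nbatch=64\nwidth=416\nheight=416\nchannels=3\n' > sample.cfg

# Print the declared network input size.
grep -E '^(width|height)=' sample.cfg
# prints:
# width=416
# height=416
```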


I used your configuration file and retrained the network. It trained noticeably worse, but when running on the Khadas it now detects at least something. Thank you very much; now I just need to train it properly.