Converted yolov3 files/model but not working

I used the following scripts to perform the conversion as described on the Khadas documentation page (convert and call your own model through NPU | Khadas Documentation). After conversion, when I copy the mentioned files into the relevant folders and recompile the library and the application, it does not work. Can you please check where the problem is?

0_import_model.sh

#!/bin/bash

NAME=yolov3
ACUITY_PATH=../bin/

convert_caffe=${ACUITY_PATH}convertcaffe
convert_tf=${ACUITY_PATH}convertensorflow
convert_tflite=${ACUITY_PATH}convertflit
convert_darknet=${ACUITY_PATH}convertdarknet
convert_onnx=${ACUITY_PATH}convertonnx
convert_keras=${ACUITY_PATH}convertkeras
convert_pytorch=${ACUITY_PATH}convertpytorch

$convert_darknet \
    --net-input /home/sajjad/sajjad/darknet/cfg/yolov3.cfg \
    --weight-input /home/sajjad/sajjad/darknet/yolov3.weights \
    --net-output ${NAME}.json \
    --data-output ${NAME}.data

1_quantize_model.sh

#!/bin/bash

NAME=yolov3
ACUITY_PATH=../bin/

tensorzone=${ACUITY_PATH}tensorzonex

$tensorzone \
    --action quantization \
    --dtype float32 \
    --source text \
    --source-file data/validation_tf.txt \
    --channel-mean-value '0 0 0 256' \
    --model-input ${NAME}.json \
    --model-data ${NAME}.data \
    --quantized-dtype dynamic_fixed_point-i8 \
    --quantized-rebuild
#    --batch-size 2 \
#    --epochs 5

2_export_case_code.sh

#!/bin/bash

NAME=yolov3
ACUITY_PATH=../bin/

export_ovxlib=${ACUITY_PATH}ovxgenerator

$export_ovxlib \
    --model-input ${NAME}.json \
    --data-input ${NAME}.data \
    --model-quantize ${NAME}.quantize \
    --reorder-channel '2 1 0' \
    --channel-mean-value '0 0 0 256' \
    --export-dtype quantized \
    --optimize VIPNANOQI_PID0X88  \
    --viv-sdk ${ACUITY_PATH}vcmdtools \
    --pack-nbg-unify

#Note:
#        --optimize VIPNANOQI_PID0XB9
#       when exporting an nbg case for different platforms, the parameters are different.
#   you can set VIPNANOQI_PID0X7D       VIPNANOQI_PID0X88       VIPNANOQI_PID0X99
#                               VIPNANOQI_PID0XA1       VIPNANOQI_PID0XB9       VIPNANOQI_PID0XBE       VIPNANOQI_PID0XE8
#       Refer to section 3.4 (Step 3) of the <Model_Transcoding and Running User Guide_V0.8> document


rm -rf nbg_unify_${NAME}

mv ../*_nbg_unify nbg_unify_${NAME}

cd nbg_unify_${NAME}

mv network_binary.nb ${NAME}_88.nb

cd ..

#save normal case demo export.data
mkdir -p ${NAME}_normal_case_demo
mv  *.h *.c .project .cproject *.vcxproj BUILD *.linux *.export.data ${NAME}_normal_case_demo

The error is:

khadas@Khadas-teco:~/hussain/aml_npu_demo_binaries/detect_demo_picture$ ./detect_demo_x11 -m 2 -p ./1080p.bmp
W Detect_api:[det_set_log_level:19]Set log level=1
W Detect_api:[det_set_log_level:21]output_format not support Imperfect, default to DET_LOG_TERMINAL
W Detect_api:[det_set_log_level:26]Not exist VSI_NN_LOG_LEVEL, Setenv set_vsi_log_error_level
det_set_log_config Debug
E [model_create:64]CHECK STATUS(-1:A generic error code, used when no other describes the error.)
E Detect_api:[det_set_model:225]Model_create fail, file_path=nn_data, dev_type=1
det_set_model fail. ret=-4

Please guide! Thanks in advance!

@Frank @numbqq

@enggsajjad Are your yolov3 cfg and weights the official ones from darknet?

Hi, thanks for the response. I followed the steps in YOLO: Real-Time Object Detection. How can I check whether the cfg and weights are the official darknet ones? @Frank @numbqq

@enggsajjad The official models should work. Did you use the cfg file for the 80 categories?
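For reference, a quick heuristic for recognizing the stock 80-class COCO yolov3 cfg: it has three `[yolo]` heads, `classes=80` in each head, and `filters=255` (= 3 × (80 + 5)) on the convolutional layer feeding each head. A minimal sketch (the exact counts assume the unmodified upstream cfg):

```shell
# Heuristic check that a darknet cfg is the stock 80-class yolov3 config:
# three [yolo] heads, classes=80 in each, filters=255 before each head.
check_coco_cfg() {
    cfg="$1"
    heads=$(grep -c '^\[yolo\]' "$cfg")
    classes=$(grep -c '^classes=80' "$cfg")
    filters=$(grep -c '^filters=255' "$cfg")
    if [ "$heads" -eq 3 ] && [ "$classes" -eq 3 ] && [ "$filters" -eq 3 ]; then
        echo "looks like the stock 80-class yolov3 cfg"
    else
        echo "modified cfg: heads=$heads classes=$classes filters=$filters"
    fi
}
```

Run it against the file passed to `--net-input`, e.g. `check_coco_cfg yolov3.cfg`.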

Hi @Frank, sorry, I am new to darknet/yolov3, can you please explain what is meant by:

Did you use the cfg file of the 80 categories?
I just followed the steps mentioned in the tutorial and verified them many times, but still could not successfully run yolov3. When I restore the yolov3_88.nb from the original repository, it works again. Please guide!
Regards,

@enggsajjad Maybe you can try with these:

https://pjreddie.com/media/files/yolov3.weights

So you mean I should change yolov3.cfg and keep yolov3.weights the same? @Frank

I set the weights and cfg as you mentioned, regenerated the .nb file, and copied it to the Khadas board. It still gives the following error:

khadas@Khadas-teco:~/hussain/aml_npu_appwog/aml_npu_app/detect_library/sample_demo_x11B/bin_r_cv4$ ./detect_demo_x11 -m 2 -p ../1080p.bmp       
det_set_log_config Debug
E [model_create:64]CHECK STATUS(-1:A generic error code, used when no other describes the error.)
det_set_model fail. ret=-1

@Frank

@enggsajjad After you replaced the nb file and the library, did you rerun sudo ./INSTALL?

Yes, I did. But the error still persists. Any solutions? I don’t know what I am missing. Thanks for the help!

One question: should I also add --reorder-channel '2 1 0' in 1_quantize_model.sh?
I tried with and without this option; the result is the same either way, i.e. still no result.

@Frank

@enggsajjad You can’t see where the problem is from the log. You may need to add print statements to narrow down the problem. Most of the previous problems were caused by not using the official cfg and weights files. Also, are you sure your system, libraries, and conversion tools are at the latest version?
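Besides printf statements, the demo's own startup log mentions a `VSI_NN_LOG_LEVEL` environment variable, which suggests ovxlib's verbosity can be raised before running the demo. A sketch (the 0..5 value range is an assumption; only the variable name comes from the log above):

```shell
# Raise ovxlib's log verbosity so the vsi_nn_* calls report more detail
# than the generic CHECK STATUS(-1) error. The value range 0..5 is an
# assumption; the variable name is taken from the demo's startup warning.
export VSI_NN_LOG_LEVEL=5
# Then rerun the demo on the board:
#   ./detect_demo_x11 -m 2 -p ./1080p.bmp
```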

I don’t know how to check all these things. I tried cloning a new SDK and upgrading the Khadas system, but the error is still the same. I do not know how to solve it. @Frank

Hi @Frank
I have added some printf statements in the following function in yolo_v3.c, and this one is never printed:

printf("\nmodel_create5\n"); //Sajjad

I guess, therefore, there is some issue with the call status = vsi_nn_VerifyGraph(g_graph);. Do you have any idea how to resolve it?

det_status_t model_create(const char * data_file_path, dev_type type)
{
	det_status_t ret = DET_STATUS_ERROR;
	vsi_status status = VSI_SUCCESS;
	char model_path[48];
	printf("\nmodel_create1\n"); //Sajjad
	switch (type) {
		case DEV_REVA:
			sprintf(model_path, "%s%s", data_file_path, "/yolov3_7d.nb");
			printf("\nmodel_create1.1\n"); //Sajjad
			break;
		case DEV_REVB:
			sprintf(model_path, "%s%s", data_file_path, "/yolov3_88.nb");
			printf("\nmodel_create1.2\n"); //Sajjad
			break;
		case DEV_MS1:
			sprintf(model_path, "%s%s", data_file_path, "/yolov3_99.nb");
			printf("\nmodel_create1.3\n"); //Sajjad
			break;
		default:
			break;
	}
	printf("\nmodel_create2\n"); //Sajjad

	g_graph = vnn_CreateYolov3(model_path, NULL,
			vnn_GetPrePorcessMap(), vnn_GetPrePorcessMapCount(),
			vnn_GetPostPorcessMap(), vnn_GetPostPorcessMapCount());
	TEST_CHECK_PTR(g_graph, exit);
	printf("\nmodel_create3\n"); //Sajjad

	status = vsi_nn_VerifyGraph(g_graph);
	printf("\nmodel_create4\n"); //Sajjad

	TEST_CHECK_STATUS(status, exit);
	printf("\nmodel_create5\n"); //Sajjad

	ret = DET_STATUS_OK;
exit:
	return ret;
}

Regards

@enggsajjad If the verification fails, it means that your .so file and your .nb file do not match.

You mean libnn_yolo_v3.so? After generation I copy the *.c and *.h files (as mentioned in the documentation page) into detect_yolo_v3, generate the .so file, and then copy this .so and the .nb file into the detect_demo_picture lib and nn_data folders. I don’t know what is wrong here.

@enggsajjad If you are using the official cfg and weights, then the files you converted should be the same as those in the demo repository. Have you compared the c files? Is there a difference?
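One quick comparison is to pull the tool-version banners out of the generated files on both sides: the generator stamps a "Generated by ACUITY x.y.z" / "Match ovxlib x.y.z" header into its output, so a toolchain mismatch shows up immediately. A sketch (the grep pattern is an assumption about the banner format):

```shell
# Print any ACUITY/ovxlib version strings embedded in generated case
# code (or a compiled library, via -a for binary files), so the
# converted files can be compared against the demo repository's.
show_versions() {
    grep -aoE '(ACUITY|ovxlib) [0-9][0-9._]*' "$@" | sort -u
}
# Example:
#   show_versions vnn_yolov3.c libnn_yolo_v3.so
```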

@Frank I generated the diff for both cfg/weights pairs (the ones you suggested and the ones on the darknet website, YOLO: Real-Time Object Detection).
Using your suggested cfg and weights:

khadas@Khadas-teco:~$ diff hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3/vnn_yolov3.c hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3A/vnn_yolov3.c
2,3c2,3
< *   Generated by ACUITY 6.0.12
< *   Match ovxlib 1.1.34
---
> *   Generated by ACUITY 5.21.1_0702
> *   Match ovxlib 1.1.30
31c31
<         memset( _attr.size, 0, VSI_NN_MAX_DIM_NUM * sizeof(vsi_size_t));\
---
>         memset( _attr.size, 0, VSI_NN_MAX_DIM_NUM * sizeof(uint32_t));\
151,152d150
<     vsi_bool                inference_with_nbg = FALSE;
<     char*                   pos = NULL;
163d160
<     memset( &node, 0, sizeof( vsi_nn_node_t * ) * NET_NODE_NUM );
172,177d168
<     pos = strstr(data_file_name, ".nb");
<     if( pos && strcmp(pos, ".nb") == 0 )
<     {
<         inference_with_nbg = TRUE;
<     }
<
211,212d201
<     if( !inference_with_nbg )
<     {
228,235d216
<     }
<     else
<     {
<     NEW_VXNODE(node[0], VSI_NN_OP_NBG, 1, 3, 0);
<     node[0]->nn_param.nbg.type = VSI_NN_NBG_FILE;
<     node[0]->nn_param.nbg.url = data_file_name;
<
<     }
283,284d263
<     if( !inference_with_nbg )
<     {
300,308d278
<     }
<     else
<     {
<     node[0]->input.tensors[0] = norm_tensor[0];
<     node[0]->output.tensors[0] = norm_tensor[1];
<     node[0]->output.tensors[1] = norm_tensor[2];
<     node[0]->output.tensors[2] = norm_tensor[3];
<
<     }
khadas@Khadas-teco:~$ diff hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3/include/vnn_pre_process.h hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3A/include/vnn_pre_process.h
2,3c2,3
< *   Generated by ACUITY 5.11.0
< *   Match ovxlib 1.1.21
---
> *   Generated by ACUITY 5.21.1_0702
> *   Match ovxlib 1.1.30
78c78
< const vsi_nn_preprocess_map_element_t * vnn_GetPrePorcessMap();
---
> const vsi_nn_preprocess_map_element_t * vnn_GetPreProcessMap();
80c80
< uint32_t vnn_GetPrePorcessMapCount();
---
> uint32_t vnn_GetPreProcessMapCount();
84a85
>
khadas@Khadas-teco:~$ diff hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3/include/vnn_post_process.h hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3A/include/vnn_post_process.h
2,3c2,3
< *   Generated by ACUITY 5.11.0
< *   Match ovxlib 1.1.21
---
> *   Generated by ACUITY 5.21.1_0702
> *   Match ovxlib 1.1.30
16c16
< const vsi_nn_postprocess_map_element_t * vnn_GetPostPorcessMap();
---
> const vsi_nn_postprocess_map_element_t * vnn_GetPostProcessMap();
18c18
< uint32_t vnn_GetPostPorcessMapCount();
---
> uint32_t vnn_GetPostProcessMapCount();
khadas@Khadas-teco:~$ diff hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3/include/vnn_yolov3.h hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3A/include/vnn_yolov3.h
2,3c2,3
< *   Generated by ACUITY 6.0.12
< *   Match ovxlib 1.1.34
---
> *   Generated by ACUITY 5.21.1_0702
> *   Match ovxlib 1.1.30
20c20
< #define VNN_VERSION_PATCH 34
---
> #define VNN_VERSION_PATCH 30
khadas@Khadas-teco:~$

Using the cfg and weights from the darknet website:

khadas@Khadas-teco:~$ diff hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3/vnn_yolov3.c hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3A/vnn_yolov3.c
2,3c2,3
< *   Generated by ACUITY 6.0.12
< *   Match ovxlib 1.1.34
---
> *   Generated by ACUITY 5.21.1_0702
> *   Match ovxlib 1.1.30
31c31
<         memset( _attr.size, 0, VSI_NN_MAX_DIM_NUM * sizeof(vsi_size_t));\
---
>         memset( _attr.size, 0, VSI_NN_MAX_DIM_NUM * sizeof(uint32_t));\
151,152d150
<     vsi_bool                inference_with_nbg = FALSE;
<     char*                   pos = NULL;
163d160
<     memset( &node, 0, sizeof( vsi_nn_node_t * ) * NET_NODE_NUM );
172,177d168
<     pos = strstr(data_file_name, ".nb");
<     if( pos && strcmp(pos, ".nb") == 0 )
<     {
<         inference_with_nbg = TRUE;
<     }
<
211,212d201
<     if( !inference_with_nbg )
<     {
219,222c208,211
<       input     - [416, 416, 3, 1]
<       output    - [13, 13, 255, 1]
<                   [26, 26, 255, 1]
<                   [52, 52, 255, 1]
---
>       input     - [608, 608, 3, 1]
>       output    - [19, 19, 255, 1]
>                   [38, 38, 255, 1]
>                   [76, 76, 255, 1]
228,235d216
<     }
<     else
<     {
<     NEW_VXNODE(node[0], VSI_NN_OP_NBG, 1, 3, 0);
<     node[0]->nn_param.nbg.type = VSI_NN_NBG_FILE;
<     node[0]->nn_param.nbg.url = data_file_name;
<
<     }
242,243c223,224
<     attr.size[0] = 416;
<     attr.size[1] = 416;
---
>     attr.size[0] = 608;
>     attr.size[1] = 608;
252,253c233,234
<     attr.size[0] = 13;
<     attr.size[1] = 13;
---
>     attr.size[0] = 19;
>     attr.size[1] = 19;
262,263c243,244
<     attr.size[0] = 26;
<     attr.size[1] = 26;
---
>     attr.size[0] = 38;
>     attr.size[1] = 38;
272,273c253,254
<     attr.size[0] = 52;
<     attr.size[1] = 52;
---
>     attr.size[0] = 76;
>     attr.size[1] = 76;
283,284d263
<     if( !inference_with_nbg )
<     {
300,308d278
<     }
<     else
<     {
<     node[0]->input.tensors[0] = norm_tensor[0];
<     node[0]->output.tensors[0] = norm_tensor[1];
<     node[0]->output.tensors[1] = norm_tensor[2];
<     node[0]->output.tensors[2] = norm_tensor[3];
<
<     }
khadas@Khadas-teco:~$ diff hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3/include/vnn_pre_process.h hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3A/include/vnn_pre_process.h
2,3c2,3
< *   Generated by ACUITY 5.11.0
< *   Match ovxlib 1.1.21
---
> *   Generated by ACUITY 5.21.1_0702
> *   Match ovxlib 1.1.30
78c78
< const vsi_nn_preprocess_map_element_t * vnn_GetPrePorcessMap();
---
> const vsi_nn_preprocess_map_element_t * vnn_GetPreProcessMap();
80c80
< uint32_t vnn_GetPrePorcessMapCount();
---
> uint32_t vnn_GetPreProcessMapCount();
84a85
>
khadas@Khadas-teco:~$ diff hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3/include/vnn_post_process.h hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3A/include/vnn_post_process.h
2,3c2,3
< *   Generated by ACUITY 5.11.0
< *   Match ovxlib 1.1.21
---
> *   Generated by ACUITY 5.21.1_0702
> *   Match ovxlib 1.1.30
16c16
< const vsi_nn_postprocess_map_element_t * vnn_GetPostPorcessMap();
---
> const vsi_nn_postprocess_map_element_t * vnn_GetPostProcessMap();
18c18
< uint32_t vnn_GetPostPorcessMapCount();
---
> uint32_t vnn_GetPostProcessMapCount();
khadas@Khadas-teco:~$ diff hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3/include/vnn_yolov3.h hussain/aml_npu_appwog/aml_npu_app/detect_library/model_code/detect_yolo_v3A/include/vnn_yolov3.h
2,3c2,3
< *   Generated by ACUITY 6.0.12
< *   Match ovxlib 1.1.34
---
> *   Generated by ACUITY 5.21.1_0702
> *   Match ovxlib 1.1.30
20c20
< #define VNN_VERSION_PATCH 34
---
> #define VNN_VERSION_PATCH 30
khadas@Khadas-teco:~$

@enggsajjad I will check it tomorrow and provide you with a detailed set of steps.

I also tried with the recent SDK version; my scripts are as follows:
0_import_model_416.sh

#!/bin/bash

NAME=yolov3
ACUITY_PATH=../bin/
pegasus=${ACUITY_PATH}pegasus
if [ ! -e "$pegasus" ]; then
    pegasus=${ACUITY_PATH}pegasus.py
fi

#Darknet
$pegasus import darknet \
    --model  /home/sajjad/sajjad/models-zoo/darknet/yolov3/yolov3/yolov3.cfg \
    --weights  /home/sajjad/sajjad/yolov3.weights \
    --output-model ${NAME}.json \
    --output-data ${NAME}.data \

#generate inputmeta  --source-file dataset.txt
$pegasus generate inputmeta \
        --model ${NAME}.json \
        --input-meta-output ${NAME}_inputmeta.yml \
        --channel-mean-value "0 0 0 256"  \
        --source-file data/validation_tf_416.txt
#       --source-file dataset.txt

1_quantize_model_416.sh

#!/bin/bash

NAME=yolov3
ACUITY_PATH=../bin/

pegasus=${ACUITY_PATH}pegasus
if [ ! -e "$pegasus" ]; then
    pegasus=${ACUITY_PATH}pegasus.py
fi

$pegasus  quantize \
        --quantizer dynamic_fixed_point \
        --qtype int8 \
        --rebuild \
        --with-input-meta  ${NAME}_inputmeta.yml \
        --model  ${NAME}.json \
        --model-data  ${NAME}.data

2_export_case_code_416.sh

#!/bin/bash

NAME=yolov3
ACUITY_PATH=../bin/

pegasus=$ACUITY_PATH/pegasus
if [ ! -e "$pegasus" ]; then
    pegasus=$ACUITY_PATH/pegasus.py
fi

$pegasus export ovxlib \
    --model ${NAME}.json \
    --model-data ${NAME}.data \
    --model-quantize ${NAME}.quantize \
    --with-input-meta ${NAME}_inputmeta.yml \
    --dtype quantized \
    --optimize VIPNANOQI_PID0X88  \
    --viv-sdk ${ACUITY_PATH}vcmdtools \
    --pack-nbg-unify

rm -rf ${NAME}_nbg_unify

mv ../*_nbg_unify ${NAME}_nbg_unify

cd ${NAME}_nbg_unify

mv network_binary.nb ${NAME}_88.nb

cd ..

#save normal case demo export.data
mkdir -p ${NAME}_normal_case_demo
mv  *.h *.c .project .cproject *.vcxproj BUILD *.linux *.export.data ${NAME}_normal_case_demo

# delete normal_case demo source
#rm  *.h *.c .project .cproject *.vcxproj  BUILD *.linux *.export.data

#rm *.data *.quantize *.json *_inputmeta.yml
rm *.data *.json *_inputmeta.yml

validation_tf_416.txt

cat data/validation_tf_416.txt
./1080p-416x416.jpg
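As a side note, the calibration list above contains a single image; post-training quantization statistics are usually more reliable with a set of representative images. A sketch for building a longer list (the image folder name is hypothetical):

```shell
# Build a quantization-calibration list from a folder of representative
# images rather than a single frame (the folder path is hypothetical).
make_calib_list() {
    img_dir="$1"; out_list="$2"
    ls "$img_dir"/*.jpg > "$out_list"
    echo "$(wc -l < "$out_list") calibration images listed"
}
# Example:
#   make_calib_list data/calib_images data/validation_tf_416.txt
```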

When I used these scripts and generated the .so and .nb files, the demo ran but gave wrong results, as attached.
Steps:

detect_demo_picture$ sudo ./UNINSTALL
detect_demo_picture$ sudo ./INSTALL
detect_demo_picture$ ./detect_demo_x11 -m 2 -p ./1080p.bmp
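If the error persists after these steps, it may be worth confirming that the installed copies really are the freshly converted ones; a sketch (the paths follow the demo layout used in this thread):

```shell
# Verify each artifact exists and print its checksum, so the installed
# copy can be compared against the freshly converted one.
check_artifacts() {
    for f in "$@"; do
        if [ -f "$f" ]; then md5sum "$f"; else echo "missing: $f"; fi
    done
}
# Example:
#   check_artifacts nn_data/yolov3_88.nb lib/libnn_yolo_v3.so
```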

@Frank Thanks for the responses.

I installed a new Khadas OS on another SD card, cloned fresh aml_npu_app and aml_npu_binaries, and then copied the .so and .nb files. yolov3 still cannot run successfully. @Frank @numbqq @osos55