Khadas VIM3 custom one-class YOLOv3 inference issue

Hi!

@numbqq, @Frank

I think I need your help.

I followed the docs on model conversion and everything worked well with the original YOLO .cfg and .weights (SDK quantizing => AML NPU App lib compiling => replacing the .so and .nb files in the prebuilt binaries demo => the yolov3 picture x11 demo works well).

But when I try to run my own optimized model (yolov3, 1 class, 416x416 input resolution), I get errors running the prebuilt demos:

khadas@Khadas: sudo ./INSTALL
khadas@Khadas: ./detect_demo_x11 -m 2 -p 1080p.bmp

W Detect_api:[det_set_log_level:19]Set log level=1
W Detect_api:[det_set_log_level:21]output_format not support Imperfect, default to DET_LOG_TERMINAL
W Detect_api:[det_set_log_level:26]Not exist VSI_NN_LOG_LEVEL, Setenv set_vsi_log_error_level
det_set_log_config Debug
E [model_create:64]CHECK STATUS(-1:A generic error code, used when no other describes the error.)
E Detect_api:[det_set_model:213]Model_create fail, file_path=nn_data, dev_type=1
det_set_model fail. ret=-4

So I have some questions:

  1. Previously you said that one-class inference isn’t possible - is that still true?
  2. Any thoughts on what should be changed in the code to make this inference possible (changing the inference engine, not my model)?

I can’t attach yolov3_process.c, so I’ll write my changes here:

...
[line 48] static char *coco_names[] = {"person"};
...
[line 234] int num_class = 1;
...
[line 297] int size[3]={nn_width/32, nn_height/32,6*3};
...
[line 303] int num_class = 1;
...
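For context, here is my understanding of why these edits hang together (a minimal sketch for illustration, not code taken from yolov3_process.c): each YOLOv3 head predicts, per anchor, 4 box coordinates + 1 objectness score + num_class class scores, with 3 anchors per grid cell, so the channel depth is (num_class + 5) * 3.

#include <stdio.h>

int main(void)
{
    const int nn_width = 416, nn_height = 416;
    const int num_class = 1;                    /* matches [line 234] and [line 303] */
    const int depth = (num_class + 5) * 3;      /* = 18, i.e. the 6*3 on [line 297] */

    int size[3] = { nn_width / 32, nn_height / 32, depth };
    printf("coarsest head: %dx%dx%d\n", size[0], size[1], size[2]);  /* 13x13x18 */

    /* For the stock 80-class COCO model the same formula gives (80 + 5) * 3 = 255. */
    return 0;
}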

Additional info:

  • I don’t run the SDK on the VIM3 itself; I run it in a Docker container on the host machine (as described in the SDK Readme)
  • I build the library in AML_NPU_APP on the VIM3 with the files generated by the SDK
  • My VIM3 image is EMMC-based VIM3_Ubuntu-gnome-focal_Linux-4.9_arm64_EMMC_V1.0.7-210625.img.xz
  • I use the latest SDK and APP versions (afc875ec and 20fd9af9 respectively)
  • While quantizing the model I use 1920x1080 images - I don’t know if that is OK (I get no errors, at least)

Thanks,
Maxim

I get the same errors running the inference demo. Is there any progress so far?

@lyuzinmaxim The retrained single-object detection yolo seems to be problematic, but the usual problem is that objects cannot be detected, not a model-creation error like yours.

After you replaced your code, did you re-execute sudo ./INSTALL?

I executed sudo ./INSTALL and got a new error:
det_set_log_config Debug
E Detect_api:[check_and_set_function:165]dlopen libnn_yolo_v3.so failed!
E Detect_api:[det_set_model:206]ModelType so open failed or Not support now!!
det_set_model fail. ret=-1
Excuse me, is there a problem with compiling libnn_yolo_v3.so? I did not generate libnn_yolo_v3.so through local compilation.

@qixiji You can try to compile the yolov3 library on the VIM3 and then test again.
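If you want to see the loader’s exact reason for the failure, a quick standalone check (just a sketch, not part of the demo sources; adjust the name/path to wherever sudo ./INSTALL placed the library) is to attempt the same dlopen yourself and print dlerror(); build it with gcc check_dlopen.c -o check_dlopen -ldl:

#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    void *handle = dlopen("libnn_yolo_v3.so", RTLD_NOW);
    if (!handle) {
        /* Typical causes: the .so was never installed, it was built for the
         * wrong architecture, or one of its own dependencies is missing. */
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    printf("libnn_yolo_v3.so loaded fine\n");
    dlclose(handle);
    return 0;
}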

Thanks for the quick response!

Yes, of course I ran sudo ./INSTALL - the other demos work fine, but not the modified yolov3.

The retrained single-object detection yolo seems to be problematic.

Is the error I described above normal for a “no predictions” result?

So does that mean that important parameters were lost during the quantization and optimization (fine-tuning) procedures? Could I then use more “gentle” quantization methods like int16 (or fp32, if it is supported) to protect against information loss during forward propagation?
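To illustrate what I mean by “gentle”, here is a rough sketch (generic symmetric linear quantization, not the SDK’s actual quantizer) of the round-trip error at 8 vs 16 bits - the wider type keeps far more of the original value; build with gcc quant_error.c -o quant_error -lm:

#include <stdio.h>
#include <math.h>

static double roundtrip(double x, double max_abs, int bits)
{
    double scale = max_abs / ((1 << (bits - 1)) - 1);  /* 127 or 32767 levels per side */
    long   q     = lround(x / scale);                  /* quantize */
    return q * scale;                                  /* dequantize */
}

int main(void)
{
    const double max_abs = 6.0;    /* assumed dynamic range of a tensor */
    const double x = 0.1234;       /* sample activation value */

    printf("int8  round-trip error: %.6f\n", fabs(x - roundtrip(x, max_abs, 8)));   /* step ~0.047 */
    printf("int16 round-trip error: %.8f\n", fabs(x - roundtrip(x, max_abs, 16)));  /* step ~0.00018 */
    return 0;
}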

Thanks,
Maxim

@lyuzinmaxim This is a different problem from the one you asked about. You reported an error when creating the model; that is generally because your library does not match your nb file.

@Frank @numbqq
I don’t get it, sorry

I compile the library file (*.so) from the headers and source code generated by the SDK - so they do (I think) contain the hyperparameters of my model (at least the input size, number of layers, etc.) - and then I use the *.nb file (generated by the SDK for my model) together with that *.so file to run the picture demo.

I get the error (in my 1st message here) when running inference with my model, not when creating it.

And you said:

The problem is mainly that the object cannot be detected, not your creation error.

After that I thought that changing the quantization type might help.

So could I use more “gentle” quantization methods like int16 (or fp32, if it is supported)?

What do you think - is it even possible to run a one-class YOLO model?
I think “yes”, because the number of classes only affects the number of filters in the conv layers right before the YOLO heads ((num_classes + 5) * 3) - it doesn’t affect any pre/post-processing methods - only the depth of the feature maps is different.

Thanks,
Maxim

@Frank @numbqq

Could you please answer my question above? It’s very important to me, because I’m choosing a board for deployment, and the delay is critical.

Best regards,
Maxim

Have you tried converting your own multi-category model? For example, 10 categories?

I have seen users who retrained yolo on their own sub-categories, but their symptom was that nothing could be detected, rather than a runtime error. I think the runtime error is caused by a mismatch between the library and the nb file.

Hi!

I reinstalled aml_npu_app, rebuilt the Docker image, and did all the steps from the documentation again - one-class yolov3/yolov4 now runs perfectly.

I think I made a couple of mistakes during the earlier attempt, but I’m not sure what exactly was wrong.

I guess it was an inconsistency between the *.so library and the *.nb file, because, as @Frank said, a no-object situation doesn’t cause errors; for example, an image with no objects as input gives:

W Detect_api:[det_set_log_level:19]Set log level=1
W Detect_api:[det_set_log_level:21]output_format not support Imperfect, default to DET_LOG_TERMINAL
W Detect_api:[det_set_log_level:26]Not exist VSI_NN_LOG_LEVEL, Setenv set_vsi_log_error_level
det_set_log_config Debug
det_set_model success!!

model.width:416
model.height:416
model.channel:3

Det_set_input START
Det_set_input END
Det_get_result START
Det_get_result END

resultData.detect_num=0
result type is 0

Thanks for your responses!

Best regards,
Maxim

@lyuzinmaxim The single-class yolo behaves like this - the result is that all pictures come back empty.

This is a bug in my code: when no objects are detected, nothing gets printed.
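A sketch of what the fix looks like (the struct name and any fields other than detect_num are placeholders, not the demo’s real result layout): report an empty result explicitly instead of staying silent.

#include <stdio.h>

typedef struct {
    int detect_num;     /* matches the resultData.detect_num printed in the log above */
    /* ... per-box coordinates, class ids and scores would live here ... */
} demo_result_t;        /* hypothetical name */

static void print_result(const demo_result_t *res)
{
    if (res->detect_num == 0) {
        /* Previously this branch printed nothing, which looked like a failure. */
        printf("no objects detected\n");
        return;
    }
    for (int i = 0; i < res->detect_num; i++) {
        printf("object %d: ...\n", i);  /* print/draw each box here */
    }
}

int main(void)
{
    demo_result_t empty = { .detect_num = 0 };
    print_result(&empty);               /* prints "no objects detected" */
    return 0;
}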