Khadas VIM3 custom one-class YOLOv3 inference issue

Hi!

@numbqq, @Frank

I think I need your help.

I followed the docs about model converting and it went well with the original YOLO .cfg and .weights (SDK quantizing => compiling the AML NPU App library => swapping the .so and .nb files in the prebuilt binaries demo => the yolov3 picture x11 demo works well).

But when I try to run my own optimized model (yolov3, 1 class, 416x416 input resolution), I get errors running the prebuilt demos:

khadas@Khadas: sudo ./INSTALL
khadas@Khadas: ./detect_demo_x11 -m 2 -p 1080p.bmp

W Detect_api:[det_set_log_level:19]Set log level=1
W Detect_api:[det_set_log_level:21]output_format not support Imperfect, default to DET_LOG_TERMINAL
W Detect_api:[det_set_log_level:26]Not exist VSI_NN_LOG_LEVEL, Setenv set_vsi_log_error_level
det_set_log_config Debug
E [model_create:64]CHECK STATUS(-1:A generic error code, used when no other describes the error.)
E Detect_api:[det_set_model:213]Model_create fail, file_path=nn_data, dev_type=1
det_set_model fail. ret=-4

So I have some questions:

  1. Previously you said that one-class inference isn’t possible - is that still true?
  2. Any thoughts on what should be changed in the code to make that inference possible (changing the inference engine, not my model)?

I can’t attach yolov3_process.c, so I’ll write my changes here:

...
[line 48] static char *coco_names[] = {"person"};
...
[line 234] int num_class = 1;
...
[line 297] int size[3]={nn_width/32, nn_height/32,6*3};
...
[line 303] int num_class = 1;
...

Additional info:

  • I don’t run the SDK on the VIM3; I run a docker container (as described in the SDK Readme) on the host machine
  • I build the library in AML_NPU_APP on the VIM3 with the files generated by the SDK
  • My VIM3 image is EMMC-based VIM3_Ubuntu-gnome-focal_Linux-4.9_arm64_EMMC_V1.0.7-210625.img.xz
  • I use the latest SDK and APP versions (afc875ec and 20fd9af9 respectively)
  • While quantizing the model I use 1920x1080 images - I don’t know if that is OK (I get no errors, at least)

Thanks,
Maxim


I get the same errors running the inference demo. Is there any progress so far?

@lyuzinmaxim The retrained single-object detection yolo seems to be problematic. The problem is usually that no object can be detected, though, not a model-creation error like yours.

After you replace your code, did you re-execute sudo ./INSTALL?

I executed sudo ./INSTALL and got a new error:
det_set_log_config Debug
E Detect_api:[check_and_set_function:165]dlopen libnn_yolo_v3.so failed!
E Detect_api:[det_set_model:206]ModelType so open failed or Not support now!!
det_set_model fail. ret=-1
Excuse me, is there a problem with compiling libnn_yolo_v3.so? I did not generate libnn_yolo_v3.so through local compilation.

@qixiji You can try to compile the yolov3 library on the VIM3 and then test again.

Thanks for quick response!

Yes, sure, I did ./INSTALL - the other demos work fine, but not the changed yolov3.

The retrained single-object detection yolo seems to be problematic.

Is the error I described above normal for a “no predictions” result?

So does that mean that important parameters were lost during the quantization and optimization (fine-tuning) procedures? Could I then use a “gentler” quantization method like int16 (or fp32, if it is supported) to protect against information loss during forward propagation?

Thanks,
Maxim

@lyuzinmaxim This is a different problem from your question. You reported an error when creating the model. This is generally because your library does not match your nb file.

@Frank @numbqq
I don’t get it, sorry

I compile the library file (*.so) from the headers and source code generated by the SDK - so they should contain (I think) the hyperparameters of my model (at least input size, number of layers, etc.), and then I use the *.nb file (generated by the SDK for my model) together with that *.so file to run the picture demo.

I get the error (my 1st message here) when running inference with my model, not when creating it.

And You said:

The problem is mainly that the object cannot be detected, not your creation error.

After that I thought that changing the quantization type might help:

So can I use a “gentler” quantization method like int16 (or fp32, if it is supported)?

What do you think, is it even possible to run a one-class YOLO model?
I think “yes”, because the number of classes only affects the depth of the conv layers before the YOLO head - it doesn’t affect any pre/post-processing methods; just the depth of the feature map is different.

Thanks,
Maxim

@Frank @numbqq

Could you please answer my question above - it’s very important to me, because I’m choosing a board for deployment, and the delay is critical.

Best regards,
Maxim

Have you tried to convert your own multi-category model? Like 10 categories?

I have met users who use yolo to identify sub-categories. But their symptom is that no results are recognized, rather than a runtime error. I think your runtime error is caused by a mismatch between the library and the nb file.

Hi!

I reinstalled aml_npu_app, rebuilt the docker image, and did all the steps from the documentation again - one-class yolov3/yolov4 now runs perfectly.

I think I made a couple of mistakes during the previous attempt, but I don’t know what exactly was wrong.

I guess it was an inconsistency between the *.so library and the *.nb file, because, as @Frank said, a no-object situation doesn’t cause errors; for example, an image with no objects as input gives:

W Detect_api:[det_set_log_level:19]Set log level=1
W Detect_api:[det_set_log_level:21]output_format not support Imperfect, default to DET_LOG_TERMINAL
W Detect_api:[det_set_log_level:26]Not exist VSI_NN_LOG_LEVEL, Setenv set_vsi_log_error_level
det_set_log_config Debug
det_set_model success!!

model.width:416
model.height:416
model.channel:3

Det_set_input START
Det_set_input END
Det_get_result START
Det_get_result END

resultData.detect_num=0
result type is 0

Thanks for your responses!

Best regards,
Maxim

@lyuzinmaxim The single-class yolo is like this, and the result is that all pictures come back empty.

This is a bug in my code: when nothing is detected in a picture, nothing is printed.

@Frank
Hi,
I am using the Tengine SDK for yolov3 object detection and I need to make a slight modification to the code: I want to hard-code the .tmfile path instead of taking it as an argument. Hence, I have done the following.

char* model_file = nullptr;
model_file = "home/learningtime/workspace/projects/obj-det/model/yolov3_u8.tmfile";

However, in this case I am getting an error: the string cannot be converted into char*.

After finding some solutions on Stack Overflow, I tried the method below:
std::string str="/home/learningtime/workspace/projects/obj-det/model/yolov3_u8.tmfile";
const char* model_file = str.c_str();

I could build and compile the code without any errors, but when I try to run the generated executable, I get the error “Input file does not exist”.
When I traced the error back, I found that the file could not be opened in the function below:
int check_file_exist(const char* file_name)
{
    FILE* fp = fopen(file_name, "r");
    if (!fp)
    {
        fprintf(stderr, "Input file does not exist: %s\n", file_name);
        return 0;
    }
    fclose(fp);
    return 1;
}


Can you please tell me how I can load the model file in the main() function itself, instead of via arguments?

Do you have any solutions? Thanks in advance.

@Akkisony

const char * model_file = "/home/learningtime/workspace/projects/obj-det/model/yolov3_u8.tmfile";

@Frank Thank you, Frank. 🙂
Can you please let me know how I can measure NPU parameters while running the model - for example, the power consumption of the NPU during inference?

@Akkisony The NPU is integrated into the CPU, so there is no way to measure its power consumption separately.

@Frank Thanks again!
Can you please suggest what other parameters I can measure in order to compare the NPU with the Coral TPU?
Currently I have inference time and accuracy. I thought power consumption was also an important parameter.
Can you suggest any other parameters for comparison?

@Akkisony I am not familiar with this part of the hardware, nor do I have a good way to measure power consumption. Maybe you can measure the power consumption of an idle VIM3, then measure it while running the NPU application to see how much running the NPU adds. Then use the same method and software to measure the Coral.

@Frank Thanks for your input.

@Frank I just need few clarification.

  1. Which image format does yolov3 take as input - RGB or BGR?
  2. When we use the SDK to quantize the model, which quantization technique is applied?

Thanks for your time.