Hello. I want to convert the YOLOv3 model. I find the .nb file size is different from the original YOLOv3 model's, even though the weight files are the same size (the class number is the same, 80) and I use the same .cfg file and the same scripts to quantize. Why is my converted model bigger, by about 2 MB?
Is there any issue with the model during inference?
Thanks for your fast reply.
Yes, I met some issues during model inference.
Actually, my custom dataset only has 2 classes. I trained the model with GitHub - pjreddie/darknet: Convolutional Neural Networks
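For context, when changing the class count in a darknet YOLOv3 .cfg, each of the three `[yolo]` layers and the `[convolutional]` layer directly before it have to be edited together: `classes` is the class count, and the preceding `filters` must be `(classes + 5) * 3`. A sketch for a 2-class model (only the relevant fields shown; everything else stays as in the stock yolov3.cfg):

```ini
# Last conv layer before each of the three [yolo] layers
[convolutional]
size=1
stride=1
pad=1
filters=21        # (classes + 5) * 3 = (2 + 5) * 3
activation=linear

[yolo]
classes=2         # custom dataset has 2 classes
```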
After training, I quantized the model and generated code successfully. I replaced the corresponding .h and .c files to build the .so file (my reference is Beginners Guide: darknet-yolov3 or yolov3-tiny Model Training), and I replaced the .nb. But when I run the detect_demo_x11_usb demo, I get an error.
The error log is
I also downloaded the original cfg and weights from the darknet official site, ran the scripts, and did the same process again; that works. I checked a lot and compared the files. Finally, I guessed that maybe the class number matters and is causing the issue.
So I changed the class number to 80, retrained the model, converted it, and ran inference again. It still doesn't work, and the error log is the same.
Oh, I found the reason. The .nb model size doesn't matter; the shared library was wrong.
Actually, I need to run "./UNINSTALL" first to remove the old shared libraries.
Then add the libnn_yolo_v3.so and libnn_detect.so paths to LD_LIBRARY_PATH.
After rebuilding the executable, it works now.
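For anyone hitting the same error, the fix above amounts to something like the following. This is a sketch: the library path and the final demo invocation are assumptions, so adjust them to wherever your rebuilt libnn_yolo_v3.so and libnn_detect.so actually live.

```shell
# 1. Remove the previously installed shared libraries so the demo
#    cannot pick up the stale 80-class versions.
./UNINSTALL

# 2. Point the dynamic loader at the rebuilt libraries
#    (/path/to/your/libs is a placeholder, not a real path).
export LD_LIBRARY_PATH=/path/to/your/libs:$LD_LIBRARY_PATH

# 3. Rebuild the executable and run the demo again.
make
./detect_demo_x11_usb
```

The key point is step 1: if the old .so files are still on the loader's search path, the demo keeps linking against the library built for the original class count, which is why swapping only the .nb file was not enough.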