I have a trained yolov3 (darknet) model that I am trying to convert with the NPU SDK, using both the python convert script and the C demo tool. In the past, I was able to convert a resnet18 model by exporting a pretrained resnet18 PyTorch model to onnx and then converting the resulting onnx model to a network binary (.nb) file with the convert script, as suggested in other posts on the khadas forum; that model deployed successfully on khadas. To convert the yolov3 model, I now follow two paths:
- In the first method, I use the convert script with --platform darknet, converting the darknet yolov3 model directly with the command:
./convert --model-name yolov3 --platform darknet --model dota-yolov3-416.cfg --weights dota-yolov3-416_final.weights --inputs 000_net --input-size-list '3,1024,1024' --mean-values '0,0,0,256' --quantized-dtype asymmetric_affine --kboard VIM3 --print-level 1
The verbose output of the script can be found in this text file.
All steps of the script execute successfully except the library-generation step, as indicated by the script ending with the following message (please refer to the entire verbose output in the link above):
Start export model ...
Done.Export model success !!!
Start generate library...
The expected result of this process is a .nb file in the ‘outputs/model_name’ directory. Instead, I get a text file titled ‘pathfile’ in the outputs directory containing the path ‘/tmp/MEIvdUVs’, which doesn’t exist.
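To confirm that the path in the stray ‘pathfile’ really is dangling, I check it with a small stdlib-only helper (check_pathfile is my own diagnostic helper, not part of the SDK; ‘outputs/pathfile’ is the location from my run):

```python
import os

def check_pathfile(pathfile):
    """Read the stray 'pathfile' left behind by the convert script and
    report the path it contains plus whether that directory exists.
    Returns None if the pathfile itself is missing."""
    if not os.path.isfile(pathfile):
        return None
    with open(pathfile) as f:
        target = f.read().strip()
    return target, os.path.isdir(target)

# e.g. check_pathfile("outputs/pathfile")
```

In my case this reports the /tmp/MEIvdUVs path with the directory missing.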
- In the second method, I take the following route: yolov3 → onnx → .nb file. I first convert the darknet yolov3 model to onnx using this python script, then convert the resulting onnx model with the following command:
./convert --model-name yolov3 --platform onnx --model yolov3-1024,1024.onnx --inputs 000_net --input-size-list '3,1024,1024' --mean-values '103.94,116.78,123.68,58.82' --quantized-dtype asymmetric_affine --kboard VIM3 --print-level 1
The verbose output of the script can be found in this text file.
I was expecting this method to work, since the same route had worked for the resnet18 conversion mentioned above. However, I get the same issue as in method 1: no files are generated in the ‘outputs/model_name’ directory, and I instead get a pathfile in the outputs directory.
Below are links to the yolov3 config file, the network weights, and the converted onnx model:
cfg-file
weights
converted onnx model
Following is a list of relevant package versions I am using:
python==3.7
onnx==1.6.0
Can I get some insight into why the model conversion script is not working as expected?