Hi, I trained a custom model with YOLOv8n and followed all the instructions provided in the documentation. I used the recommended versions: `torch==1.10.1` and `ultralytics==8.0.86`. I also modified the `ultralytics/ultralytics/nn/modules.py` file as mentioned in the guide, and exported the model to ONNX format as instructed.
Then, I converted the model to `.nb` format using the `./convert` tool. Here's an example of the command I used:
```shell
./convert --model-name pan \
          --platform onnx \
          --model best.onnx \
          --mean-values '0 0 0 0.00392156' \
          --quantized-dtype asymmetric_affine \
          --source-files ./data/dataset1/dataset1.txt \
          --batch-size 1 \
          --iterations 375 \
          --kboard VIM3 \
          --print-level 1
```
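To double-check my understanding of the `--mean-values` argument: as far as I can tell, the first three numbers are per-channel means and the last one is the scale factor, so the input the NPU sees is `(pixel - mean) * scale`. With means of 0 and a scale of 1/255 that should match the `pixel / 255` normalization YOLOv8 uses in training. Here is the quick sanity check I ran (the names `MEANS`, `SCALE`, and `normalize` are mine, just for illustration):

```python
# Sanity check for --mean-values '0 0 0 0.00392156':
# three per-channel means of 0, then a scale factor that should be 1/255,
# so the NPU input is pixel / 255.0 -- the same normalization YOLOv8 uses.
# (My own interpretation of the convert tool's convention, not from its docs.)

MEANS = (0.0, 0.0, 0.0)   # per-channel means from --mean-values
SCALE = 0.00392156        # scale factor from --mean-values

def normalize(pixel, channel=0):
    """Apply the (pixel - mean) * scale transform I believe the NPU uses."""
    return (pixel - MEANS[channel]) * SCALE

print(round(1 / 255, 8))   # very close to the scale value above
print(normalize(255))      # a white pixel should land near the top of [0, 1]
```

If the exported ONNX model already contains the `/255` normalization inside the graph, passing this scale to `./convert` would normalize twice, which could explain a large accuracy drop.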
I created a dataset with more than 300 images and tested both int8 and int16 quantization.
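To convince myself that quantization itself could be the problem, I also ran a toy round trip through asymmetric affine quantization. This is a simplified sketch of the general technique, not the convert tool's actual algorithm: a single outlier in a layer's activation range stretches the int8 scale and costs precision everywhere else, while int16 has far more levels to spend.

```python
# Toy round trip through asymmetric affine quantization (the general scheme
# behind --quantized-dtype asymmetric_affine; NOT the convert tool's exact
# algorithm). Values map to integers via a scale and zero point; int8 has
# only 256 levels, so one outlier in the range hurts precision for the rest.

def quantize_dequantize(values, num_bits=8):
    """Quantize values to num_bits with an asymmetric affine scheme, then restore them."""
    lo, hi = min(values), max(values)
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels or 1.0          # avoid zero scale for constant input
    zero_point = round(-lo / scale)
    q = [max(0, min(levels, round(v / scale) + zero_point)) for v in values]
    return [(qi - zero_point) * scale for qi in q]

# Mostly small activations plus one large outlier, which stretches the range.
activations = [0.01 * i for i in range(100)] + [12.0]
for bits in (8, 16):
    restored = quantize_dequantize(activations, bits)
    err = max(abs(a - b) for a, b in zip(activations, restored))
    print(f"int{bits}: max round-trip error = {err:.6f}")
```

In my tests int16 did not fix the accuracy either, which makes me suspect the problem is upstream of quantization (preprocessing or the export itself).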
When I run the model using the `yolov8n-cap.py` example (modified to use 6 classes, as in my custom model), it loads without any issues. However, the detection accuracy is significantly worse compared to the original `best.pt` model.
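To put a number on "significantly worse", I compared boxes from the two models with a small IoU helper. The helper is my own; the box values below are made up for illustration:

```python
# Quantify the accuracy drop: match a box predicted by best.pt against the
# closest box from the .nb model and report their intersection-over-union.
# Boxes are (x1, y1, x2, y2); the sample coordinates below are invented.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pt_box = (100, 100, 200, 200)   # hypothetical box from best.pt
nb_box = (110, 105, 205, 210)   # hypothetical box from the .nb model
print(f"IoU: {iou(pt_box, nb_box):.3f}")
```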
Here is a screenshot of the ONNX model viewed in netron.app:
What could I be missing?