Which system do you use? Android, Ubuntu, OOWOW or others?
Ubuntu
Which version of system do you use? Please provide the version of the system here:
VIM4 with NPU kernel 5.15 241129
Please describe your issue below:
Running the model conversion on a PC fails. The system is Windows with WSL and Docker Desktop, running the "numbqq/npu-vim4" container.
When using vim4_npu_sdk and running "./convert-in-docker.sh ksnn" for a Python model, I cannot convert an ONNX model with multiple outputs.
The model converts fine if I specify only output0. I can share the ONNX file here as well.
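For reference, the output names can be confirmed directly from the ONNX file; a minimal check with the standard onnx Python package (a sketch, assuming the package is installed and the filename matches the one below):

```python
# Print the output tensor names of the ONNX model to confirm they
# match the names passed to the converter.
import onnx

model = onnx.load("modified_yolov8n-base-version.onnx")
print([o.name for o in model.graph.output])
# Expected for this model: ['output0', 'output1', 'output2']
```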
Post a console log of your issue below:
usage: convert [-h] [--model-name MODEL_NAME] [--print-level PRINT_LEVEL] [--model-type MODEL_TYPE] [--model MODEL] [--weights WEIGHTS] [--inputs INPUTS]
[--input-shapes INPUT_SHAPES] [--shape-with-batch SHAPE_WITH_BATCH] [--dtypes DTYPES] [--outputs OUTPUTS] [--batch-size BATCH_SIZE]
[--iterations ITERATIONS] [--outdir OUTDIR] [--quantize-dtype QUANTIZE_DTYPE] [--source-file SOURCE_FILE]
[--channel-mean-value CHANNEL_MEAN_VALUE] [--disable-per-channel DISABLE_PER_CHANNEL] [--inference-input-type INFERENCE_INPUT_TYPE]
[--inference-output-type INFERENCE_OUTPUT_TYPE] [--kboard KBOARD] [--quantize-algo QUANTIZE_ALGO] [--optimize-algo OPTIMIZE_ALGO]
[--kl-threshold-size KL_THRESHOLD_SIZE] [--divergence-nbins DIVERGENCE_NBINS] [--percentile PERCENTILE]
[--asymm-percentile ASYMM_PERCENTILE] [--moving-alpha MOVING_ALPHA] [--neg-scale NEG_SCALE] [--weights-quantize-algo WEIGHTS_QUANTIZE_ALGO]
[--weights-optimize-algo WEIGHTS_OPTIMIZE_ALGO] [--weights-threshold-size WEIGHTS_THRESHOLD_SIZE] [--weights-neg-scale WEIGHTS_NEG_SCALE]
[--export-template-code EXPORT_TEMPLATE_CODE] [--system-env SYSTEM_ENV] [--debugger-option DEBUGGER_OPTION]
convert: error: unrecognized arguments: output1 output2
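For context, this error matches what Python's argparse prints when a multi-token value reaches the parser as separate tokens instead of one quoted string. A minimal reproduction (the actual parser declaration inside convert is an assumption):

```python
# Hypothetical sketch of the failure mode: if --outputs is declared as a
# single-value option and the quotes are lost before parsing, the extra
# names become stray positional arguments and argparse rejects them.
import argparse

parser = argparse.ArgumentParser(prog="convert")
parser.add_argument("--outputs")

parser.parse_args(["--outputs", "output0", "output1", "output2"])
# -> convert: error: unrecognized arguments: output1 output2
```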
The following is the ksnn_args.txt file used:
--model-name modified_yolov8n-base-version
--model-type onnx
--model modified_yolov8n-base-version.onnx
--inputs "images"
--input-shapes "3,640,640"
--dtypes "float32"
--outputs "output0 output1 output2"
--quantize-dtype int8
--outdir onnx_output
--channel-mean-value "0,0,0,255"
--source-file dataset.txt
--iterations 1
--batch-size 1
--kboard VIM4
--inference-input-type "float32"
--inference-output-type "float32"
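One guess at the cause: if the wrapper script splits this file on plain whitespace rather than with shell-style quoting, the quotes around the three output names do not group them into a single value. A small illustration (hypothetical; how convert-in-docker.sh actually reads the file is an assumption):

```python
# Naive whitespace splitting keeps the quote characters and breaks the
# value into three tokens; shlex.split() honours the quoting.
import shlex

line = '--outputs "output0 output1 output2"'
print(line.split())       # ['--outputs', '"output0', 'output1', 'output2"']
print(shlex.split(line))  # ['--outputs', 'output0 output1 output2']
```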