Error when converting a YOLOv3 model for the VIM3.

My self-trained YOLOv3 model loads fine on the PC in darknet, NVIDIA DeepStream, and the OpenCV 4.4 DNN module.

Ubuntu 16.04

The error occurs at this step; no .nb file is generated:

bash 0_import_model.sh && bash 1_quantize_model.sh  && bash 2_export_case_code.sh

#!/bin/bash

NAME=yolov3
ACUITY_PATH=../bin/

export_ovxlib=${ACUITY_PATH}ovxgenerator

$export_ovxlib \
    --model-input ${NAME}.json \
    --data-input ${NAME}.data \
    --model-quantize ${NAME}.quantize \
    --reorder-channel '2 1 0' \
    --channel-mean-value '0 0 0 256' \
    --export-dtype quantized \
    --optimize VIPNANOQI_PID0X88  \
    --viv-sdk ${ACUITY_PATH}vcmdtools \
    --pack-nbg-unify
D Quantize @convolution_99_232:weight to dynamic_fixed_point.
D Packing convolution_9_24 ...
D Quantize @convolution_9_24:bias to dynamic_fixed_point.
D Quantize @convolution_9_24:weight to dynamic_fixed_point.
I Saving data to yolov3.export.data
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_yolov3.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_yolov3.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_post_process.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_post_process.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_pre_process.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_pre_process.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_global.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/main.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/BUILD
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/yolov3.vcxproj
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/makefile.linux
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/.cproject
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/.project
D Generate fake input /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/input_0_0.tensor
mv: cannot stat '/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/*.nb': No such file or directory
mv: cannot stat '/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/*.dat': No such file or directory
I Dump nbg input meta to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/nbg_meta.json
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_yolov3.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_yolov3.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_post_process.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_post_process.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_pre_process.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_pre_process.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_global.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/main.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/BUILD
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/yolov3.vcxproj
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/makefile.linux
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/.cproject
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/.project
mv: cannot stat '/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/network_binary.nb': No such file or directory
/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify
customer:input,0,0:output,0,1:output,1,2:output,2,3:
*********************************
/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo
/
/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify

@Tony-Wang

  1. Please post the contents of all three scripts.
  2. Please post the logs from running all three scripts.
  3. The log you provided does not show any error. Please use the ls command to check which files are in the generated folders (see the example below).
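For example, something along these lines (paths taken from your log):

    # List the generated folders and look for any .nb file
    ls -la ~/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo
    ls -la ~/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify
    find ~/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1 -name '*.nb'
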
$convert_darknet \
    --net-input ~/MyProject/chaofeng-iot-admin-backend/dist/yolov3.cfg \
    --weight-input ~/MyProject/chaofeng.backup \
    --net-output ${NAME}.json \
    --data-output ${NAME}.data

$tensorzone \
    --action quantization \
    --dtype float32 \
    --source text \
    --source-file data/validation_tf.txt \
    --channel-mean-value '0 0 0 256' \
    --reorder-channel '0 1 2' \
    --model-input ${NAME}.json \
    --model-data ${NAME}.data \
    --quantized-dtype dynamic_fixed_point-i8 \
    --quantized-rebuild \

$export_ovxlib \
    --model-input ${NAME}.json \
    --data-input ${NAME}.data \
    --model-quantize ${NAME}.quantize \
    --reorder-channel '2 1 0' \
    --channel-mean-value '0 0 0 256' \
    --export-dtype quantized \
    --optimize VIPNANOQI_PID0X88  \
    --viv-sdk ${ACUITY_PATH}vcmdtools \
    --pack-nbg-unify

bash 0_import_model.sh && bash 1_quantize_model.sh run normally.

The problem appears when running 2_export_case_code.sh.

D Process convolution_102_241 ...
D Acuity output shape(convolution): (1 52 52 256)
D Tensor @convolution_102_241:out0 type: dynamic_fixed_point
D Process leakyrelu_102_243 ...
D Acuity output shape(leakyrelu): (1 52 52 256)
D Tensor @leakyrelu_102_243:out0 type: dynamic_fixed_point
D Process convolution_103_244 ...
D Acuity output shape(convolution): (1 52 52 128)
D Tensor @convolution_103_244:out0 type: dynamic_fixed_point
D Process leakyrelu_103_246 ...
D Acuity output shape(leakyrelu): (1 52 52 128)
D Tensor @leakyrelu_103_246:out0 type: dynamic_fixed_point
D Process convolution_104_247 ...
D Acuity output shape(convolution): (1 52 52 256)
D Tensor @convolution_104_247:out0 type: dynamic_fixed_point
D Process leakyrelu_104_249 ...
D Acuity output shape(leakyrelu): (1 52 52 256)
D Tensor @leakyrelu_104_249:out0 type: dynamic_fixed_point
D Process convolution_105_250 ...
D Acuity output shape(convolution): (1 52 52 33)
D Tensor @convolution_105_250:out0 type: dynamic_fixed_point
D Process output_106_251_acuity_mark_perm_255 ...
D Acuity output shape(permute): (1 33 52 52)
D Tensor @output_106_251_acuity_mark_perm_255:out0 type: dynamic_fixed_point
D Process output_106_251 ...
D Acuity output shape(output): (1 33 52 52)
D Tensor @output_106_251:out0 type: dynamic_fixed_point
I Build yolov3 complete.
I Initialzing network optimizer by /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/../bin/VIPNANOQI_PID0X88 ...
D Optimizing network with merge_ximum, qnt_adjust_coef, multiply_transform, add_extra_io, format_input_ops, auto_fill_zero_bias, conv_kernel_transform, strip_op, extend_unstack_split, merge_layer, transform_layer, broadcast_op, strip_op, auto_fill_reshape_zero, adjust_output_attrs, insert_dtype_converter
I Start T2C Switcher...
D Optimizing network with broadcast_op, t2c_fc
D convert concat_86_205(concat) axis 3 to 1
D convert concat_98_231(concat) axis 3 to 1
D insert permute output_82_199_acuity_mark_perm_253_acuity_mark_perm_2 before output_82_199_acuity_mark_perm_253
D insert permute output_94_225_acuity_mark_perm_254_acuity_mark_perm_5 before output_94_225_acuity_mark_perm_254
D insert permute output_106_251_acuity_mark_perm_255_acuity_mark_perm_8 before output_106_251_acuity_mark_perm_255
D insert permute convolution_0_1_acuity_mark_perm_11 before convolution_0_1
D remove permute convolution_0_1_acuity_mark_perm_252
D remove permute convolution_0_1_acuity_mark_perm_11
D remove permute output_82_199_acuity_mark_perm_253_acuity_mark_perm_2
D remove permute output_82_199_acuity_mark_perm_253
D remove permute output_94_225_acuity_mark_perm_254_acuity_mark_perm_5
D remove permute output_94_225_acuity_mark_perm_254
D remove permute output_106_251_acuity_mark_perm_255_acuity_mark_perm_8
D remove permute output_106_251_acuity_mark_perm_255
I End T2C Switcher...
D Process input_0 ...
D Acuity output shape(input): (1 3 412 412)
D Tensor @input_0:out0 type: dynamic_fixed_point
D Process convolution_0_1 ...
D Acuity output shape(convolution): (1 32 412 412)
D Tensor @convolution_0_1:out0 type: dynamic_fixed_point
D Process leakyrelu_0_3 ...
......
D Tensor @leakyrelu_104_249:out0 type: dynamic_fixed_point
D Process convolution_105_250 ...
D Acuity output shape(convolution): (1 33 52 52)
D Tensor @convolution_105_250:out0 type: dynamic_fixed_point
D Process output_106_251 ...
D Acuity output shape(output): (1 33 52 52)
D Tensor @output_106_251:out0 type: dynamic_fixed_point
I Build yolov3 complete.
D Optimizing network with conv_1xn_transform, proposal_opt, c2drv_convert_axis, c2drv_convert_shape, c2drv_convert_array, c2drv_cast_dtype, c2drv_trans_data
I Building data ...
I Packing data ...
D Packing convolution_0_1 ...
D Quantize @convolution_0_1:bias to dynamic_fixed_point.
D Quantize @convolution_0_1:weight to dynamic_fixed_point.
D Packing convolution_100_235 ...
D Quantize @convolution_100_235:bias to dynamic_fixed_point.
D Quantize @convolution_100_235:weight to dynamic_fixed_point.
D Packing convolution_101_238 ...
D Quantize @convolution_101_238:bias to dynamic_fixed_point.
D Quantize @convolution_101_238:weight to dynamic_fixed_point.
D Packing convolution_102_241 ...
D Quantize @convolution_102_241:bias to dynamic_fixed_point.
D Quantize @convolution_102_241:weight to dynamic_fixed_point.
D Packing convolution_103_244 ...
D Quantize @convolution_103_244:bias to dynamic_fixed_point.
D Quantize @convolution_103_244:weight to dynamic_fixed_point.
D Packing convolution_104_247 ...
D Quantize @convolution_104_247:bias to dynamic_fixed_point.
D Quantize @convolution_104_247:weight to dynamic_fixed_point.
D Packing convolution_105_250 ...
D Quantize @convolution_105_250:bias to dynamic_fixed_point.
D Quantize @convolution_105_250:weight to dynamic_fixed_point.
D Packing convolution_10_27 ...
D Quantize @convolution_10_27:bias to dynamic_fixed_point.
D Quantize @convolution_10_27:weight to dynamic_fixed_point.
D Packing convolution_12_31 ...
D Quantize @convolution_12_31:bias to dynamic_fixed_point.
D Quantize @convolution_12_31:weight to dynamic_fixed_point.
D Packing convolution_13_34 ...
D Quantize @convolution_13_34:bias to dynamic_fixed_point.
D Quantize @convolution_13_34:weight to dynamic_fixed_point.
D Packing convolution_14_37 ...
D Quantize @convolution_14_37:bias to dynamic_fixed_point.
D Quantize @convolution_14_37:weight to dynamic_fixed_point.
D Packing convolution_16_41 ...
D Quantize @convolution_16_41:bias to dynamic_fixed_point.
D Quantize @convolution_16_41:weight to dynamic_fixed_point.
D Packing convolution_17_44 ...
D Quantize @convolution_17_44:bias to dynamic_fixed_point.
D Quantize @convolution_17_44:weight to dynamic_fixed_point.
D Packing convolution_19_48 ...
D Quantize @convolution_19_48:bias to dynamic_fixed_point.
D Quantize @convolution_19_48:weight to dynamic_fixed_point.
D Packing convolution_1_4 ...
D Quantize @convolution_1_4:bias to dynamic_fixed_point.
D Quantize @convolution_1_4:weight to dynamic_fixed_point.
D Packing convolution_20_51 ...
D Quantize @convolution_20_51:bias to dynamic_fixed_point.
D Quantize @convolution_20_51:weight to dynamic_fixed_point.
D Packing convolution_22_55 ...
D Quantize @convolution_22_55:bias to dynamic_fixed_point.
D Quantize @convolution_22_55:weight to dynamic_fixed_point.
D Packing convolution_23_58 ...
D Quantize @convolution_23_58:bias to dynamic_fixed_point.
D Quantize @convolution_23_58:weight to dynamic_fixed_point.
D Packing convolution_25_62 ...
D Quantize @convolution_25_62:bias to dynamic_fixed_point.
D Quantize @convolution_25_62:weight to dynamic_fixed_point.
D Packing convolution_26_65 ...
D Quantize @convolution_26_65:bias to dynamic_fixed_point.
D Quantize @convolution_26_65:weight to dynamic_fixed_point.
D Packing convolution_28_69 ...
D Quantize @convolution_28_69:bias to dynamic_fixed_point.
D Quantize @convolution_28_69:weight to dynamic_fixed_point.
D Packing convolution_29_72 ...
D Quantize @convolution_29_72:bias to dynamic_fixed_point.
D Quantize @convolution_29_72:weight to dynamic_fixed_point.
D Packing convolution_2_7 ...
D Quantize @convolution_2_7:bias to dynamic_fixed_point.
D Quantize @convolution_2_7:weight to dynamic_fixed_point.
D Packing convolution_31_76 ...
D Quantize @convolution_31_76:bias to dynamic_fixed_point.
D Quantize @convolution_31_76:weight to dynamic_fixed_point.
D Packing convolution_32_79 ...
D Quantize @convolution_32_79:bias to dynamic_fixed_point.
D Quantize @convolution_32_79:weight to dynamic_fixed_point.
D Packing convolution_34_83 ...
D Quantize @convolution_34_83:bias to dynamic_fixed_point.
D Quantize @convolution_34_83:weight to dynamic_fixed_point.
D Packing convolution_35_86 ...
D Quantize @convolution_35_86:bias to dynamic_fixed_point.
D Quantize @convolution_35_86:weight to dynamic_fixed_point.
D Packing convolution_37_90 ...
D Quantize @convolution_37_90:bias to dynamic_fixed_point.
D Quantize @convolution_37_90:weight to dynamic_fixed_point.
D Packing convolution_38_93 ...
D Quantize @convolution_38_93:bias to dynamic_fixed_point.
D Quantize @convolution_38_93:weight to dynamic_fixed_point.
D Packing convolution_39_96 ...
D Quantize @convolution_39_96:bias to dynamic_fixed_point.
D Quantize @convolution_39_96:weight to dynamic_fixed_point.
D Packing convolution_3_10 ...
D Quantize @convolution_3_10:bias to dynamic_fixed_point.
D Quantize @convolution_3_10:weight to dynamic_fixed_point.
D Packing convolution_41_100 ...
D Quantize @convolution_41_100:bias to dynamic_fixed_point.
D Quantize @convolution_41_100:weight to dynamic_fixed_point.
D Packing convolution_42_103 ...
D Quantize @convolution_42_103:bias to dynamic_fixed_point.
D Quantize @convolution_42_103:weight to dynamic_fixed_point.
D Packing convolution_44_107 ...
D Quantize @convolution_44_107:bias to dynamic_fixed_point.
D Quantize @convolution_44_107:weight to dynamic_fixed_point.
D Packing convolution_45_110 ...
D Quantize @convolution_45_110:bias to dynamic_fixed_point.
D Quantize @convolution_45_110:weight to dynamic_fixed_point.
D Packing convolution_47_114 ...
D Quantize @convolution_47_114:bias to dynamic_fixed_point.
D Quantize @convolution_47_114:weight to dynamic_fixed_point.
D Packing convolution_48_117 ...
D Quantize @convolution_48_117:bias to dynamic_fixed_point.
D Quantize @convolution_48_117:weight to dynamic_fixed_point.
D Packing convolution_50_121 ...
D Quantize @convolution_50_121:bias to dynamic_fixed_point.
D Quantize @convolution_50_121:weight to dynamic_fixed_point.
D Packing convolution_51_124 ...
D Quantize @convolution_51_124:bias to dynamic_fixed_point.
D Quantize @convolution_51_124:weight to dynamic_fixed_point.
D Packing convolution_53_128 ...
D Quantize @convolution_53_128:bias to dynamic_fixed_point.
D Quantize @convolution_53_128:weight to dynamic_fixed_point.
D Packing convolution_54_131 ...
D Quantize @convolution_54_131:bias to dynamic_fixed_point.
D Quantize @convolution_54_131:weight to dynamic_fixed_point.
D Packing convolution_56_135 ...
D Quantize @convolution_56_135:bias to dynamic_fixed_point.
D Quantize @convolution_56_135:weight to dynamic_fixed_point.
D Packing convolution_57_138 ...
D Quantize @convolution_57_138:bias to dynamic_fixed_point.
D Quantize @convolution_57_138:weight to dynamic_fixed_point.
D Packing convolution_59_142 ...
D Quantize @convolution_59_142:bias to dynamic_fixed_point.
D Quantize @convolution_59_142:weight to dynamic_fixed_point.
D Packing convolution_5_14 ...
D Quantize @convolution_5_14:bias to dynamic_fixed_point.
D Quantize @convolution_5_14:weight to dynamic_fixed_point.
D Packing convolution_60_145 ...
D Quantize @convolution_60_145:bias to dynamic_fixed_point.
D Quantize @convolution_60_145:weight to dynamic_fixed_point.
D Packing convolution_62_149 ...
D Quantize @convolution_62_149:bias to dynamic_fixed_point.
D Quantize @convolution_62_149:weight to dynamic_fixed_point.
D Packing convolution_63_152 ...
D Quantize @convolution_63_152:bias to dynamic_fixed_point.
D Quantize @convolution_63_152:weight to dynamic_fixed_point.
D Packing convolution_64_155 ...
D Quantize @convolution_64_155:bias to dynamic_fixed_point.
D Quantize @convolution_64_155:weight to dynamic_fixed_point.
D Packing convolution_66_159 ...
D Quantize @convolution_66_159:bias to dynamic_fixed_point.
D Quantize @convolution_66_159:weight to dynamic_fixed_point.
D Packing convolution_67_162 ...
D Quantize @convolution_67_162:bias to dynamic_fixed_point.
D Quantize @convolution_67_162:weight to dynamic_fixed_point.
D Packing convolution_69_166 ...
D Quantize @convolution_69_166:bias to dynamic_fixed_point.
D Quantize @convolution_69_166:weight to dynamic_fixed_point.
D Packing convolution_6_17 ...
D Quantize @convolution_6_17:bias to dynamic_fixed_point.
D Quantize @convolution_6_17:weight to dynamic_fixed_point.
D Packing convolution_70_169 ...
D Quantize @convolution_70_169:bias to dynamic_fixed_point.
D Quantize @convolution_70_169:weight to dynamic_fixed_point.
D Packing convolution_72_173 ...
D Quantize @convolution_72_173:bias to dynamic_fixed_point.
D Quantize @convolution_72_173:weight to dynamic_fixed_point.
D Packing convolution_73_176 ...
D Quantize @convolution_73_176:bias to dynamic_fixed_point.
D Quantize @convolution_73_176:weight to dynamic_fixed_point.
D Packing convolution_75_180 ...
D Quantize @convolution_75_180:bias to dynamic_fixed_point.
D Quantize @convolution_75_180:weight to dynamic_fixed_point.
D Packing convolution_76_183 ...
D Quantize @convolution_76_183:bias to dynamic_fixed_point.
D Quantize @convolution_76_183:weight to dynamic_fixed_point.
D Packing convolution_77_186 ...
D Quantize @convolution_77_186:bias to dynamic_fixed_point.
D Quantize @convolution_77_186:weight to dynamic_fixed_point.
D Packing convolution_78_189 ...
D Quantize @convolution_78_189:bias to dynamic_fixed_point.
D Quantize @convolution_78_189:weight to dynamic_fixed_point.
D Packing convolution_79_192 ...
D Quantize @convolution_79_192:bias to dynamic_fixed_point.
D Quantize @convolution_79_192:weight to dynamic_fixed_point.
D Packing convolution_7_20 ...
D Quantize @convolution_7_20:bias to dynamic_fixed_point.
D Quantize @convolution_7_20:weight to dynamic_fixed_point.
D Packing convolution_80_195 ...
D Quantize @convolution_80_195:bias to dynamic_fixed_point.
D Quantize @convolution_80_195:weight to dynamic_fixed_point.
D Packing convolution_81_198 ...
D Quantize @convolution_81_198:bias to dynamic_fixed_point.
D Quantize @convolution_81_198:weight to dynamic_fixed_point.
D Packing convolution_84_201 ...
D Quantize @convolution_84_201:bias to dynamic_fixed_point.
D Quantize @convolution_84_201:weight to dynamic_fixed_point.
D Packing convolution_87_206 ...
D Quantize @convolution_87_206:bias to dynamic_fixed_point.
D Quantize @convolution_87_206:weight to dynamic_fixed_point.
D Packing convolution_88_209 ...
D Quantize @convolution_88_209:bias to dynamic_fixed_point.
D Quantize @convolution_88_209:weight to dynamic_fixed_point.
D Packing convolution_89_212 ...
D Quantize @convolution_89_212:bias to dynamic_fixed_point.
D Quantize @convolution_89_212:weight to dynamic_fixed_point.
D Packing convolution_90_215 ...
D Quantize @convolution_90_215:bias to dynamic_fixed_point.
D Quantize @convolution_90_215:weight to dynamic_fixed_point.
D Packing convolution_91_218 ...
D Quantize @convolution_91_218:bias to dynamic_fixed_point.
D Quantize @convolution_91_218:weight to dynamic_fixed_point.
D Packing convolution_92_221 ...
D Quantize @convolution_92_221:bias to dynamic_fixed_point.
D Quantize @convolution_92_221:weight to dynamic_fixed_point.
D Packing convolution_93_224 ...
D Quantize @convolution_93_224:bias to dynamic_fixed_point.
D Quantize @convolution_93_224:weight to dynamic_fixed_point.
D Packing convolution_96_227 ...
D Quantize @convolution_96_227:bias to dynamic_fixed_point.
D Quantize @convolution_96_227:weight to dynamic_fixed_point.
D Packing convolution_99_232 ...
D Quantize @convolution_99_232:bias to dynamic_fixed_point.
D Quantize @convolution_99_232:weight to dynamic_fixed_point.
D Packing convolution_9_24 ...
D Quantize @convolution_9_24:bias to dynamic_fixed_point.
D Quantize @convolution_9_24:weight to dynamic_fixed_point.
I Saving data to yolov3.export.data
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_yolov3.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_yolov3.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_post_process.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_post_process.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_pre_process.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_pre_process.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_global.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/main.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/BUILD
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/yolov3.vcxproj
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/makefile.linux
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/.cproject
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/.project
D Generate fake input /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/input_0_0.tensor
mv: cannot stat '/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/*.nb': No such file or directory
mv: cannot stat '/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/*.dat': No such file or directory
I Dump nbg input meta to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/nbg_meta.json
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_yolov3.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_yolov3.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_post_process.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_post_process.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_pre_process.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_pre_process.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/vnn_global.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/main.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/BUILD
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/yolov3.vcxproj
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/makefile.linux
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/.cproject
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify/.project
mv: cannot stat '/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/network_binary.nb': No such file or directory
/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify
customer:input,0,0:output,0,1:output,1,2:output,2,3:
*********************************
/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo
/
/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo_nbg_unify
I ----------------Error(0),Warning(0)----------------
ubuntu@khadas:~/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo$
ubuntu@khadas:~/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo$ ll
total 301192
drwxr-xr-x 1 ubuntu ubuntu       896 Mar 31 22:21 ./
drwxr-xr-x 1 ubuntu ubuntu       288 Mar 31 18:28 ../
-rwxr--r-- 1 ubuntu ubuntu      1419 Mar 31 16:44 0_import_model.sh*
-rwxr--r-- 1 ubuntu ubuntu       954 Mar 31 14:36 1_quantize_model.sh*
-rwxr--r-- 1 ubuntu ubuntu      1038 Mar 31 18:35 2_export_case_code.sh*
-rw-rw-r-- 1 ubuntu ubuntu       567 Mar 31 22:21 BUILD
-rw-rw-r-- 1 ubuntu ubuntu     38377 Mar 31 22:21 .cproject
drwxr-xr-x 1 ubuntu ubuntu       224 Mar 31 14:49 data/
drwxrwxr-x 1 ubuntu ubuntu        64 Mar 31 18:30 demo_nbg_unify/
-rwxr--r-- 1 ubuntu ubuntu       666 Jul  4  2021 extractoutput.py*
-rwxr--r-- 1 ubuntu ubuntu       760 Jul  4  2021 inference.sh*
-rw-rw-r-- 1 ubuntu ubuntu      6435 Mar 31 22:21 main.c
-rw-rw-r-- 1 ubuntu ubuntu      2018 Mar 31 22:21 makefile.linux
drwxr-xr-x 1 ubuntu ubuntu        96 Jul  4  2021 model/
-rw-rw-r-- 1 ubuntu ubuntu      2189 Mar 31 22:21 .project
-rw-rw-r-- 1 ubuntu ubuntu       685 Mar 31 22:21 vnn_global.h
-rw-rw-r-- 1 ubuntu ubuntu      4153 Mar 31 22:21 vnn_post_process.c
-rw-rw-r-- 1 ubuntu ubuntu       572 Mar 31 22:21 vnn_post_process.h
-rw-rw-r-- 1 ubuntu ubuntu     24836 Mar 31 22:21 vnn_pre_process.c
-rw-rw-r-- 1 ubuntu ubuntu      1646 Mar 31 22:21 vnn_pre_process.h
-rw-rw-r-- 1 ubuntu ubuntu    237480 Mar 31 22:21 vnn_yolov3.c
-rw-rw-r-- 1 ubuntu ubuntu      1161 Mar 31 22:21 vnn_yolov3.h
-rw-rw-r-- 1 ubuntu ubuntu 246109509 Mar 31 22:21 yolov3.data
-rw-rw-r-- 1 ubuntu ubuntu  61603564 Mar 31 22:21 yolov3.export.data
-rw-rw-r-- 1 ubuntu ubuntu    107638 Mar 31 22:19 yolov3.json
-rw-rw-r-- 1 ubuntu ubuntu     72854 Mar 31 22:21 yolov3.quantize
-rw-rw-r-- 1 ubuntu ubuntu     12814 Mar 31 22:21 yolov3.vcxproj

@Tony-Wang The --reorder-channel parameter in your second script does not match the one in your third script.
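A quick way to compare the two values (run from the demo directory where the scripts live):

    # Print the --reorder-channel line from each script
    grep -n "reorder-channel" 1_quantize_model.sh 2_export_case_code.sh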

Thanks for the reminder. After changing it, nothing changed; the situation is the same as before.

@Tony-Wang Have you tried the official darknet config and weights files? I tested the official ones here and they convert normally. Your script parameters look fine to me.

I Saving data to yolov3.export.data
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_yolov3.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_yolov3.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_post_process.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_post_process.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_pre_process.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_pre_process.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/vnn_global.h
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/main.c
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/BUILD
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/yolov3.vcxproj
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/makefile.linux
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/.cproject
I Save vx network source file to /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/.project
D Generate fake input /home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/input_0_0.tensor
mv: cannot stat '/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/*.nb': No such file or directory
mv: cannot stat '/home/ubuntu/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo/*.dat': No such file or directory
Traceback (most recent call last):
  File "ovxgenerator.py", line 196, in <module>
  File "ovxgenerator.py", line 187, in main
  File "acuitylib/app/exporter/ovxlib_case/casegenerator.py", line 696, in generate
  File "acuitylib/app/exporter/ovxlib_case/casegenerator.py", line 668, in _gen_special_case
  File "acuitylib/app/exporter/ovxlib_case/casegenerator.py", line 129, in _build_nb_netdict
FileNotFoundError: [Errno 2] No such file or directory: 'graph.json'
[19276] Failed to execute script ovxgenerator
mv: cannot stat '*.lib': No such file or directory

With the official files I cannot generate the graph.json file either. Shall I send you my weights and cfg so you can take a look?

You are seeing the same problem. What changes did you make to the SDK?

I did not make any changes. I am using the 6.4.6.2 prebuilt version.

@Tony-Wang

Go through the documentation again from the beginning and first convert the model that ships with the SDK, to confirm that your environment is fine.
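Roughly like this (a sketch, assuming the stock demo scripts have not been modified):

    # Convert the bundled demo model end to end; an .nb file should appear
    # if the toolkit environment is set up correctly.
    cd ~/mnt/DDK_6.4.6.2/tool/acuity-toolkit-binary-5.21.1/demo
    bash 0_import_model.sh && bash 1_quantize_model.sh && bash 2_export_case_code.sh
    find .. -name '*.nb'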

@Tony-Wang Not at the moment; it can currently only be cloned this way. If you need it, I can package it and send it to you.

A specific commit does need to be specified. Take a look at how the sub-repository is used: it pins a fixed commit. The latest code does not contain it, and it differs from the commit pinned by the SDK you have.

Thanks, with your help I have completed the whole pipeline.
There are a few things I still don't understand.
1. Is this currently the only way to get the confidence of a detection result?

        // The detection result exposes the class id and a label string.
        int classId = resultData.result_name[i].lable_id;
        std::string label = resultData.result_name[i].lable_name;
        // find(32) locates the space character (ASCII 32); the two digits after it
        // are the confidence percentage, converted here to a 0-1 float.
        float confidence = std::stoi(label.substr(label.find(32) + 1, 2)) / 100.0;

2. Can the inference confidence threshold be configured in libnn_yolo_v3.so?

@Tony-Wang The source code is open, so you can study the source of libnn_yolo_v3.so.

Hi Frank,
It seems this document no longer exists. I am trying to use aml_npu_sdk to run a custom face recognition model on the VIM3 board, but its results are very bad.
Also, I don't get any errors while converting my model. Any suggestions?

Hello @Hamze_Asadi ,

How bad is the result? Does it detect fewer faces than the original model, or detect nothing at all?

If it is the former, I suggest the following.
First, make sure you use at least two hundred images for quantization.
Then check --batch-size and --iterations in 1_quantize_model.sh; batch-size × iterations must equal the number of images (see the small check below).
If those are fine, try quantizing the model to int16.
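A small sanity check for that relationship (a sketch only; the dataset path and the numbers are placeholders to adapt to your setup):

    # Verify that batch-size x iterations equals the number of calibration images
    DATASET=data/validation_tf.txt   # whatever file you pass as --source-file(s)
    BATCH_SIZE=50                    # example values only
    ITERATIONS=4
    IMAGES=$(wc -l < "$DATASET")
    if [ $((BATCH_SIZE * ITERATIONS)) -ne "$IMAGES" ]; then
        echo "Mismatch: $BATCH_SIZE x $ITERATIONS != $IMAGES images"
    fi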

If it is the latter, please refer to aml_npu_sdk/docs/en/NN Tool FAQ (0.5).pdf, page 12, section 4.2, to check whether it is a conversion problem or not.

The output is random. This is my conversion script:
./convert \
    --model-name fr_model \
    --platform onnx \
    --model '/acuity-toolkit/app/model/native/model.onnx' \
    --mean-values '0 0 0 0.0039215686275' \
    --quantized-dtype 'asymmetric_affine' \
    --qtype 'uint8' \
    --batch-size 61 \
    --iterations 7 \
    --inputs 't.1' \
    --input-size-list '1,120,120' \
    --size-with-batch 'False' \
    --source-files '/acuity-toolkit/app/dataset/mydata/meta_jpg.txt' \
    --input-dtype-list 'uint8' \
    --kboard VIM3 --print-level 1

Hello @Hamze_Asadi ,

There is nothing wrong if the model parameters are right.

The model parameters are OK, but the result is really bad. A few questions regarding the conversion:
1. Is it possible that the accuracy drop comes from quantization? I know quantization costs a bit of accuracy, but my model after quantization is almost random.
2. In the documentation there is a parameter named fl which describes how many bits are used for the fractional part. I used int8 for quantization and got fl=12, which does not make sense since this value should be less than 8. Any idea why this happens?