Importing Your Own YOLOv3 or YOLOv3-Tiny Model from Scratch

Preface: After working through the "Getting started with darknet YOLOv3 and YOLOv3-Tiny from scratch" document, we now have a YOLOv3 or YOLOv3-Tiny model trained by ourselves. This post demonstrates, step by step, how to convert that model and fit it into khadas_android_npu_app on the VIM3 Android platform. For the VIM3 Ubuntu platform, please refer to the corresponding document.

Part 1: Convert the model

Model conversion flowchart:

1. Import the model

The whole model conversion is done inside the acuity-toolkit directory:

cd {workspace}/aml_npu_sdk/acuity-toolkit
cp {workspace}/yolov3-khadas_ai.cfg_train demo/model/
cp {workspace}/yolov3-khadas_ai_last.weights demo/model/
cp {workspace}/test.jpg demo/model/

Make the following modifications:

hlm@Server:/users/hlm/npu/aml_npu_sdk/acuity-toolkit/demo$ git diff
diff --git a/acuity-toolkit/demo/0_import_model.sh b/acuity-toolkit/demo/0_import_model.sh
index 3198810..4671c9f 100755
--- a/acuity-toolkit/demo/0_import_model.sh
+++ b/acuity-toolkit/demo/0_import_model.sh
@@ -1,6 +1,6 @@
 #!/bin/bash
 
-NAME=mobilenet_tf
+NAME=yolov3
 ACUITY_PATH=../bin/
 
 convert_caffe=${ACUITY_PATH}convertcaffe
@@ -11,13 +11,13 @@ convert_onnx=${ACUITY_PATH}convertonnx
 convert_keras=${ACUITY_PATH}convertkeras
 convert_pytorch=${ACUITY_PATH}convertpytorch
 
-$convert_tf \
-    --tf-pb ./model/mobilenet_v1.pb \
-    --inputs input \
-    --input-size-list '224,224,3' \
-    --outputs MobilenetV1/Predictions/Softmax \
-    --net-output ${NAME}.json \
-    --data-output ${NAME}.data 
+#$convert_tf \
+#    --tf-pb ./model/mobilenet_v1.pb \
+#    --inputs input \
+#    --input-size-list '224,224,3' \
+#    --outputs MobilenetV1/Predictions/Softmax \
+#    --net-output ${NAME}.json \
+#    --data-output ${NAME}.data 
        
 #$convert_caffe \
 #    --caffe-model xx.prototxt   \
@@ -30,11 +30,11 @@ $convert_tf \
 #    --net-output ${NAME}.json \
 #    --data-output ${NAME}.data 
 
-#$convert_darknet \
-#    --net-input xxx.cfg \
-#      --weight-input xxx.weights \
-#    --net-output ${NAME}.json \
-#    --data-output ${NAME}.data 
+$convert_darknet \
+    --net-input ./model/yolov3-khadas_ai.cfg_train \
+    --weight-input ./model/yolov3-khadas_ai_last.weights \
+    --net-output ${NAME}.json \
+    --data-output ${NAME}.data 

--- a/acuity-toolkit/demo/data/validation_tf.txt
+++ b/acuity-toolkit/demo/data/validation_tf.txt
@@ -1 +1 @@
-./space_shuttle_224.jpg, 813
+./test.jpg

Run the corresponding script; it produces yolov3.json (the network description) and yolov3.data (the weights):

bash 0_import_model.sh 

2. Quantize the model

hlm@Server:/users/hlm/npu/aml_npu_sdk/acuity-toolkit/demo$ git diff
diff --git a/acuity-toolkit/demo/1_quantize_model.sh b/acuity-toolkit/demo/1_quantize_model.sh
index 630ea7f..ee7bd00 100755
--- a/acuity-toolkit/demo/1_quantize_model.sh
+++ b/acuity-toolkit/demo/1_quantize_model.sh
@@ -1,6 +1,6 @@
 #!/bin/bash
 
-NAME=mobilenet_tf
+NAME=yolov3
 ACUITY_PATH=../bin/
 
 tensorzone=${ACUITY_PATH}tensorzonex
@@ -11,12 +11,12 @@ $tensorzone \
     --dtype float32 \
     --source text \
     --source-file data/validation_tf.txt \
-    --channel-mean-value '128 128 128 128' \
-    --reorder-channel '0 1 2' \
+    --channel-mean-value '0 0 0 256' \
+    --reorder-channel '2 1 0' \
     --model-input ${NAME}.json \
     --model-data ${NAME}.data \
     --model-quantize ${NAME}.quantize \
-    --quantized-dtype asymmetric_affine-u8 \
+    --quantized-dtype dynamic_fixed_point-i8 \

Note: data/validation_tf.txt (edited above to point at test.jpg) supplies the calibration image used during quantization, and the --quantized-dtype here differs from the one used on the Ubuntu platform. Run the corresponding script:

bash 1_quantize_model.sh
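
For context: --channel-mean-value takes three per-channel means followed by a scale, so '0 0 0 256' subtracts 0 from each channel and divides by 256, mapping 0-255 pixel values into [0, 1), while --reorder-channel '2 1 0' reverses the channel order (RGB/BGR). dynamic_fixed_point-i8 stores each tensor as int8 plus a fractional length fl, i.e. real value = q / 2^fl (the runtime log quoted later in this thread prints qnt[DFP fl= 7]). Below is a minimal C sketch of this arithmetic; the function names are illustrative, not part of the SDK:

#include <stdint.h>
#include <stdio.h>

/* Preprocess one pixel the way the toolkit is configured here:
 * channel-mean-value '0 0 0 256'  ->  (p - 0) / 256 */
static float preprocess(uint8_t p) { return ((float)p - 0.0f) / 256.0f; }

/* dynamic_fixed_point-i8: real value = q / 2^fl */
static float dfp_to_float(int8_t q, int fl) { return (float)q / (float)(1 << fl); }

static int8_t float_to_dfp(float v, int fl) {
    float q = v * (float)(1 << fl);
    if (q > 127.0f)  q = 127.0f;   /* saturate to the int8 range */
    if (q < -128.0f) q = -128.0f;
    return (int8_t)q;
}

int main(void) {
    printf("pixel 255      -> %.4f\n", preprocess(255));      /* ~0.9961 */
    printf("dfp 64 (fl=7)  -> %.2f\n", dfp_to_float(64, 7));  /* 0.50 */
    printf("0.5 (fl=7)     -> %d\n",   float_to_dfp(0.5f, 7)); /* 64 */
    return 0;
}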

3. Generate the case code

hlm@Server:/users/hlm/npu/aml_npu_sdk/acuity-toolkit/demo$ git diff
diff --git a/acuity-toolkit/demo/2_export_case_code.sh b/acuity-toolkit/demo/2_export_case_code.sh
index 85b101b..867c5b9 100755
--- a/acuity-toolkit/demo/2_export_case_code.sh
+++ b/acuity-toolkit/demo/2_export_case_code.sh
@@ -1,6 +1,6 @@
 #!/bin/bash
 
-NAME=mobilenet_tf
+NAME=yolov3
 ACUITY_PATH=../bin/
 
 export_ovxlib=${ACUITY_PATH}ovxgenerator
@@ -8,8 +8,8 @@ export_ovxlib=${ACUITY_PATH}ovxgenerator
 $export_ovxlib \
     --model-input ${NAME}.json \
     --data-input ${NAME}.data \
-    --reorder-channel '0 1 2' \
-    --channel-mean-value '128 128 128 128' \
+    --reorder-channel '2 1 0' \
+    --channel-mean-value '0 0 0 256' \

Run the corresponding script:

bash 2_export_case_code.sh

Of course, as long as the scripts report no errors, once the modifications above are in place you can also run all three in one go:

bash 0_import_model.sh && bash 1_quantize_model.sh  && bash 2_export_case_code.sh

In the end this produces an nbg_unify_yolov3 directory:

Part 2: Import into the demo app on the VIM3 Android platform and run it

1. Install the NDK build environment

wget https://dl.google.com/android/repository/android-ndk-r17-linux-x86_64.zip
unzip android-ndk-r17-linux-x86_64.zip
vim ~/.bashrc
## Add the following two lines to the end of the file
##export NDKROOT=/path/to/android-ndk-r17
##export PATH=$NDKROOT:$PATH

As shown in the figure:

Then run this command in the SSH session you are going to use:

source ~/.bashrc

2. Build the related .so libraries

(1) Download khadas_android_npu_library

git clone https://gitlab.com/khadas/khadas_android_npu_library -b khadas_ai

(2) Copy the vnn_pre_process.h, vnn_post_process.h, vnn_yolov3.h, and vnn_yolov3.c files from the nbg_unify_yolov3 directory into the corresponding directories below.

cp {workspace}/nbg_unify_yolov3/vnn_pre_process.h {workspace}/khadas_android_npu_library/model_code/detect_yolo_v3/jni/include/
cp {workspace}/nbg_unify_yolov3/vnn_post_process.h {workspace}/khadas_android_npu_library/model_code/detect_yolo_v3/jni/include/
cp {workspace}/nbg_unify_yolov3/vnn_yolov3.h {workspace}/khadas_android_npu_library/model_code/detect_yolo_v3/jni/include/
cp {workspace}/nbg_unify_yolov3/vnn_yolov3.c {workspace}/khadas_android_npu_library/model_code/detect_yolo_v3/jni/

(3) Modify yolov3_process.c
Our demo has two classes (KuLi, DuLanTe), so modify the class-name array and update num_class and LISTSIZE:

static char *coco_names[] = {"KuLi","DuLanTe"};


Here LISTSIZE = (num_class + 5) = 2 + 5 = 7.
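
Why num_class + 5: for every anchor box, a YOLO output layer predicts 4 box coordinates (x, y, w, h) plus 1 objectness score, followed by one score per class, and each scale uses 3 anchors per grid cell. A small illustrative C sketch of the resulting channel count (the macro values mirror this demo; treat the exact names as assumptions, not the real source):

#include <stdio.h>

#define NUM_CLASS 2                  /* KuLi, DuLanTe */
#define LISTSIZE  (NUM_CLASS + 5)    /* x, y, w, h, objectness + class scores */
#define ANCHORS_PER_SCALE 3          /* standard YOLOv3 anchor count per scale */

int main(void) {
    /* Each YOLO output tensor carries ANCHORS_PER_SCALE * LISTSIZE channels:
     * 3 * 85 = 255 for the 80 COCO classes (matching the shape[...,255,1]
     * lines in the runtime log quoted later), and 3 * 7 = 21 here. */
    printf("output channels = %d\n", ANCHORS_PER_SCALE * LISTSIZE);
    return 0;
}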

(4) Build libnn_yolo_v3.so (ndk-build writes it to libs/armeabi-v7a/)

cd {workspace}/khadas_android_npu_library/model_code/detect_yolo_v3
ndk-build

3. Import the libraries into the demo app and run it

(1) Download the demo app

git clone https://github.com/khadas/khadas_android_npu_app -b khadas_ai

(2) Replace libnn_yolo_v3.so

cp {workspace}/khadas_android_npu_library/model_code/detect_yolo_v3/libs/armeabi-v7a/libnn_yolo_v3.so {workspace}/khadas_android_npu_app/app/libs/armeabi-v7a/

(3) Replace the .nb file. The _88 suffix is what the app expects on VIM3 (its NPU reports pid=0x88, as seen in the log later in this thread); other boards load a _99 file instead.

cp {workspace}/acuity-toolkit/demo/nbg_unify_yolov3/yolov3.nb {workspace}/khadas_android_npu_app/app/src/main/assets/yolov3_88.nb

(4) All done. Import the khadas_android_npu_app project into Android Studio, then build and run it on the VIM3 board.


Supplement: YOLOv3-Tiny

1. Import the model

The whole model conversion is done inside the acuity-toolkit directory:

cd {workspace}/aml_npu_sdk/acuity-toolkit
cp {workspace}/yolov3-khadas_ai_tiny.cfg_train demo/model/
cp {workspace}/yolov3-khadas_ai_tiny_last.weights demo/model/
cp {workspace}/test.jpg demo/model/

Make the following modifications:

hlm@Server:/users/hlm/npu/aml_npu_sdk/acuity-toolkit/demo$ git diff
diff --git a/acuity-toolkit/demo/0_import_model.sh b/acuity-toolkit/demo/0_import_model.sh
index 3198810..3b4efd3 100755
--- a/acuity-toolkit/demo/0_import_model.sh
+++ b/acuity-toolkit/demo/0_import_model.sh
@@ -1,6 +1,6 @@
 #!/bin/bash
 
-NAME=mobilenet_tf
+NAME=yolotiny
 ACUITY_PATH=../bin/
 
 convert_caffe=${ACUITY_PATH}convertcaffe
@@ -11,13 +11,13 @@ convert_onnx=${ACUITY_PATH}convertonnx
 convert_keras=${ACUITY_PATH}convertkeras
 convert_pytorch=${ACUITY_PATH}convertpytorch
 
-$convert_tf \
-    --tf-pb ./model/mobilenet_v1.pb \
-    --inputs input \
-    --input-size-list '224,224,3' \
-    --outputs MobilenetV1/Predictions/Softmax \
-    --net-output ${NAME}.json \
-    --data-output ${NAME}.data 
+#$convert_tf \
+#    --tf-pb ./model/mobilenet_v1.pb \
+#    --inputs input \
+#    --input-size-list '224,224,3' \
+#    --outputs MobilenetV1/Predictions/Softmax \
+#    --net-output ${NAME}.json \
+#    --data-output ${NAME}.data 
        
 #$convert_caffe \
 #    --caffe-model xx.prototxt   \
@@ -30,11 +30,11 @@ $convert_tf \
 #    --net-output ${NAME}.json \
 #    --data-output ${NAME}.data 
 
-#$convert_darknet \
-#    --net-input xxx.cfg \
-#      --weight-input xxx.weights \
-#    --net-output ${NAME}.json \
-#    --data-output ${NAME}.data 
+$convert_darknet \
+    --net-input ./model/yolov3-khadas_ai_tiny.cfg_train \
+       --weight-input ./model/yolov3-khadas_ai_tiny_last.weights \
+    --net-output ${NAME}.json \
+    --data-output ${NAME}.data 

--- a/acuity-toolkit/demo/data/validation_tf.txt
+++ b/acuity-toolkit/demo/data/validation_tf.txt
@@ -1 +1 @@
-./space_shuttle_224.jpg, 813
+./test.jpg

2. Quantize the model

hlm@Server:/users/hlm/npu/aml_npu_sdk/acuity-toolkit/demo$ git diff
diff --git a/acuity-toolkit/demo/1_quantize_model.sh b/acuity-toolkit/demo/1_quantize_model.sh
index 630ea7f..ee7bd00 100755
--- a/acuity-toolkit/demo/1_quantize_model.sh
+++ b/acuity-toolkit/demo/1_quantize_model.sh
@@ -1,6 +1,6 @@
 #!/bin/bash
 
-NAME=mobilenet_tf
+NAME=yolotiny
 ACUITY_PATH=../bin/
 
 tensorzone=${ACUITY_PATH}tensorzonex
@@ -11,12 +11,12 @@ $tensorzone \
     --dtype float32 \
     --source text \
     --source-file data/validation_tf.txt \
-    --channel-mean-value '128 128 128 128' \
-    --reorder-channel '0 1 2' \
+    --channel-mean-value '0 0 0 256' \
+    --reorder-channel '2 1 0' \
     --model-input ${NAME}.json \
     --model-data ${NAME}.data \
     --model-quantize ${NAME}.quantize \
-    --quantized-dtype asymmetric_affine-u8 \
+    --quantized-dtype dynamic_fixed_point-i8 \

Note: as before, the --quantized-dtype here differs from the Ubuntu platform. Run the corresponding script:

bash 1_quantize_model.sh

3. Generate the case code

hlm@Server:/users/hlm/npu/aml_npu_sdk/acuity-toolkit/demo$ git diff
diff --git a/acuity-toolkit/demo/2_export_case_code.sh b/acuity-toolkit/demo/2_export_case_code.sh
index 85b101b..867c5b9 100755
--- a/acuity-toolkit/demo/2_export_case_code.sh
+++ b/acuity-toolkit/demo/2_export_case_code.sh
@@ -1,6 +1,6 @@
 #!/bin/bash
 
-NAME=mobilenet_tf
+NAME=yolotiny
 ACUITY_PATH=../bin/
 
 export_ovxlib=${ACUITY_PATH}ovxgenerator
@@ -8,8 +8,8 @@ export_ovxlib=${ACUITY_PATH}ovxgenerator
 $export_ovxlib \
     --model-input ${NAME}.json \
     --data-input ${NAME}.data \
-    --reorder-channel '0 1 2' \
-    --channel-mean-value '128 128 128 128' \
+    --reorder-channel '2 1 0' \
+    --channel-mean-value '0 0 0 256' \

After making the modifications above, run the scripts in one go:

bash 0_import_model.sh && bash 1_quantize_model.sh  && bash 2_export_case_code.sh

In the end this produces an nbg_unify_yolotiny directory under the demo directory:

Part 2: Import into the demo app on the VIM3 Android platform and run it

1. Install the NDK build environment

wget https://dl.google.com/android/repository/android-ndk-r17-linux-x86_64.zip
unzip android-ndk-r17-linux-x86_64.zip
vim ~/.bashrc
## Add the following two lines to the end of the file
##export NDKROOT=/path/to/android-ndk-r17
##export PATH=$NDKROOT:$PATH

As shown in the figure:

Then run this command in the SSH session you are going to use:

source ~/.bashrc

2. Build the related .so libraries

(1) Download khadas_android_npu_library

git clone https://gitlab.com/khadas/khadas_android_npu_library -b khadas_ai

(2) Add the khadas_android_npu_library/model_code/detect_yolo_tiny/ directory and its code

The code has already been prepared; you can compare it against the yolov3 version to see the exact differences, which are mainly in the *_process.c files, as shown in the figure below:

(3) Copy the vnn_pre_process.h, vnn_post_process.h, vnn_yolotiny.h, and vnn_yolotiny.c files from the nbg_unify_yolotiny directory into the corresponding directories below.

cp {workspace}/nbg_unify_yolotiny/vnn_pre_process.h {workspace}/khadas_android_npu_library/model_code/detect_yolo_tiny/jni/include/
cp {workspace}/nbg_unify_yolotiny/vnn_post_process.h {workspace}/khadas_android_npu_library/model_code/detect_yolo_tiny/jni/include/
cp {workspace}/nbg_unify_yolotiny/vnn_yolotiny.h {workspace}/khadas_android_npu_library/model_code/detect_yolo_tiny/jni/include/
cp {workspace}/nbg_unify_yolotiny/vnn_yolotiny.c {workspace}/khadas_android_npu_library/model_code/detect_yolo_tiny/jni/

(4) Modify yolo_tiny_process.c
Our demo has two classes (KuLi, DuLanTe), so modify the class-name array and update num_class and LISTSIZE:

static char *coco_names[] = {"KuLi","DuLanTe"};


Here LISTSIZE = (num_class + 5) = 2 + 5 = 7.
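
The LISTSIZE arithmetic is identical to the YOLOv3 case; the structural difference is that YOLOv3-Tiny predicts at two scales (13x13 and 26x26 grids for a 416x416 input) instead of the three used by full YOLOv3. A small illustrative C sketch of the per-scale output sizes, assuming a 416x416 input:

#include <stdio.h>

#define NUM_CLASS 2
#define LISTSIZE  (NUM_CLASS + 5)
#define ANCHORS_PER_SCALE 3

int main(void) {
    /* YOLOv3-Tiny has two YOLO output layers; full YOLOv3 has three. */
    int grids[2] = { 13, 26 };
    for (int i = 0; i < 2; i++) {
        int g = grids[i];
        printf("%2dx%-2d scale: %d output values\n",
               g, g, g * g * ANCHORS_PER_SCALE * LISTSIZE);
    }
    return 0;
}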

(5) Build libnn_yolo_tiny.so

cd {workspace}/khadas_android_npu_library/model_code/detect_yolo_tiny
ndk-build

(6) Build libkhadas_npu_jni.so

First, add the following code:

hlm@Server:/users/hlm/npu/khadas_android_npu_library/detect_code$ git diff
diff --git a/detect_code/jni/khadas_npu_det.cpp b/detect_code/jni/khadas_npu_det.cpp
index 7db2632..f90862e
--- a/detect_code/jni/khadas_npu_det.cpp
+++ b/detect_code/jni/khadas_npu_det.cpp
@@ -159,6 +159,9 @@ static jint npu_det_set_model(JNIEnv *env, jclass clazz __unused,jint modelType)
                case 2:
                type = DET_YOLO_V3;
                break;
+               case 3:
+               type = DET_YOLO_TINY;
+               break;          
                default:
                type = DET_FACENET;
                break;
@@ -201,6 +204,9 @@ static jint npu_det_get_result(JNIEnv *env, jclass clazz __unused,jobject detres
                case 2:
                type = DET_YOLO_V3;
                break;
+               case 3:
+               type = DET_YOLO_TINY;
+               break;                  
                default:
                type = DET_FACENET;
                break;
@@ -311,6 +317,9 @@ static jint npu_det_set_input(JNIEnv *env, jclass clazz __unused,jbyteArray imgb
                case 2:
                type = DET_YOLO_V3;
                break;
+               case 3:
+               type = DET_YOLO_TINY;
+               break;                  
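
A note on these case values: the app invokes npu_det_set_model(mode_type.ordinal()) from Java (see the CameraActivity diff below), so each case in this switch must match the position of the corresponding constant in the Java ModeType enum; DET_YOLO_TINY is appended at the end of that enum precisely so the existing ordinals 0-2 stay unchanged. A minimal illustrative C sketch of the mapping (not the actual SDK source):

#include <stdio.h>

/* Ordinals of the Java ModeType enum after the change below. */
enum mode_type { DET_YOLOFACE_V2, DET_YOLO_V2, DET_YOLO_V3, DET_YOLO_TINY };

/* Mirrors the switch in khadas_npu_det.cpp: the integer that Java passes
 * via mode_type.ordinal() selects the native detection model. */
static enum mode_type map_model(int ordinal) {
    switch (ordinal) {
    case 0:  return DET_YOLOFACE_V2;
    case 1:  return DET_YOLO_V2;
    case 2:  return DET_YOLO_V3;
    case 3:  return DET_YOLO_TINY;    /* the newly added case */
    default: return DET_YOLOFACE_V2;  /* the real code falls back to DET_FACENET */
    }
}

int main(void) {
    printf("ordinal 3 maps to DET_YOLO_TINY: %d\n", map_model(3) == DET_YOLO_TINY);
    return 0;
}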

Build:

hlm@Server:/users/hlm/npu/khadas_android_npu_library/detect_code$ ndk-build 

4. Import the libraries into the demo app and run it

(1) Download the demo app

git clone https://github.com/khadas/khadas_android_npu_app -b khadas_ai

(2) Replace libnn_yolo_tiny.so

cp {workspace}/khadas_android_npu_library/model_code/detect_yolo_tiny/libs/armeabi-v7a/libnn_yolo_tiny.so {workspace}/khadas_android_npu_app/app/libs/armeabi-v7a/

(3) Replace libkhadas_npu_jni.so

cp {workspace}/khadas_android_npu_library/detect_code/libs/armeabi-v7a/libkhadas_npu_jni.so {workspace}/khadas_android_npu_app/app/libs/armeabi-v7a/

(4) Replace the .nb file (again using the _88 name the app expects on VIM3)

cp {workspace}/acuity-toolkit/demo/nbg_unify_yolotiny/yolotiny.nb {workspace}/khadas_android_npu_app/app/src/main/assets/yolotiny_88.nb

(5) Add a yolo_tiny button and its handler to the app
The code changes are as follows:

hlm@Server:/users/hlm/npu/khadas_android_npu_app$ git diff app/src/
diff --git a/app/src/main/java/com/khadas/npudemo/CameraActivity.java b/app/src/main/java/com/khadas/npudemo/CameraActivity.java
index 20f3e8d..dc91f60
--- a/app/src/main/java/com/khadas/npudemo/CameraActivity.java
+++ b/app/src/main/java/com/khadas/npudemo/CameraActivity.java
@@ -111,7 +111,8 @@ public abstract class CameraActivity extends AppCompatActivity implements Camera
     public enum ModeType {
         DET_YOLOFACE_V2,
         DET_YOLO_V2,
-        DET_YOLO_V3
+        DET_YOLO_V3,
+        DET_YOLO_TINY
     }
     static ModeType mode_type;
 
@@ -162,6 +163,12 @@ public abstract class CameraActivity extends AppCompatActivity implements Camera
                 } else {
                     in = assmgr.open("yolov3_99.nb");
                 }
+            }  if(mode_type == ModeType.DET_YOLO_TINY) {
+                if(mStrboard.equals("kvim3")) {
+                    in = assmgr.open("yolotiny_88.nb");
+                } else {
+                    in = assmgr.open("yolotiny_99.nb");
+                }
             }  if(mode_type == ModeType.DET_YOLOFACE_V2) {
                 if(mStrboard.equals("kvim3")) {
                     in = assmgr.open("yolo_face_88.nb");
@@ -278,7 +285,14 @@ public abstract class CameraActivity extends AppCompatActivity implements Camera
             }
             setmoderesult = inceptionv3.npu_det_set_model(mode_type.ordinal());
         }
-
+        if(mode_type == ModeType.DET_YOLO_TINY) {
+            if(mStrboard.equals("kvim3")) {
+                copyNbFile(this, "yolotiny_88.nb");
+            } else {
+                copyNbFile(this, "yolotiny_99.nb");
+            }
+            setmoderesult = inceptionv3.npu_det_set_model(mode_type.ordinal());
+        }
         if(mode_type == ModeType.DET_YOLOFACE_V2) {
             if(mStrboard.equals("kvim3")) {
                 copyNbFile(this, "yolo_face_88.nb");
diff --git a/app/src/main/java/com/khadas/npudemo/MainActivity.java b/app/src/main/java/com/khadas/npudemo/MainActivity.java
index e858c2a..e85d6a9
--- a/app/src/main/java/com/khadas/npudemo/MainActivity.java
+++ b/app/src/main/java/com/khadas/npudemo/MainActivity.java
@@ -13,11 +13,13 @@ import android.app.AlertDialog;
 import android.content.DialogInterface;
 
 public class MainActivity extends AppCompatActivity implements View.OnClickListener{
+    private Button button_yolotiny;
     private Button button_yolov3;
     private Button button_yolov2;
     private Button button_yoloface;
     public  static  final String Intent_key="modetype";
     private static final String TAG = "MainActivity";
+    AlertDialog.Builder alertDialog0;
     AlertDialog.Builder alertDialog;
     AlertDialog.Builder alertDialog2;
     AlertDialog.Builder alertDialog3;
@@ -25,7 +27,8 @@ public class MainActivity extends AppCompatActivity implements View.OnClickListe
     public enum ModeType {
         DET_YOLOFACE_V2,
         DET_YOLO_V2,
-        DET_YOLO_V3
+        DET_YOLO_V3,
+        DET_YOLO_TINY
     }
 
 
@@ -38,15 +41,35 @@ public class MainActivity extends AppCompatActivity implements View.OnClickListe
         TextView textView = (TextView)findViewById(R.id.title);
         textView.setText("model selection");
 
-
+        button_yolotiny = (Button) findViewById(R.id.button_yolotiny);
         button_yolov3 = (Button) findViewById(R.id.button_yolov3);
         button_yolov2 = (Button) findViewById(R.id.button_yolov2);
         button_yoloface = (Button) findViewById(R.id.button_yoloface);
 
+        button_yolotiny.setOnClickListener(this);
         button_yolov3.setOnClickListener(this);
         button_yolov2.setOnClickListener(this);
         button_yoloface.setOnClickListener(this);
 
+        alertDialog0 = new AlertDialog.Builder(MainActivity.this);
+        alertDialog0.setTitle("prompt");
+        alertDialog0.setMessage("yolotiny image recognition model will run");
+        alertDialog0.setNegativeButton("cancel", new DialogInterface.OnClickListener() { // add the "Cancel" button
+                    @Override
+                    public void onClick(DialogInterface dialogInterface, int i) {
+                        Log.e(TAG, "AlertDialog cancel");
+                        //onClickNo();
+                    }
+                })
+                .setPositiveButton("ok", new DialogInterface.OnClickListener() { // add the "Yes" button
+                    @Override
+                    public void onClick(DialogInterface dialogInterface, int i) {
+                        Log.e(TAG, "AlertDialog ok");
+                        onClickYolovTiny();
+                    }
+                })
+                .create();
+
         alertDialog = new AlertDialog.Builder(MainActivity.this);
         alertDialog.setTitle("prompt");
         alertDialog.setMessage("yolov3 image recognition model will run");
@@ -112,6 +135,13 @@ public class MainActivity extends AppCompatActivity implements View.OnClickListe
     public void onClick(View v) {
         //Log.e(TAG,"OnClickListener");
         switch (v.getId()) {
+            case R.id.button_yolotiny:
+                Log.e(TAG, "button_yolotiny");
+                //onClickButton1(v);
+                alertDialog0.setCancelable(false); // tapping outside the dialog will not dismiss it
+                alertDialog0.show();
+                buttonSetFocus(v);
+                break;
             case R.id.button_yolov3:
                 Log.e(TAG, "button_yolov3");
                 //onClickButton1(v);
@@ -146,7 +176,14 @@ public class MainActivity extends AppCompatActivity implements View.OnClickListe
         button.requestFocusFromTouch();
 
     }
-
+       
+    private void onClickYolovTiny() {
+        // handling logic
+        Log.e(TAG, "button_yolovtiny enter ");
+        Intent intent = new Intent(this,ClassifierActivity.class);
+        intent.putExtra(Intent_key,ModeType.DET_YOLO_TINY.ordinal());
+        startActivity(intent);
+    }
 
     private void onClickYolov3() {
        // handling logic
diff --git a/app/src/main/res/layout/activity_main.xml b/app/src/main/res/layout/activity_main.xml
index 41ba40c..e2216d2
--- a/app/src/main/res/layout/activity_main.xml
+++ b/app/src/main/res/layout/activity_main.xml
@@ -23,6 +23,12 @@
             android:text="model select"/>
 
         <Button
+            android:id="@+id/button_yolotiny"
+            android:layout_width="300dp"
+            android:layout_height="80dp"
+            style="?android:attr/buttonBarButtonStyle"
+            android:text="yolotiny model" />
+        <Button
             android:id="@+id/button_yolov3"
             android:layout_width="300dp"
             android:layout_height="80dp"

(6) All done. Import the khadas_android_npu_app project into Android Studio, then build and run it on the VIM3 board.




Hello, I followed your steps exactly for the Tiny model conversion and got this error: A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0 in tid 15839 (ImageListener), pid 15790 (.khadas.npudemo)
I used the official coco_tiny.cfg and coco_tiny.weights. I hope you can reply and help; thank you very much.

@guofq Have you already solved this yourself? I haven't verified the official files; it's best to follow this document's steps first.

Not solved. My VIM3 runs Android, and following this document the yolo_tiny model crashes immediately on Android. I have now reflashed the VIM3 to Ubuntu and plan to try yolo_tiny there. Thank you.
Is it allowed to share QQ here? Mine is 9-1-5-5-7-1-3-0-0; I hope we can discuss this further.

Does it crash if you change nothing at all? If not, you will have to compare what you modified yourself.

Thank you very much for your reply.
Over the weekend I tried yolov3 again with the official weights and cfg, just following the official tutorial. The model conversion was done in an Ubuntu 18.04 VM on my PC by running the three scripts (0, 1, 2); the build was done on the VIM3 board (Ubuntu 20.04) under aml_npu_app to produce yolov3.so, and inference was also run on the VIM3 after replacing yolov3.so in aml_npu_demo_binaries. The result was: Model_create_fail, file_path=nn_data, dev_type = 1, det_set_model fail. ret = -4. If I instead use the Android yolov3.so built with ndk-build and run it on Android, the app crashes. I will post the full yolov3 procedure and attachments later; please help me find where the problem is. Thanks a lot.

@guofq You may have mixed things up: NPU usage differs between Android and Ubuntu. This document is for Android, so please follow my steps exactly, doing both the conversion and the build in the virtual machine.


@goenjoy Thanks. I didn't express myself clearly; I actually ran into problems on both systems.
First, on Android, I followed this article exactly. The conversion and build were indeed done in the VM: the conversion scripts are identical to yours, and the build was done with ndk-build in the corresponding model_code directory of khadas_android_npu_library. But after copying the resulting Android .so into the app project, the app crashes at runtime; I can't capture a log, so I can't provide more information.
So over the weekend I reflashed the VIM3 to Ubuntu and followed the official guide: https://docs.khadas.com/linux/zh-cn/vim3/ConvertToUseNPU.html (convert and run your own model on the NPU).
I did the conversion in the VM and the build on the VIM3 board, but inference reports the following error:
khadas@Khadas:~/workspace/aml_npu_demo_binaries/detect_demo_picture$ ./detect_demo_x11 -m 2 -p 1080p.bmp
W Detect_api:[det_set_log_level:19]Set log level=1
W Detect_api:[det_set_log_level:21]output_format not support Imperfect, default to DET_LOG_TERMINAL
det_set_log_config Debug
#productname=VIPNano-QI, pid=0x88
I [vsi_nn_CreateGraph:478]OVXLIB_VERSION==1.1.27
D [setup_node:441]Setup node id[0] uid[0] op[NBG]
D [print_tensor:146]in(0) : id[ 0] vtl[0] const[0] shape[ 416, 416, 3, 1 ] fmt[i8 ] qnt[DFP fl= 7]
D [print_tensor:146]out(0): id[ 1] vtl[0] const[0] shape[ 13, 13, 255, 1 ] fmt[i8 ] qnt[DFP fl= 2]
D [print_tensor:146]out(1): id[ 2] vtl[0] const[0] shape[ 26, 26, 255, 1 ] fmt[i8 ] qnt[DFP fl= 2]
D [print_tensor:146]out(2): id[ 3] vtl[0] const[0] shape[ 52, 52, 255, 1 ] fmt[i8 ] qnt[DFP fl= 2]
D [optimize_node:385]Backward optimize neural network
D [optimize_node:392]Forward optimize neural network
I [compute_node:327]Create vx node
D [compute_node:350]Instance node[0] “NBG” …
generate command buffer, total device count=1, core count per-device: 1,
current device id=0, AXI SRAM base address=0xff000000
vxoBinaryGraph_CheckInputOutputParametes[3377]: quant dfp=1, run time dfp=2
nn patch output failed, please check your output format, output 2
fail to initial memory in generate states buffer
fail in import kernel from file initializer
Failed to initialize Kernel "yolov3_88" of Node 0x5589447eb0 (status = -1)
E [model_create:64]CHECK STATUS(-1:A generic error code, used when no other describes the error.)
E Detect_api:[det_set_model:225]Model_create fail, file_path=nn_data, dev_type=1
det_set_model fail. ret=-4

@guofq I have not verified this walkthrough on Ubuntu; I suggest you verify it on Android.
Write down your steps in detail, or use a process of elimination to locate the problem, for example by swapping library files (.so or .nb files) one at a time.

You haven't replied to this question yet? That needs to be confirmed first.

Hi, can the A311D deploy two deep-learning models for simultaneous detection (object detection)?

Can't a single model meet your needs? Why do you need two models?

@liuyifa The models may need to be trimmed, but as long as the allocated memory is enough for your use, two models can run at the same time; the bandwidth is sufficient.


Hello, how do I obtain the conversion tools under /aml_npu_sdk/acuity-toolkit/bin? Without them, even the first model-import step cannot proceed (the bin folder in the download from https://github.com/khadas/aml_npu_sdk is empty).

@purplepetal That folder is a git submodule; refer to the command in the documentation and pass the appropriate parameters when cloning.


@goenjoy Hello, after I replaced the .so and .nb files generated from my own yolov3 model into the demo app, the picture freezes as soon as it appears and the app crashes a moment later.

The terminal output while it is frozen is:
E/linker: library “/system/vendor/lib/libGAL.so” ("/vendor/lib/libGAL.so") needed or dlopened by “/data/app/com.khadas.npudemo-XY8hNYAHZMZtNmUlk3xdIA==/base.apk!/lib/armeabi-v7a/libGAL.so” is not accessible for the namespace: [name=“classloader-namespace”, ld_library_paths="", default_library_paths="/data/app/com.khadas.npudemo-XY8hNYAHZMZtNmUlk3xdIA==/lib/arm:/data/app/com.khadas.npudemo-XY8hNYAHZMZtNmUlk3xdIA==/base.apk!/lib/armeabi-v7a", permitted_paths="/data:/mnt/expand:/data/data/com.khadas.npudemo"]
I/gralloc: ddebug, pair (share_fd=95, user_hnd=d, ion_client=64)


Below the green line is what appeared after the crash.

Note 1: I used a fresh copy of khadas_android_npu_app downloaded from GitHub today; it ran normally before I replaced the .nb and .so files.
Note 2: the libGAL.so mentioned in the error does exist in the project.

(I asked in another thread about the version change needed to build the .so library: after changing VNN_VERSION_PATCH in vnn_yolov3.h back to 26, the build succeeded, but I don't know whether this error means the generated .so library is faulty.)

If the previous .so worked, that suggests the problem is in your .so. What did you change when you rebuilt the library?

I followed the steps above, replacing and modifying the files using the ones from the generated case code.

Also, the build initially failed, so I changed VNN_VERSION_PATCH in vnn_yolov3.h to 26. The vnn_yolov3.h in my generated case code has VNN_VERSION_PATCH 30 (the library originally had 26, but the steps above say to replace vnn_yolov3.h). After that the .so library built successfully.
Android导入yolov3中:NDK编译JNI so库出错 求助 - VIM3 - Khadas Community

After switching to a different model, the freezing and crashing are gone.

@goenjoy Hello, following your steps, what is the confidence threshold set to, and where can I adjust it?