NPU Mobilenet SSD v2 demo and source code

@larrylart

I tried your code and it works on my VIM3 board.
The speed is about 65 frames/sec, i.e. ~15 ms/frame.
Do you plan to convert an ADAS model, like lane detection, to evaluate the performance? Your GitHub mentions ADAS.

Thanks,

Hi Larry, thanks for sharing.
I tried it and converted the TFLite project (nbg_unity_mobileSSD). Can I build this source on x64 Ubuntu?

I built it on the VIM3 itself. I’m guessing you can build it on x64 as well if you use Fenix; see the NPU demo sample.

I might. I started the project with the idea of making a rear drive camera with an LED matrix display, made out of 4 x MAX7219 Dot Matrix Module 4-in-1, to warn tailgating drivers of distance/speed. For that, an easy way to measure approximate distance would be to detect the car and calculate the distance using the lens focal length and camera sensor size. I would only need a detector for various vehicle class sizes, and perhaps cyclists/motorcycles.

On the same note, I thought: why not use a forward system as well, with a larger class set (pedestrians etc.), and connect the two over a gigabit network so they can share intelligence? For example, the forward system could pass info on hazards ahead and show it on the LED display.

We’ll see, it’s a work in progress. I started working on a system with an XU4 and two Intel Movidius NCS to barely get 16 FPS, then I went from the Gyrfalcon 2801 to Jetson and Coral, and now the VIM, which seems promising, and even better if they solve a few bits, like the PCIe M.2 :slight_smile: Or even better if they make an M2X Extension Board specialized for robotics, similar to the Qualcomm Robotics RB3,
with GPS/accelerometer/gyro/etc. and an external battery and/or supercaps for a clean shutdown.
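The distance-from-detection idea described above can be sketched with the pinhole camera model. All numbers here (focal length in pixels, a typical car width) are illustrative assumptions, not values from the project:

```python
# Pinhole-model distance estimate: a sketch, not the project's actual code.
# Assumed values: focal length in pixels and a nominal car width; real values
# would come from camera calibration and the detected object class.

def estimate_distance_m(bbox_width_px, focal_length_px=700.0, real_width_m=1.8):
    """Approximate distance to an object of known real-world width.

    distance = real_width * focal_length / apparent_width_in_pixels
    """
    if bbox_width_px <= 0:
        raise ValueError("bounding box width must be positive")
    return real_width_m * focal_length_px / bbox_width_px

# Example: a car 1.8 m wide appearing 126 px wide at f = 700 px is ~10 m away.
print(round(estimate_distance_m(126), 1))
```

In practice the focal length in pixels comes from camera calibration (or from the sensor width and lens focal length), and accuracy degrades when the detection box does not tightly fit the vehicle.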

Larry, I couldn’t compile your Makefile code on VIM3 :frowning:
I installed OpenCV 4.1 from source, that’s okay.
But I can’t get the Makefile to find the library headers.
It says:

root@Khadas:/home/khadas/Desktop/Folder/aml_npu_sdk/linux_sdk/demo/vim3# make
aarch64-linux-gnu-g++ -o obj/aml_obj_detect.o -O3 --std=c++11 -mcpu=cortex-a73 -funsafe-math-optimizations -ftree-vectorize -fPIC -I/usr/local/include/opencv4/opencv -I/usr/local/include/opencv4 -c aml_obj_detect.cpp
In file included from include/ovxlib/vsi_nn_context.h:27:0,
                 from include/ovxlib/vsi_nn_pub.h:13,
                 from aml_worker.h:25,
                 from aml_obj_detect.cpp:18:
include/ovxlib/vsi_nn_platform.h:27:10: fatal error: VX/vx_khr_cnn.h: No such file or directory
 #include <VX/vx_khr_cnn.h>
compilation terminated.
Makefile:61: recipe for target 'obj/aml_obj_detect.o' failed
make: *** [obj/aml_obj_detect.o] Error 1

It looks like you are missing the AML SDK path. Edit the Makefile and at the top set the path to the SDK: AML_SDK_PATH=

My file and path look like this; what should I do?
/home/khadas/Desktop/Folder/aml_npu_sdk/
Could the path be the problem?
I changed many things, but every time I got an error. Normally I use -mcpu=a73 for AArch64; this image is wrong.

My bad, create a folder named obj in vim3_npu/. It’s a temp folder for the compiler’s .o files.

No problem :grinning: thanks for your help, you are brilliant people. Tomorrow I will add the empty folder. Everybody makes a little mistake.

Thanks for sharing your plan.
My plan is to use the VIM3 and a touchscreen to emulate comma.ai’s openpilot (open source on GitHub) without the car-control part, so it can do lane detection and departure warning.

Good news: it’s done on VIM3 :smiley: thanks again Larry.

Larry, thanks for sharing this repo. I am trying to convert MobileNet SSD (v1) and failing. This is what I run:

…/bin/convertensorflow --tf-pb /media/omer/DATA1/Data/10_classes_300X300/checkpoint/out/tflite_graph.pb --inputs normalized_input_image_tensor --input-size-list '300,300,3' --outputs 'raw_outputs/box_encodings concat_1' --net-output /media/omer/DATA1/Data/10_classes_300X300/checkpoint/out_aml/mobilenet_ssd.json --data-output /media/omer/DATA1/Data/10_classes_300X300/checkpoint/out_aml/mobilenet_ssd.data

And this is what I get:

Fold/bias:out0', 'FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_3_depthwise/mul_fold:out0', 'FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_13_pointwise/mul_fold:out0', 'FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_5_3x3_s2_32/mul_fold:out0']
2019-12-03 17:57:30.881831: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
I Convert To TFLite to Import quantized model.
Traceback (most recent call last):
  File "convertensorflow.py", line 62, in <module>
  File "convertensorflow.py", line 58, in main
  File "acuitylib/app/importer/import_tensorflow.py", line 125, in run
IndexError: list index out of range
[18365] Failed to execute script convertensorflow

Any ideas?
Thanks
Omer

Omer, how did you make the tflite_graph?
Did you use the export_inference.py or export_tflite_graph.py script?

I used export_tflite_ssd_graph.py as explained in the repo README. It actually creates a .pb file and not a .tflite file, but that seems to follow the steps as I understood them.

Maybe the issue is because the model is already quantized? Should I export a non-quantized model instead?

You look to be on the right track; maybe this sample will help you:

~/tensorflow/models$ python3 research/object_detection/export_tflite_ssd_graph.py --pipeline_config_path=research/object_detection/test_data/pipeline.config --trained_checkpoint_prefix research/object_detection/test_data/model.ckpt-93313 --output_directory train/fortflite/ --add_postprocessing_op=true

Hi Omer, this is the issue: “TensorFlow binary was not compiled to use: AVX2 FMA”.
I had the same problem initially; it was due to the fact that my x86 Ubuntu was running on an old processor which didn’t support AVX2, and the precompiled Acuity TensorFlow binary requires it. I installed Ubuntu in a virtual machine on my desktop, which has a newer processor, and it worked.
What processor do you use?
See here for more details on AVX: https://stackoverflow.com/questions/47068709/your-cpu-supports-instructions-that-this-tensorflow-binary-was-not-compiled-to-u
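A quick way to check whether a Linux host advertises AVX2 is to look at the CPU flags in /proc/cpuinfo. A small sketch (the helper name is mine, not from this thread; Linux-only):

```python
# Sketch: report whether the CPU flags in /proc/cpuinfo include a given flag
# (e.g. "avx2"). On non-Linux systems the file won't exist and we return False.

def cpu_has_flag(flag, cpuinfo_path="/proc/cpuinfo"):
    """Return True if the first 'flags' line in cpuinfo lists the given flag."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # "flags : fpu vme ... avx avx2 fma ..." -> token match,
                    # so "avx" does not falsely match "avx2".
                    return flag in line.split(":", 1)[1].split()
    except OSError:
        pass
    return False

if __name__ == "__main__":
    print("AVX2 supported:", cpu_has_flag("avx2"))
```

From a shell, `grep avx2 /proc/cpuinfo` gives the same answer; if the flag is missing, the prebuilt binary needs a newer machine or a VM on one, as described above.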

I have the same problem. I tried many conversions; every time it makes a folder, and when I compiled on VIM3 all the files were made.
When I converted the Inception v3 model, it worked.
But when I converted MobileNet SSD v1 and MobileNet SSD v2, all the files were made and converted,
yet this warning message appears.
And when I open the application, the results are wrong. I inspected all the code and found that my output levels (asymmetric quantization) have a problem.
My CPU is an i7 6700, Ubuntu x64 18.04, GPU GTX 960M, and I use CUDA 10.0.

Larry, when I converted MobileNet SSD, in the script "./1_quantize_model"
I used the parameter -source-file ./data/validation_tf.txt
and my text file is: ./06AF5418._9_9_.JPG, 208
Could this be the problem? Everything works, but the output results are too small.
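The validation list quoted above appears to use one "image_path, label" pair per line. A small sketch that sanity-checks such a file before running the quantization step (the function name and checks are mine, not part of the SDK):

```python
# Sketch: flag suspicious entries in a quantization list of the assumed
# "image_path, label" form, since a bad list silently skews quantization.
import os

def check_validation_list(path):
    """Yield (line_number, problem) for suspicious entries."""
    with open(path) as f:
        for n, raw in enumerate(f, start=1):
            line = raw.strip()
            if not line:
                continue  # blank lines are harmless
            parts = [p.strip() for p in line.split(",")]
            if len(parts) != 2:
                yield n, "expected 'image_path, label'"
                continue
            img, label = parts
            if not label.lstrip("-").isdigit():
                yield n, "label is not an integer"
            if not os.path.isfile(img):
                yield n, "image file not found"
```

Relative paths like ./06AF5418._9_9_.JPG are resolved against the current working directory, so running the quantize script from a different directory is one easy way for every image lookup to fail.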

Thanks Larry, I did notice the message but ignored it, since it’s only an info message and online reading led me to believe it’s mostly to do with performance. I also noticed your comment on the CPU model, but since I am running an Intel i7 I thought it was modern enough. Anyway, thanks so much for the input, I will try to convert on a different machine.
EDIT: I just verified that indeed for me the problem was not the CPU, but rather trying to convert a quantized model. I was able to convert a model that was not quantized during training.
