Demonstration (On VIM3)
How to Run
Clone the GitHub repository:
$ git clone https://github.com/khadas/ksnn.git
$ cd ksnn/examples/caffe
Plug in your webcam and find its device number with the following command:
$ ls /dev/video*
If you have multiple devices and are unsure which is your webcam, compare the command output between the unplugged and plugged-in states.
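If the listing alone is ambiguous, a short probe can confirm which node actually delivers frames. The sketch below is not part of the KSNN examples; it only assumes OpenCV for Python is installed (the demo scripts require it anyway):

# probe_cameras.py -- check which /dev/videoX nodes actually deliver frames.
# Not part of the KSNN examples; assumes OpenCV for Python is installed.
import glob
import cv2

for node in sorted(glob.glob('/dev/video*')):
    index = int(node.replace('/dev/video', ''))
    cap = cv2.VideoCapture(index)
    ok, frame = cap.read()
    cap.release()
    if ok:
        print('%s: delivers %dx%d frames -- likely the webcam' % (node, frame.shape[1], frame.shape[0]))
    else:
        print('%s: no frames (metadata or unused node)' % node)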
Single Person Demo
Replace ‘X’ with your webcam’s device number, e.g. 1
$ python3 openpose-signle-cap.py --model ./models/VIM3/openpose.nb --library libs/libnn_openpose.so --device X --level 0
$ python3 openpose-signle-picture.py --model ./models/VIM3/openpose.nb --library libs/libnn_openpose.so --picture data/person.jpg --level 0
Multiple Persons Demo
Replace ‘X’ with your webcam’s device number, e.g. 1
$ python3 openpose-multi-cap.py --model ./models/VIM3/openpose.nb --library libs/libnn_openpose.so --device X --level 0
$ python3 openpose-multi-picture.py --model ./models/VIM3/openpose.nb --library libs/libnn_openpose.so --picture data/person.jpg --level 0
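Under the hood, the demo scripts use the KSNN Python API: they load the converted .nb model together with its library, grab frames from the given device, and run inference on each frame. The outline below follows the call pattern of the other KSNN example scripts; exact parameter values (such as the number of output tensors) are assumptions, so refer to the demo source for the authoritative code.

# Sketch of the demo flow, not a drop-in replacement for the demo scripts.
import cv2 as cv
from ksnn.api import KSNN
from ksnn.types import *

openpose = KSNN('VIM3')                              # target board
openpose.nn_init(library='libs/libnn_openpose.so',   # --library
                 model='./models/VIM3/openpose.nb',  # --model
                 level=0)                            # --level (log verbosity)

cap = cv.VideoCapture(0)                             # --device X
ok, frame = cap.read()
if ok:
    # output_tensor=1 and reorder='2 1 0' are assumptions taken from the
    # other KSNN Caffe examples; check the demo script for the values
    # actually used and for the heatmap/PAF post-processing.
    outputs = openpose.nn_inference([frame], platform='CAFFE',
                                    reorder='2 1 0', output_tensor=1,
                                    output_format=output_format.OUT_FORMAT_FLOAT32)
    print('received', len(outputs), 'output tensor(s)')
cap.release()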
Convert Parameters
$ ./convert \
--model-name openpose \
--platform caffe \
--model pose_deploy_linevec.prototxt \
--weights pose_iter_440000.caffemodel \
--mean-values '0,0,0,256' \
--quantized-dtype asymmetric_affine \
--kboard VIM3 --print-level 1
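About --mean-values: the four numbers are the per-channel means followed by a scale, and the input is normalized roughly as (pixel - mean) / scale; with '0,0,0,256' each pixel value is simply divided by 256. A plain NumPy illustration (not part of the convert tool):

import numpy as np

mean = np.array([0.0, 0.0, 0.0])      # per-channel means from --mean-values
scale = 256.0                         # last value of --mean-values

pixel = np.array([128, 64, 255], dtype=np.float32)   # one BGR pixel
normalized = (pixel - mean) / scale
print(normalized)                     # [0.5, 0.25, ~0.996]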
Model
The official original model needs to be modified before conversion; the modified model is available here:
https://gitlab.com/yan518/models-zoo-big/-/blob/master/openpose-368.caffemodel
Note
- The model used here is the original network and still needs to be optimized.
- Consider using AlphaPose instead.