Application source code runtime error "Segmentation fault": please help

Hello @chaneo ,

I understand your problem.

The demo inference pipeline has five steps: get input, preprocess, model inference, postprocess, and decode-and-draw.

Before this problem appeared, all your results were correct, and you changed nothing except the RTSP input and decoding. And from what I can see, your draw_result code is correct.

In my experience, when I meet this problem, I do the following. First, save a picture that produces a wrong result. Then run inference on that picture on a PC with the original model, to make sure the model itself is correct. Second, run inference on the same picture on the VIM3; in general it will show the same problem. If it does not, that means the picture input and the camera input are not the same. Then compare the decoded results against the raw model output to check whether the decoding is correct. The preprocess and postprocess should be fine, because if either of them had a problem, all results would be incorrect, not just some.
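To make the PC-versus-VIM3 comparison concrete, one option is to dump the decoded boxes from both runs on the same saved picture and diff them. A minimal pure-Python sketch; the (x1, y1, x2, y2) pixel box format and the helper names here are my assumptions, not part of the demo code:

```python
# Hypothetical helpers: compare decoded detection boxes from a PC run and a
# VIM3 run on the *same* saved frame.  Box format is assumed to be
# (x1, y1, x2, y2) in pixels; adjust to whatever your decode step emits.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def match_boxes(pc_boxes, board_boxes, thresh=0.5):
    """For each PC box, report the best-matching board box, its IoU,
    and whether the match clears the threshold."""
    report = []
    for pb in pc_boxes:
        best = max(((iou(pb, bb), bb) for bb in board_boxes),
                   default=(0.0, None))
        report.append((pb, best[1], best[0], best[0] >= thresh))
    return report
```

If every PC box finds a high-IoU partner on the board, the problem is likely in the drawing or input path rather than in decoding.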

Two things may help you check this.

inference.sh
In the convert tool, inference.sh infers the first picture in dataset.txt and saves the input and output.
Run 0_import_model.sh and 1_quantize_model.sh first, and then run inference.sh.

NN Tool FAQ (0.5).pdf, section 4.2
This doc is in aml_npu_sdk/docs/en. Add vsi_nn_SaveTensorToTextByFp32 in aml_npu_app/detect_library/model_code/detect_yolov8n/yolov8n.c. It is the API for saving the model's raw input and output to a txt file.
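Once both sides can dump tensors, a small script can quantify how far apart they are. This sketch assumes the dumps are plain whitespace- or newline-separated fp32 text values, which may not match your SDK version exactly; `load_dump` and `compare_dumps` are hypothetical helper names:

```python
# Sketch: diff two tensor dumps (e.g. PC vs VIM3) produced by
# vsi_nn_SaveTensorToTextByFp32.  Assumed file layout: plain text with
# whitespace-separated fp32 values; verify against your actual dumps.

def load_dump(path):
    """Read all float values from a text tensor dump."""
    with open(path) as f:
        return [float(v) for v in f.read().split()]

def compare_dumps(path_a, path_b):
    """Return (element count, max absolute difference) between two dumps."""
    a, b = load_dump(path_a), load_dump(path_b)
    if len(a) != len(b):
        raise ValueError("dumps differ in length: %d vs %d" % (len(a), len(b)))
    return len(a), max(abs(x - y) for x, y in zip(a, b))
```

A large difference in the *input* dumps points at preprocessing or the capture path; a large difference only in the *output* dumps points at the model or quantization.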

We do not have an RTSP camera, so we cannot try to reproduce your problem. Hopefully the above will help you.

Hello @Louis-Cheng-Liu ,

thank you very much for your detailed explanation and response.

I tested it as you suggested.

As you suggested, I ran inference on the picture on both the PC and the VIM3. On the PC the box was drawn normally, but on the VIM3 the same problem still occurred. So is this a decoding issue? Any idea why I'm having decoding issues? Is there a problem with receiving the video over RTSP? I wonder if I have to capture it only with a USB cam or MIPI cam.

Thank you.

Hello @chaneo ,

So the USB camera does not have this problem, but the picture does? That is confusing.

Could you provide the problem picture? I will try to run it on my VIM3.

Hello @Louis-Cheng-Liu ,

I don’t have a USB camera, so I couldn’t run the demo with one. I’m really sorry, but I can’t give you any pictures due to copyright issues. Could it be that something went wrong with my model conversion?

--------------------------------

0_import_model.sh

NAME=yolov3-tiny
ACUITY_PATH=../bin/
pegasus=${ACUITY_PATH}pegasus
if [ ! -e "$pegasus" ]; then
    pegasus=${ACUITY_PATH}pegasus.py
fi
#Darknet
$pegasus import darknet \
	--model  ${NAME}.cfg \
	--weights  ${NAME}.weights \
	--output-model ${NAME}.json \
	--output-data ${NAME}.data
$pegasus generate inputmeta \
	--model ${NAME}.json \
	--input-meta-output ${NAME}_inputmeta.yml \
	--channel-mean-value "0 0 0 0.003906"  \
	--source-file dataset.txt

1_quantize_model.sh

NAME=yolov3-tiny
ACUITY_PATH=../bin/
pegasus=${ACUITY_PATH}pegasus
if [ ! -e "$pegasus" ]; then
    pegasus=${ACUITY_PATH}pegasus.py
fi
#--quantizer asymmetric_affine --qtype  uint8
#--quantizer dynamic_fixed_point  --qtype int8(int16,note s905d3 not support int16 quantize) 
#--quantizer perchannel_symmetric_affine --qtype int8(int16, note only T3(0xBE) can support perchannel quantize)
$pegasus  quantize \
	--quantizer dynamic_fixed_point \
	--qtype int8 \
	--rebuild \
	--with-input-meta  ${NAME}_inputmeta.yml \
	--model  ${NAME}.json \
	--model-data  ${NAME}.data

2_export_case_code.sh

NAME=yolov3-tiny
ACUITY_PATH=../bin/
pegasus=$ACUITY_PATH/pegasus
if [ ! -e "$pegasus" ]; then
    pegasus=$ACUITY_PATH/pegasus.py
fi
$pegasus export ovxlib \
    --model ${NAME}.json \
    --model-data ${NAME}.data \
    --model-quantize ${NAME}.quantize \
    --with-input-meta ${NAME}_inputmeta.yml \
    --dtype quantized \
    --optimize VIPNANOQI_PID0X88  \
    --viv-sdk ${ACUITY_PATH}vcmdtools \
    --pack-nbg-unify
rm -rf ${NAME}_nbg_unify
mv ../*_nbg_unify ${NAME}_nbg_unify
cd ${NAME}_nbg_unify
mv network_binary.nb ${NAME}.nb
cd ..
#save normal case demo export.data 
mkdir -p ${NAME}_normal_case_demo
mv  *.h *.c .project .cproject *.vcxproj BUILD *.linux *.export.data ${NAME}_normal_case_demo
# delete normal_case demo source
#rm  *.h *.c .project .cproject *.vcxproj  BUILD *.linux *.export.data
rm *.data *.quantize *.json *_inputmeta.yml

Hello @chaneo ,

There is nothing wrong with your conversion scripts.
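One thing still worth double-checking is that the preprocessing on the board matches what the converter used. Your inputmeta passes `--channel-mean-value "0 0 0 0.003906"`, i.e. per-channel means of 0 and a scale of about 1/256. A sketch of that normalization; the helper name is mine, and whether your RTSP decode delivers RGB or BGR (and at what resolution) is also worth verifying against the calibration pictures:

```python
# Sketch of the normalization implied by the inputmeta line
# --channel-mean-value "0 0 0 0.003906": per-channel means of 0 and a
# scale factor of 0.003906 (~1/256).  Whatever feeds the NPU on the board
# (RTSP decode included) should produce the same normalized values that
# the converter produced for the dataset.txt pictures.

MEANS = (0.0, 0.0, 0.0)
SCALE = 0.003906

def normalize_pixel(rgb):
    """Apply (value - mean) * scale to one RGB pixel tuple."""
    return tuple((v - m) * SCALE for v, m in zip(rgb, MEANS))
```

If the board-side pipeline applies a different scale, mean, or channel order than this, every frame from that input path would decode to shifted or garbage boxes even though the model itself is fine.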

Hello @Louis-Cheng-Liu ,

Long time no see. How have you been?

I still haven’t solved this problem. If I send you the cfg file, weights file, and a demo video, could you test it? Please, and thank you.

Hello @chaneo ,

I will try my best to solve it.

We will update the VIM3 New Demo and its documentation in the next two weeks. At that time you can try another model to detect vehicles, such as YOLOv8. You can also choose to wait for it.

If you do not want others to get your model and video, you can send them to me in a private message.