Which Khadas SBC do you use?
Which system do you use? Android, Ubuntu, OOWOW or others?
I want to convert my yolov5n model to run on Khadas. I converted my ONNX model (with aml_npu_sdk/acuity-toolkit/python) by setting up the environment on my own computer.
I created the dataset0.txt file based on my own images.
I moved the .nb and .so files to the Khadas board. I want to run detection with my own model using the KSNN repo. When I test the 1080p.bmp image located in the ksnn/example/onnx/data folder, it produces output.
However, when I try to test it on my own image, I get the error ‘segmentation fault’.
Can you help me?
August 25, 2022, 12:58am
@ahmet_karazor A segfault there usually means the post-processing isn't handling the correct array length. Have you confirmed that the array length your post-processing code expects matches the length of your model's output array?
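For example, printing the raw lengths right after your existing nn_inference call, along these lines, will show whether they line up (the expected totals in the comment assume a 640x640 yolov5 head; adjust num_classes to your model):

# 'data' is the list of output arrays returned by model.nn_inference(...)
for i, out in enumerate(data):
    print('output tensor', i, 'has', len(out), 'values')
# For a 640x640 yolov5 model with num_classes classes, the three outputs should hold
# 3*(5+num_classes) values per cell on 80x80, 40x40 and 20x20 grids respectively;
# compare those totals with the lengths your post-processing indexes into.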
I am getting the "segfault" error in the "model.nn_inference" call itself, before even reaching post-processing.
Model input size:
Model output size:
Image size (w, h) = (640, 360). I also tried resizing the image to match the model input before the inference step, but I get the same error.
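For reference, the flow I am running looks roughly like the following, based on the KSNN example scripts; the crash happens at the nn_inference line (the library/model file names are placeholders for my own converted files, and output_tensor=3 assumes the usual three yolov5 output nodes):

import cv2 as cv
from ksnn.api import KSNN
from ksnn.types import *

model = KSNN('VIM3')
model.nn_init(library='./libnn_yolov5n.so', model='./yolov5n.nb', level=0)

img = cv.imread('./my_image.jpg')   # 640 x 360 source image
img = cv.resize(img, (640, 640))    # resized to the model input size, as mentioned above

cv_img = [img]                      # the KSNN example scripts pass a list of images
data = model.nn_inference(cv_img, platform='ONNX', reorder='2 1 0', output_tensor=3, output_format=output_format.OUT_FORMAT_FLOAT32)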
August 26, 2022, 10:19am
@ahmet_karazor If you need me to debug this, you need to give me your model and source code.
When I run my own ONNX model, the output should look like the one below, but on the Khadas board it gives a segfault error.
The .so, .nb, and .onnx files, the Python detection code, and a sample image are at the GitHub link.
August 29, 2022, 12:57am
@ahmet_karazor I will find time to test it.
Thanks. I am waiting for your answer.
September 5, 2022, 7:01am
./convert --model-name test --platform onnx --model /home/yan/Downloads/yolox_nano.onnx --mean-values '0 0 0 0.00390625' --quantized-dtype asymmetric_affine --source-files ./data/dataset/dataset0.txt --kboard VIM3 --print-level 1
data = yolov3.nn_inference(cv_img, platform='DARKNET', reorder='2 1 0', output_tensor=1, output_format=output_format.OUT_FORMAT_FLOAT32)
It works for me.
|---+ KSNN Version: v1.3 +---|
Start init neural network ...
Get input data ...
Start inference ...
Done. inference time: 0.15771770477294922
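One more note on the parameters above: as far as I understand, --mean-values '0 0 0 0.00390625' means the conversion applies (pixel - 0) * 1/256 per channel inside the NPU, so the image should go into nn_inference as a plain uint8 BGR array, and reorder='2 1 0' then swaps the channels to RGB for the model. Roughly:

import cv2 as cv
img = cv.imread('./my_image.jpg')   # uint8 BGR straight from OpenCV, no /255 in Python
img = cv.resize(img, (640, 640))    # match the converted model's input size
# reorder='2 1 0' in nn_inference then reverses the channel order (BGR -> RGB)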