Khadas convert VIM3 pytorch

Hello, I want to convert my PyTorch model so I can use it with the Khadas NPU. I’m getting this error:

RuntimeError: version_number <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /pytorch/caffe2/serialize/inline_container.cc:131, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 1. Your PyTorch installation may be too old. (init at /pytorch/caffe2/serialize/inline_container.cc:131)

Apparently the error occurs because the torch library used to load the model is older than the one used to create it. The stack trace points to the loading library:

frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fc8aa130273 in /home/glema/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libc10.so)
frame #1: caffe2::serialize::PyTorchStreamReader::init() + 0x1e9a (0x7fc8ad0bf00a in /home/glema/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)

Is it possible to upgrade the library inside the repo to support newer versions of pytorch?
What version of PyTorch should I use to create the model so that your library can convert it?

Thanks,
Gabriel

@Gabriel_Lema I suggest you convert the model to ONNX and then use the conversion tool. The PyTorch version bundled with this tool is too old to load your model.

Thanks for your answer. After converting it to ONNX, I was able to run the convert script with no issues. However, I don’t see any outputs in the folder where I’m running the command.

Here is the model in case it helps you debug the issue: ssd-mobilenet.onnx - Google Drive

And this is the command I’m running:

./convert --model-name mobilenet_ssd --platform onnx --kboard VIM3 --model ./ssd-mobilenet.onnx --quantized-dtype asymmetric_afine --mean-values 127,127,127 --input-size-list 3,300,300 --inputs input_0 --outputs "'scores boxes'" --print-level 1 --std-values 128

And this is the complete output: output.log - Google Drive

Thanks,
Gabriel

@Gabriel_Lema I will test it today

@Gabriel_Lema Try my command

./convert --model-name mobilenet_ssd --platform onnx --kboard VIM3 --model ~/Downloads/ssd-mobilenet.onnx --quantized-dtype asymmetric_affine --mean-values '127.5,127.5,127.5,127.5' --print-level 1

Your command had a misspelling (`asymmetric_affine`, not `asymmetric_afine`), and some parameters are not needed.

Thanks for your reply. I was able to convert the model, but I’m not getting the desired output.

This is what I use to run the onnx model:

import cv2 as cv
import numpy as np
import onnxruntime as ort

ort_sess = ort.InferenceSession('/home/khadas/ssd-mobilenet.onnx')
img = cv.resize(img[:, :, ::-1], (300, 300)).transpose(2, 0, 1)  # BGR -> RGB, HWC -> CHW
img = (img - 127) / 128.
img = img.astype('float32')
img = img[np.newaxis]
scores, boxes = ort_sess.run(None, {'input_0': img})

What should I do to run this using the ksnn library?

I’m trying

outputs = ssd.nn_inference(img, platform='ONNX', output_tensor=2, reorder='2, 1, 0', output_format=output_format.OUT_FROMAT_FLOAT32)

But the results are not the ones I expected. I even tried preprocessing the image the same way as I did with the onnx model.

Given that I use those mean and std values in preprocessing, should I pass them to the ./convert script?
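As a sanity check on the mean/std question: as far as I understand, the converter bakes `(pixel - mean) / std` into the quantized model, so `--mean-values 127,127,127 --std-values 128` should match the manual preprocessing shown earlier. A quick numeric check of that equivalence (the per-channel broadcast is the only assumption here):

```python
import numpy as np

# Random HWC uint8-style image, like a decoded frame.
img = np.random.randint(0, 256, (300, 300, 3)).astype(np.float32)

# Preprocessing done manually before onnxruntime:
manual = (img - 127) / 128.0

# What --mean-values 127,127,127 --std-values 128 would apply per channel:
mean = np.array([127.0, 127.0, 127.0], dtype=np.float32)
std = 128.0
baked = (img - mean) / std

assert np.allclose(manual, baked)
```

If the conversion bakes these values in, feeding an already-normalized image at inference time would normalize twice, which matches the fix described later in the thread (skip manual preprocessing when using KSNN).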

Thanks for your help!

@Gabriel_Lema The data you get back is the raw model output and needs to be post-processed. KSNN ships an SSD demo; it targets a different model than yours, but it may help.
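For the post-processing step, a minimal greedy non-maximum suppression sketch over score/box arrays might look like this (thresholds and the `[x1, y1, x2, y2]` box layout are assumptions, not taken from the KSNN demo):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns indices of kept boxes, highest score first.
    """
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap the kept box too much
        order = order[1:][iou <= iou_thresh]
    return keep
```

In practice you would first filter boxes by a confidence threshold, then run NMS per class.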

Yes, I meant before the NMS part. But now it is working correctly 🙂
I converted the model with the mean and std expected by the preprocessing, so at inference time I skip preprocessing and just do

import cv2
from ksnn.types import output_format  # assuming the KSNN Python bindings are installed

img = cv2.imread(fn_img)[:, :, ::-1]  # BGR -> RGB
scores, boxes = ssd.nn_inference(img, platform='ONNX', output_tensor=2, reorder='2, 1, 0', output_format=output_format.OUT_FROMAT_FLOAT32)

Then I run an NMS function (not the one provided).
Thanks
