Generating case code for PyTorch model

I tried to run 0_import_model.sh for a pretrained AlexNet model, but it is showing errors.
2022-01-22 17:05:18.096721: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib
2022-01-22 17:05:18.096749: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
I Start importing pytorch...
Traceback (most recent call last):
File "convertpytorch.py", line 96, in <module>
File "convertpytorch.py", line 86, in main
File "acuitylib/vsi_nn.py", line 225, in load_pytorch_by_onnx_backend
File "acuitylib/onnx_ir/frontend/pytorch_frontend/pytorch_frontend.py", line 45, in __init__
File "torch/jit/__init__.py", line 162, in load
RuntimeError: version_number <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /pytorch/caffe2/serialize/inline_container.cc:131, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 1. Your PyTorch installation may be too old. (init at /pytorch/caffe2/serialize/inline_container.cc:131)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7ff7b3085273 in /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libc10.so)
frame #1: caffe2::serialize::PyTorchStreamReader::init() + 0x1e9a (0x7ff7b601400a in /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)
frame #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::string const&) + 0x60 (0x7ff7b6015270 in /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)
frame #3: torch::jit::import_ir_module(std::shared_ptr<torch::jit::script::CompilationUnit>, std::string const&, c10::optional<c10::Device>, std::unordered_map<std::string, std::string, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, std::string> > >&) + 0x38 (0x7ff7b70f4088 in /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)
frame #4: + 0x4d6abc (0x7ff7fd3f5abc in /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch_python.so)
frame #5: + 0x1d3f04 (0x7ff7fd0f2f04 in /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch_python.so)

frame #26: …/bin/convertpytorch() [0x402ca1]
frame #27: …/bin/convertpytorch() [0x403087]
frame #28: __libc_start_main + 0xe7 (0x7ff84788bbf7 in /lib/x86_64-linux-gnu/libc.so.6)
frame #29: …/bin/convertpytorch() [0x401a9e]

[12808] Failed to execute script convertpytorch
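
For reference, the .pt file was traced and saved roughly like this (a minimal sketch, assuming torchvision's pretrained AlexNet; the torch version used to save the file determines the serialized format version, and the SDK's bundled libtorch only reads version 1, which matches the old torch==1.2.0 mentioned later in the thread):

import torch
import torchvision.models as models

# Trace the pretrained network and save it as TorchScript. A file saved
# with a recent torch produces format version 3, which the SDK's old
# libtorch cannot read; saving with a matching old torch avoids this.
model = models.alexnet(pretrained=True).eval()
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("alexnet.pt")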

Model Link: alexnet.pt - Google Drive

The 0_import_model script is not working for any other network either.

from tensorflow.keras.applications.resnet50 import ResNet50
model = ResNet50(weights='imagenet')
model.save("resnet50_keras.hdf5")

@Omkar_Shende This is caused by different PyTorch versions. It is recommended to install onnx first, and then convert.

It is installed:
torch==1.2.0
onnx==1.6.0

@Omkar_Shende I mean, I suggest you convert the PyTorch model to an ONNX model first, and then convert it with the conversion tool.

Sorry, I just want to be clear:
should I convert to ONNX but save it as .pt,
torch.onnx.export(resnet, inp_batch, "resnet18.pt", export_params=True, opset_version=10)
and then use the pytorch convert script:
$convert_pytorch --pytorch-model xxxx.pt \
    --net-output ${NAME}.json \
    --data-output ${NAME}.data \
    --input-size-list '1,480,854'

or save it as .onnx,
torch.onnx.export(resnet, inp_batch, "resnet18.onnx", export_params=True, opset_version=10)
and use the onnx convert script?
$convert_onnx \
    --onnx-model xxx.onnx \
    --net-output ${NAME}.json \
    --data-output ${NAME}.data

The second one (export to .onnx and use the onnx convert script) is the right way.
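
For example (a minimal sketch, assuming a pretrained resnet18 from torchvision with a 224x224 input):

import torch
import torchvision.models as models

# Export straight to ONNX; the resulting .onnx file goes through
# convertonnx, while convertpytorch expects a TorchScript .pt file.
resnet = models.resnet18(pretrained=True).eval()
inp_batch = torch.randn(1, 3, 224, 224)
torch.onnx.export(resnet, inp_batch, "resnet18.onnx",
                  export_params=True, opset_version=10)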

But with this method the results are not accurate…
--- Top5 ---
599: 9.577607
904: 9.577607
497: 8.254385
906: 7.876322
828: 7.624279
Exit VX Thread: 0x8d1581b0
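
A quick way to check whether the accuracy loss comes from the ONNX export or from the quantization step is to compare the float ONNX model against the original PyTorch model on the same input (a minimal sketch, assuming onnxruntime is installed):

import numpy as np
import onnxruntime as ort
import torch
import torchvision.models as models

# If the float ONNX output matches PyTorch, the export is fine and the
# accuracy loss comes from quantization (e.g. wrong --channel-mean-value).
model = models.resnet18(pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    ref = model(x).numpy()

sess = ort.InferenceSession("resnet18.onnx")
inp_name = sess.get_inputs()[0].name
out = sess.run(None, {inp_name: x.numpy()})[0]
print("max abs diff:", np.abs(ref - out).max())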

Script:

convert_onnx=${ACUITY_PATH}convertonnx

$convert_onnx \
    --onnx-model ./Models/${NAME}.onnx \
    --net-output ${NAME}.json \
    --data-output ${NAME}.data

tensorzone=${ACUITY_PATH}tensorzonex

$tensorzone \
    --action quantization \
    --dtype float32 \
    --source text \
    --source-file data/validation_tf.txt \
    --channel-mean-value '0.485 0.456 0.406 1' \
    --reorder-channel '0 1 2' \
    --model-input ${NAME}.json \
    --model-data ${NAME}.data \
    --quantized-dtype asymmetric_affine-u8 \
    --quantized-rebuild \
    --batch-size 2 \
    --epochs 5

export_ovxlib=${ACUITY_PATH}ovxgenerator

$export_ovxlib \
    --model-input ${NAME}.json \
    --data-input ${NAME}.data \
    --reorder-channel '0 1 2' \
    --channel-mean-value '0.485 0.456 0.406 1' \
    --export-dtype quantized \
    --model-quantize ${NAME}.quantize \
    --optimize VIPNANOQI_PID0X88 \
    --viv-sdk ${ACUITY_PATH}vcmdtools \
    --pack-nbg-unify
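
Note that '0.485 0.456 0.406' here are torchvision's means for inputs already scaled to [0, 1]; if the tool works on 0-255 pixels, these values would need rescaling (a sketch of the conversion, assuming --channel-mean-value is applied as (pixel - mean) / scale with the fourth value as the scale):

import numpy as np

# torchvision normalization: x in [0, 1], y = (x - mean) / std.
# Rewritten for 0-255 pixels p (where x = p / 255):
#     y = (p - 255 * mean) / (255 * std)
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
print(255 * mean)   # [123.675 116.28  103.53 ]
print(255 * std)    # [58.395  57.12   57.375]
# With only a single scale slot, something like
# --channel-mean-value '123.675 116.28 103.53 58' would be the rough
# equivalent (an assumption, not a verified value).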

There is an accuracy issue with the Keras model as well. For MobileNet v2, preprocessing normalizes inputs to [-1, 1],
so the channel mean values are '128 128 128 128', according to a post on the Khadas forum.
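
To make that concrete (a sketch of the preprocessing math; the (pixel - mean) / scale reading of --channel-mean-value is an assumption based on those forum posts):

import numpy as np

# Keras MobileNet v2 preprocessing maps uint8 pixels to [-1, 1]:
#     y = x / 127.5 - 1  ==  (x - 127.5) / 127.5
# so '128 128 128 128' is a close approximation of the exact
# '127.5 127.5 127.5 127.5'.
x = np.array([0.0, 127.5, 255.0])
print((x - 127.5) / 127.5)   # [-1.  0.  1.]
print((x - 128.0) / 128.0)   # [-1.     -0.0039  0.9922] (approximately)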
script:

#!/bin/bash

NAME=mobilenet_v2
ACUITY_PATH=/home/user/khadas/aml_npu_sdk/acuity-toolkit/bin/

convert_caffe=${ACUITY_PATH}convertcaffe
convert_tf=${ACUITY_PATH}convertensorflow
convert_tflite=${ACUITY_PATH}convertflit
convert_darknet=${ACUITY_PATH}convertdarknet
convert_onnx=${ACUITY_PATH}convertonnx
convert_keras=${ACUITY_PATH}convertkeras
convert_pytorch=${ACUITY_PATH}convertpytorch


$convert_keras \
    --keras-model ./Models/${NAME}.hdf5 \
    --outputs Logits \
    --net-output ${NAME}.json \
    --data-output ${NAME}.data

tensorzone=${ACUITY_PATH}tensorzonex

$tensorzone \
    --action quantization \
    --dtype float32 \
    --source text \
    --source-file data/validation_tf.txt \
    --channel-mean-value '128 128 128 128' \
    --reorder-channel '0 1 2' \
    --model-input ${NAME}.json \
    --model-data ${NAME}.data \
    --quantized-dtype asymmetric_affine-u8 \
    --quantized-rebuild
#    --batch-size 2 \
#    --epochs 5

export_ovxlib=${ACUITY_PATH}ovxgenerator

$export_ovxlib \
    --model-input ${NAME}.json \
    --data-input ${NAME}.data \
    --reorder-channel '0 1 2' \
    --channel-mean-value '128 128 128 128' \
    --export-dtype quantized \
    --model-quantize ${NAME}.quantize \
    --optimize VIPNANOQI_PID0X88 \
    --viv-sdk ${ACUITY_PATH}vcmdtools \
    --pack-nbg-unify

rm  *.h *.c .project .cproject *.vcxproj *.lib BUILD *.linux *.data *.quantize *.json
mv ../*_nbg_unify nbg_unify_${NAME}
cd nbg_unify_${NAME}
mv network_binary.nb ${NAME}.nb

@Omkar_Shende

You need to make adjustments according to the actual situation of your model; 128 is just a fairly general value. For example, in my Keras model I use four 127.5 values ('127.5 127.5 127.5 127.5'), which also normalizes to [-1, 1].
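
For instance, the Keras ResNet50 saved earlier in this thread uses caffe-style preprocessing (RGB to BGR plus per-channel mean subtraction, no scaling), so its values would be different again (a sketch, assuming the same (pixel - mean) / scale convention for --channel-mean-value):

import numpy as np

# keras.applications.resnet50.preprocess_input (mode 'caffe'):
# convert RGB -> BGR, then subtract per-channel means, no scaling.
MEANS_BGR = np.array([103.939, 116.779, 123.68])

def preprocess_caffe(rgb):
    # rgb: float HWC array with values in 0-255
    bgr = rgb[..., ::-1]
    return bgr - MEANS_BGR

# Under the assumed convention this would correspond to roughly
# --channel-mean-value '103.939 116.779 123.68 1' on BGR-ordered input
# (the channel swap itself handled separately, e.g. via --reorder-channel).
img = np.full((2, 2, 3), 128.0)
print(preprocess_caffe(img)[0, 0])   # [24.061 11.221  4.32]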