I tried to run 0_import_model.sh for the pretrained AlexNet model, but it is showing errors:
2022-01-22 17:05:18.096721: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib
2022-01-22 17:05:18.096749: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
I Start importing pytorch...
Traceback (most recent call last):
File "convertpytorch.py", line 96, in <module>
File "convertpytorch.py", line 86, in main
File "acuitylib/vsi_nn.py", line 225, in load_pytorch_by_onnx_backend
File "acuitylib/onnx_ir/frontend/pytorch_frontend/pytorch_frontend.py", line 45, in __init__
File "torch/jit/__init__.py", line 162, in load
RuntimeError: version_number <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /pytorch/caffe2/serialize/inline_container.cc:131, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 1. Your PyTorch installation may be too old. (init at /pytorch/caffe2/serialize/inline_container.cc:131)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7ff7b3085273 in /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libc10.so)
frame #1: caffe2::serialize::PyTorchStreamReader::init() + 0x1e9a (0x7ff7b601400a in /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)
frame #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::string const&) + 0x60 (0x7ff7b6015270 in /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)
frame #3: torch::jit::import_ir_module(std::shared_ptr<torch::jit::script::CompilationUnit>, std::string const&, c10::optional<c10::Device>, std::unordered_map<std::string, std::string, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, std::string> > >&) + 0x38 (0x7ff7b70f4088 in /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)
frame #4: + 0x4d6abc (0x7ff7fd3f5abc in /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch_python.so)
frame #5: + 0x1d3f04 (0x7ff7fd0f2f04 in /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch_python.so)
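The assert above says the .pt file was written by a newer PyTorch than the libtorch bundled with acuitylib can read (file-format version 3, maximum supported 1), so the model needs to be re-traced and re-saved with a PyTorch old enough to match the SDK. Both torch.save and TorchScript archives are ZIP containers with a "version" record inside, so you can check what format version a given .pt carries before feeding it to the converter. A minimal sketch (the helper name pt_format_version is mine, not part of the toolkit):

```python
import zipfile

def pt_format_version(path):
    """Read the serialization format version stored inside a PyTorch
    ZIP archive (.pt files are ZIP containers holding a 'version' record,
    typically at '<archive_name>/version')."""
    with zipfile.ZipFile(path) as zf:
        name = next(n for n in zf.namelist() if n.split("/")[-1] == "version")
        return int(zf.read(name).decode().strip())
```

If this reports a version higher than 1 for your AlexNet .pt, that matches the assert: re-export the model under an older PyTorch installation before running 0_import_model.sh again.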
Sorry, I just want to be clear:

Should I convert to ONNX but save it as .pt,

torch.onnx.export(resnet, inp_batch, "resnet18.pt", export_params=True, opset_version=10)

and then use the pytorch convert script:

$convert_pytorch --pytorch-model xxxx.pt \
    --net-output ${NAME}.json \
    --data-output ${NAME}.data \
    --input-size-list '1,480,854'

Or save it as .onnx,

torch.onnx.export(resnet, inp_batch, "resnet18.onnx", export_params=True, opset_version=10)

and use the onnx convert script?

$convert_onnx --onnx-model xxx.onnx \
    --net-output ${NAME}.json \
    --data-output ${NAME}.data
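One thing worth noting: torch.onnx.export always writes an ONNX protobuf regardless of the file extension, while $convert_pytorch expects a TorchScript archive, so an ONNX file renamed to .pt will not become a valid input for the pytorch script. A quick, hedged way to sniff which container a file actually is, using magic bytes (the helper name sniff_model_format is mine; the 0x08 heuristic assumes the ONNX ModelProto starts with its ir_version field, which it does in practice):

```python
def sniff_model_format(path):
    """Best-effort guess at a model file's container format.
    TorchScript .pt files are ZIP archives (magic 'PK'); ONNX files are
    protobuf ModelProto messages, which in practice begin with byte 0x08
    (field 1 = ir_version, varint)."""
    with open(path, "rb") as f:
        head = f.read(4)
    if head.startswith(b"PK"):
        return "torchscript/zip"
    if head[:1] == b"\x08":
        return "onnx (probably)"
    return "unknown"
```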
But with this method, the results are not accurate:
— Top5 —
599: 9.577607
904: 9.577607
497: 8.254385
906: 7.876322
828: 7.624279
Exit VX Thread: 0x8d1581b0
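A detail in that dump: the top two classes (599 and 904) score exactly the same value, which is typical of quantization flattening nearby logits. Comparing the NPU's top-5 against the float model's top-5 on the same input is a quick sanity check; a small helper in the same shape as the dump above (the function name top5 is mine):

```python
def top5(scores):
    """Return (index, score) pairs for the five largest scores,
    highest first, matching the '--- Top5 ---' dump format."""
    return sorted(enumerate(scores), key=lambda p: p[1], reverse=True)[:5]
```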
There is an accuracy issue with the Keras model as well. For MobileNet v2, preprocessing normalizes inputs to [-1, 1], so the channel mean values are 128,128,128,128 according to a post on the Khadas forum.
script:
You need to make adjustments according to the actual situation of your model; 128 is just a fairly general value. For example, in my Keras model I use four values of 127.5, which also normalizes to [-1, 1].
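To make the two suggestions concrete: the four-number string is commonly described as three per-channel means plus a scale divisor, i.e. the toolkit computes (x - mean) / scale. With 128 the [0, 255] range maps to roughly [-1, 0.992]; with 127.5 it maps exactly onto [-1, 1]. A sketch under that assumption (the helper name acuity_normalize is mine, not a toolkit API):

```python
def acuity_normalize(pixel, means=(128, 128, 128), scale=128):
    """Apply the (x - mean) / scale preprocessing that
    --channel-mean-value 'm1 m2 m3 scale' is commonly said to perform."""
    return [(p - m) / scale for p, m in zip(pixel, means)]
```

So 128,128,128,128 is a close approximation of Keras's x / 127.5 - 1; if your NPU accuracy is sensitive, matching the training-time 127.5 exactly is the safer choice.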