PyTorch model conversion

I’m unable to convert a PyTorch model.

This is the script to download the pretrained weights:

import torch
import torchvision.models as models

resnet18 = models.resnet18(pretrained=True)
# Saves the entire nn.Module via pickle
torch.save(resnet18, 'resnet18.pt')
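
(Side note: the traceback below ends in torch.jit.load, which reads TorchScript archives rather than pickled nn.Module objects, so the converter may be expecting a traced export instead of the plain torch.save output. That is only a guess on my part; a minimal traced export would look roughly like this:)

import torch
import torchvision.models as models

resnet18 = models.resnet18(pretrained=True).eval()
# Trace with a dummy input so the result is a TorchScript archive,
# which is the format torch.jit.load can read.
dummy = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(resnet18, dummy)
traced.save('resnet18_jit.pt')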

This is the conversion script:

./convert \
    --model-name resnet18 \
    --platform pytorch \
    --model resnet18.pt \
    --input-size-list '3,224,224' \
    --mean-values '103.94,116.78,123.68,58.82' \
    --quantized-dtype asymmetric_affine \
    --kboard VIM3 --print-level 1

Here’s the error:

Start import model ...
I Namespace(config=None, import='pytorch', input_size_list='3,224,224', inputs=None, model='resnet18.pt', output_data='resnet18.data', output_model='resnet18.json', outputs=None, size_with_batch=None, which='import')

I Start importing pytorch...

[50514] Failed to execute script pegasus

Traceback (most recent call last):

  File "pegasus.py", line 131, in <module>

  File "pegasus.py", line 112, in main

  File "acuitylib/app/importer/commands.py", line 286, in execute

  File "acuitylib/vsi_nn.py", line 173, in load_pytorch_by_onnx_backend

  File "torch/jit/__init__.py", line 162, in load

RuntimeError: [enforce fail at inline_container.cc:137] . PytorchStreamReader failed reading zip archive: failed finding central directory

frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::string const&, void const*) + 0x47 (0x7fe3f02eee17 in /home/saswat/khadas/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libc10.so)

frame #1: caffe2::serialize::PyTorchStreamReader::valid(char const*) + 0x6b (0x7fe3f327775b in /home/saswat/khadas/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)

frame #2: caffe2::serialize::PyTorchStreamReader::init() + 0x9a (0x7fe3f327b20a in /home/saswat/khadas/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)

frame #3: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::string const&) + 0x60 (0x7fe3f327e270 in /home/saswat/khadas/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)

frame #4: torch::jit::import_ir_module(std::shared_ptr<torch::jit::script::CompilationUnit>, std::string const&, c10::optional<c10::Device>, std::unordered_map<std::string, std::string, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, std::string> > >&) + 0x38 (0x7fe3f435d088 in /home/saswat/khadas/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)

frame #5: <unknown function> + 0x4d6abc (0x7fe43a65eabc in /home/saswat/khadas/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch_python.so)

frame #6: <unknown function> + 0x1d3f04 (0x7fe43a35bf04 in /home/saswat/khadas/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch_python.so)

<omitting python frames>

frame #18: ../bin/pegasus() [0x402ca1]

frame #19: ../bin/pegasus() [0x403087]

frame #20: <unknown function> + 0x2dfd0 (0x7fe4721e4fd0 in /lib/x86_64-linux-gnu/libc.so.6)

frame #21: __libc_start_main + 0x7d (0x7fe4721e507d in /lib/x86_64-linux-gnu/libc.so.6)

frame #22: ../bin/pegasus() [0x401a9e]

My PyTorch version is 1.2.0. I’ve tried 1.4 and 1.10 as well, but the same error occurs in each case.

I’m facing a new error now:

Start import model ...
I Namespace(config=None, import='pytorch', input_size_list='3,224,224', inputs=None, model='resnet18.pt', output_data='resnet18.data', output_model='resnet18.json', outputs=None, size_with_batch=None, which='import')

I Start importing pytorch...

[6041] Failed to execute script pegasus

Traceback (most recent call last):

  File "pegasus.py", line 131, in <module>

  File "pegasus.py", line 112, in main

  File "acuitylib/app/importer/commands.py", line 286, in execute

  File "acuitylib/vsi_nn.py", line 173, in load_pytorch_by_onnx_backend

  File "torch/jit/__init__.py", line 162, in load

RuntimeError: version_number <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /pytorch/caffe2/serialize/inline_container.cc:131, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 1. Your PyTorch installation may be too old. (init at /pytorch/caffe2/serialize/inline_container.cc:131)

frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7f8946b74273 in /home/saswatsubhajyoti_mallick_zeotap_/khadas/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libc10.so)

frame #1: caffe2::serialize::PyTorchStreamReader::init() + 0x1e9a (0x7f8949b0300a in /home/saswatsubhajyoti_mallick_zeotap_/khadas/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)

frame #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::string const&) + 0x60 (0x7f8949b04270 in /home/saswatsubhajyoti_mallick_zeotap_/khadas/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)

frame #3: torch::jit::import_ir_module(std::shared_ptr<torch::jit::script::CompilationUnit>, std::string const&, c10::optional<c10::Device>, std::unordered_map<std::string, std::string, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, std::string> > >&) + 0x38 (0x7f894abe3088 in /home/saswatsubhajyoti_mallick_zeotap_/khadas/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch.so)

frame #4: <unknown function> + 0x4d6abc (0x7f8990ee4abc in /home/saswatsubhajyoti_mallick_zeotap_/khadas/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch_python.so)

frame #5: <unknown function> + 0x1d3f04 (0x7f8990be1f04 in /home/saswatsubhajyoti_mallick_zeotap_/khadas/aml_npu_sdk/acuity-toolkit/bin/acuitylib/libtorch_python.so)

<omitting python frames>

frame #17: ../bin/pegasus() [0x402ca1]

frame #18: ../bin/pegasus() [0x403087]

frame #19: __libc_start_main + 0xe7 (0x7f89c04e5bf7 in /lib/x86_64-linux-gnu/libc.so.6)

frame #20: ../bin/pegasus() [0x401a9e]
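
If it helps to diagnose, the file-format version this error complains about can be read straight out of the .pt file, since TorchScript and new-style torch.save files are just zip archives with a small 'version' entry (this is an assumption about the archive layout):

import zipfile

# TorchScript / new-style .pt files are zip archives containing a 'version'
# entry; the libtorch bundled with the toolkit appears to read only version 1.
# An old-style pickle is not a zip at all and raises zipfile.BadZipFile,
# which lines up with the earlier "central directory" error.
with zipfile.ZipFile('resnet18.pt') as zf:
    entry = next(n for n in zf.namelist() if n.split('/')[-1] == 'version')
    print('file format version:', zf.read(entry).decode().strip())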

@johndoe Can you share your model with me? I will try it.

@Frank Sure. Here it is: resnet18.pt - Google Drive

@johndoe I will test it this week.


As a workaround, I’m exporting the pretrained models to ONNX and then converting them with the ‘convert’ script. The inference results are consistent.
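
For reference, this is roughly the export I’m using (the input/output names and opset 11 are my own choices, not something the toolkit mandates):

import torch
import torchvision.models as models

resnet18 = models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)
# Export a static-shape ONNX graph, which the convert script can then
# import with its ONNX platform option.
torch.onnx.export(resnet18, dummy, 'resnet18.onnx',
                  input_names=['input'], output_names=['output'],
                  opset_version=11)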

There is a similar problem with Keras pretrained models.

from tensorflow.keras.applications.resnet50 import ResNet50

model = ResNet50(weights='imagenet')
model.save("resnet50_keras.hdf5")

This is from the 0_import script:

$convert_keras \
    --keras-model ${NAME}.hdf5 \
    --outputs predictions \
    --net-output ${NAME}.json \
    --data-output ${NAME}.data

Output:

2022-01-22 20:29:27.585991: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib
2022-01-22 20:29:27.586031: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
I Start importing keras...
D Convert Keras with Keras Branch
Traceback (most recent call last):
  File "convertkeras.py", line 49, in <module>
  File "convertkeras.py", line 39, in main
  File "acuitylib/vsi_nn.py", line 239, in load_keras
  File "acuitylib/app/importer/import_keras.py", line 202, in run
  File "acuitylib/app/importer/import_keras.py", line 154, in convert_engine_keras
  File "acuitylib/converter/convert_keras.py", line 24, in __init__
  File "tensorflow/python/keras/saving/save.py", line 182, in load_model
  File "tensorflow/python/keras/saving/hdf5_format.py", line 178, in load_model_from_hdf5
  File "tensorflow/python/keras/saving/model_config.py", line 55, in model_from_config
  File "tensorflow/python/keras/layers/serialization.py", line 175, in deserialize
  File "tensorflow/python/keras/utils/generic_utils.py", line 358, in deserialize_keras_object
  File "tensorflow/python/keras/engine/functional.py", line 617, in from_config
  File "tensorflow/python/keras/engine/functional.py", line 1204, in reconstruct_from_config
  File "tensorflow/python/keras/engine/functional.py", line 1186, in process_layer
  File "tensorflow/python/keras/layers/serialization.py", line 175, in deserialize
  File "tensorflow/python/keras/utils/generic_utils.py", line 360, in deserialize_keras_object
  File "tensorflow/python/keras/engine/base_layer.py", line 697, in from_config
  File "tensorflow/python/keras/layers/pooling.py", line 848, in __init__
  File "tensorflow/python/training/tracking/base.py", line 457, in _method_wrapper
  File "/home/omkar/Desktop/new/aml_npu_sdk/acuity-toolkit/bin/acuitylib/tensorflow/python/keras/engine/base_layer_v1.py", line 165, in __init__
    generic_utils.validate_kwargs(kwargs, allowed_kwargs)
  File "tensorflow/python/keras/utils/generic_utils.py", line 778, in validate_kwargs
TypeError: ('Keyword argument not understood:', 'keepdims')
[20341] Failed to execute script convertkeras
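
The 'keepdims' argument was only added to the Keras pooling layers in newer TensorFlow releases, so a model saved with a recent TensorFlow cannot be deserialized by the older TensorFlow bundled with the toolkit. Either re-save the model from an older TensorFlow environment, or strip the offending kwarg from the saved config. A rough sketch of the latter (it assumes the standard Keras HDF5 layout with a model_config attribute on the root group):

import json
import h5py

# Remove the newer 'keepdims' kwarg from every layer config so the older
# Keras bundled with the acuity toolkit can deserialize the model.
path = 'resnet50_keras.hdf5'
with h5py.File(path, 'r+') as f:
    config = json.loads(f.attrs['model_config'])
    for layer in config['config']['layers']:
        layer.get('config', {}).pop('keepdims', None)
    f.attrs['model_config'] = json.dumps(config)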