Missing .so file while converting the model using Python API


I used the Python API to convert my model into the .nb format. After the conversion finished, I could only find the .nb file; there was no .so file in the output/mobilenetv2/ directory.

The parameters that I set to convert the model are:

Please let me know whether my `--tf-outputs` parameter is set correctly.

The model: (Google Drive link)

@Akkisony You can follow this README.

Your outputs should match the ones shown there.

@Frank I read the complete document before converting and I did the same.

Yet, I have doubts about how to set the input and output parameters, as I am new to this.
The image below shows the parameters of the input model. Please correct me if I am wrong.

I have set:
`--tf-inputs input`

Am I right? If I am wrong, was it supposed to be:
`--tf-inputs mobilenetv2_1.00_224_input`

Similarly, for the output parameter, I have set:

`--tf-outputs Softmax`

Is this correct? If it is wrong, what should it be? Please help me, as I am new to this.

Thanks in advance.

@Akkisony Yes, it should be Softmax. But I think it needs the full path: for v1, for example, it is MobilenetV1/Predictions/Softmax.

@Frank Is my input parameter correct?

How do I find this out? I could not visualize the graph using the Netron app.

@Akkisony Maybe this one? MobilenetV2/Predictions/Softmax

I found this in the TensorFlow GitHub repository.

@Frank Thanks. I still could not get the .so file during the conversion.

Please can you let me know the reason?

I think the reason is that the script is not able to generate the ‘outputs/nbg_unify_mobilenetv2/bin_r/’ directory.

The directory currently generated is ‘outputs/mobilenetv2/’, and in this directory I can find the .nb file.
Any input on how I can overcome this?

@Akkisony You need to install the GCC cross-compilation toolchain. Please follow these steps:

$ sudo mkdir -p /opt/toolchains
$ wget https://releases.linaro.org/components/toolchain/binaries/6.3-2017.02/aarch64-linux-gnu/gcc-linaro-6.3.1-2017.02-x86_64_aarch64-linux-gnu.tar.xz -P /tmp
$ sudo tar xJvf /tmp/gcc-linaro-6.3.1-2017.02-x86_64_aarch64-linux-gnu.tar.xz -C /opt/toolchains

Then add `export PATH=$PATH:/opt/toolchains/gcc-linaro-6.3.1-2017.02-x86_64_aarch64-linux-gnu/bin` to your ~/.bashrc and run `source ~/.bashrc`.

Hi, I did not get a solution to this problem. I installed the GCC tools and followed the steps you described.

Yet, I am still getting the same OpenCV error.

@Akkisony I have pushed new code to GitLab. You can pull the latest code and try again.

@Frank Thanks. Now the demo of NNAPI SDK works fine after the new fix.

  1. However, can you please tell me which model ‘image_classify_224x224.cpp’ is written for?
    I would like to visualize that model’s architecture and learn how the pre-processing and post-processing are written.

  2. I think the inference time is not displayed in the log file of the NN API SDK.

Thanks for the bug fix again.

@Akkisony You can find the docs in aml_npu_sdk.

There is nothing mentioned about this in the aml_npu_sdk docs.
I am currently working with the Amlogic NN API.
In the demo, after cloning the repo, I could find an image_classify_88.nb file. I just wanted to know which model this is. I know it is a classification model, but is it MobileNet v1, MobileNet v2, or some other architecture?

@Akkisony It’s MobileNet, but I don’t know which version. Whether it is v1 or v2 does not affect the design of your post-processing.

@Frank Thanks. My model works fine with the existing code; I had nothing to modify at all.

@Frank I just have one small question. My current OS is Windows 10, and I am using a VMware virtual machine.
I have Visual Studio installed on the Windows side. If I want to execute code on the VIM3, do I need to install a GCC cross compiler? I just wanted to know if using a cross compiler is the way to create executables for the VIM3.
Thanks in advance! :slight_smile:

@Frank Can you please tell me where we have to measure the inference time in the script image_classify_224x224.cpp?

I added timing code before and after the network-creation function. Is this the right place to measure the inference time?

And I got result as follows:

khadas@Khadas:~/aml_npu_nnsdk_app/image_classify_224x224/cv4_output$ ./image_classify_224x224 ../tflite.nb ../sunflower-1.jpg
Time taken:91microseconds
3: 0.999512
1: 0.000086
2: 0.000086
4: 0.000086
0: 0.000000

Please let me know your thoughts on this!

@Akkisony The stop time should be after post-processing. And the source code is open; you should read the source code yourself.

@Frank Thanks. I got an inference time of 41 milliseconds on the NPU. Isn’t this a little on the high side? I got an inference time of 17 milliseconds on a Coral TPU for the same model.

Does this mean Coral USB accelerator is faster than VIM3 NPU?

Do you know any reason for this?

@Akkisony Sorry, I don’t have a Coral, so I can’t test it. But in addition to speed, you also need to compare accuracy. If the accuracy is not much different, then according to your results, the Coral is faster.