I used the Python API to convert my model into the .nb format. After the model was converted, I could only find the .nb file; I could not find a .so file in the output/mobilenetv2/ directory.
The parameters that I set to convert the model are:
@Frank I read the complete document before converting and I did the same.
Yet I have doubts about how to set the input and output parameters, as I am new to this.
The image below shows the parameters of the input model. Please correct me if I am wrong.
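For context, a KSNN conversion sets the input and output parameters on the convert command line. A rough sketch, based on the examples in the Khadas docs (the model path, node names, input size, and mean values below are placeholders, not values for this specific model):

```
./convert \
    --model-name mobilenetv2 \
    --platform tensorflow \
    --model ./mobilenet_v2.pb \
    --input-size-list '224,224,3' \
    --inputs input \
    --outputs MobilenetV2/Predictions/Reshape_1 \
    --mean-values '128 128 128 0.0078125' \
    --quantized-dtype asymmetric_affine \
    --source-files ./dataset.txt \
    --kboard VIM3 \
    --print-level 0
```

Here `--inputs`/`--outputs` name the graph's input and output tensors, and `--input-size-list` gives the input shape; these must match what the original model actually uses.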
@Frank Thanks. The NNAPI SDK demo now works fine after the new fix.
However, can you please tell me which model ‘image_classify_224*224.cpp’ was written for?
I would like to visualize the architecture of that model and learn how the pre-processing and post-processing are written.
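If it helps, the model architecture can usually be inspected with Netron, assuming you have the original model file (e.g. a .pb or .tflite) rather than only the compiled .nb:

```
pip install netron
netron mobilenet_v2.pb   # opens a browser-based viewer; the filename is a placeholder
```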
I think there is no inference time displayed in the log file of the NN API SDK.
There is nothing mentioned about it in the aml_npu_sdk docs.
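As a workaround, the inference call can be timed manually. A minimal C++ sketch, where `run_network()` is a placeholder for whatever call actually runs the graph in the demo (it is not a real SDK function):

```cpp
#include <chrono>
#include <cstdio>

// Placeholder for the demo's actual inference call.
void run_network();

int main()
{
    // Time only the graph execution, not pre/post-processing.
    auto start = std::chrono::steady_clock::now();
    run_network();
    auto end = std::chrono::steady_clock::now();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::printf("Inference time: %lld ms\n", static_cast<long long>(ms));
    return 0;
}
```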
I am currently working with the Amlogic NN API.
In the demo, after cloning the repo, I found an image_classify_88.nb file. I just wanted to know which model this is. I know it is a classification model, but is it MobileNet v1, MobileNet v2, or some other architecture?
@Frank I just have one small question. My current OS is Windows 10, and I am using a VMware virtual machine.
I have installed Visual Studio on the Windows side. If I want to execute code on the VIM3, did you install the gcc cross compiler? I just wanted to know if using a cross compiler is the way to create executables for the VIM3?
Thanks in advance!
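For what it's worth, cross compiling on the Linux side of the VM is the usual route for producing VIM3 (Arm64) executables; a sketch, with the include and library paths as placeholders to be taken from the SDK demo's own Makefile:

```
# Install the 64-bit Arm cross toolchain (Ubuntu/Debian package name)
sudo apt install g++-aarch64-linux-gnu

# Cross compile the demo source for the VIM3
aarch64-linux-gnu-g++ 'image_classify_224*224.cpp' -o image_classify \
    -I/path/to/aml_npu_sdk/include -L/path/to/aml_npu_sdk/lib
```

Binaries built by Visual Studio on Windows target x86, so they will not run on the VIM3 without a Linux Arm64 cross compiler like this.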
@Frank Thanks. I got an inference time of 41 milliseconds on the NPU. Isn’t that a little on the higher side? I got an inference time of 17 milliseconds on the Coral TPU for the same model.
Does this mean the Coral USB Accelerator is faster than the VIM3 NPU?
@Akkisony Sorry, I don’t have a Coral, so I can’t test it. But in addition to speed, you also need to compare accuracy. If the accuracy is not much different, then according to your results, the Coral is faster.