I plugged in the generated .so and .nb files and ran inference using the same script as the inceptionv3.py demo:

python3 inceptionv3.py --model ./models/VIM3/resnet50_v1.nb --library ./libs/resnet50_v1_libnn.so --picture ./data/goldfish_224x224.jpg --level 0
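For reference, the relevant calls in that script look roughly like this (paraphrased from memory of the KSNN demo code, so the exact nn_inference arguments are assumptions and may differ across SDK versions):

from ksnn.api import KSNN
from ksnn.types import *
import cv2 as cv

# Initialise the NPU runtime with the converted model and its generated library
resnet = KSNN('VIM3')
resnet.nn_init(library='./libs/resnet50_v1_libnn.so',
               model='./models/VIM3/resnet50_v1.nb', level=0)

# The demo feeds a list with one OpenCV (BGR) image; reorder='2 1 0' swaps it to RGB
img = cv.imread('./data/goldfish_224x224.jpg', cv.IMREAD_COLOR)
outputs = resnet.nn_inference([img], platform='TENSORFLOW', reorder='2 1 0',
                              output_tensor=1,
                              output_format=output_format.OUT_FORMAT_FLOAT32)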
But the accuracy is nowhere near the expected values.
Result from inception v3:
----- Show Top5 +-----
2: 0.93408
795: 0.00307
974: 0.00180
408: 0.00169
393: 0.00148
Result from resnet50:
----- Show Top5 +-----
644: 0.03381
783: 0.02422
418: 0.02138
845: 0.01886
111: 0.01665
I was hoping to benchmark a whole range of networks on the VIM3 board using TensorFlow. Sticking to a single framework would help keep the experiments uniform. Additionally, say I get ResNet-18 from PyTorch: I'd still need ResNet-50, 101, 152… in the future.
Could you please help me with any steps (apart from the convert script) that I need to execute to get a model with proper accuracy?
@johndoe After the model is converted it is the same, no matter which platform it came from, so you can refer to the post-processing in the PyTorch demo.
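In concrete terms, the top-5 print is just a softmax over the raw output followed by an argsort. A minimal numpy sketch (assuming a single flat 1000-class output tensor; the function name is illustrative):

import numpy as np

def show_top5(raw_scores):
    # raw_scores: 1-D array of class scores returned by nn_inference
    scores = np.asarray(raw_scores, dtype=np.float32).flatten()
    # Numerically stable softmax
    exp = np.exp(scores - scores.max())
    probs = exp / exp.sum()
    # Indices of the five largest probabilities, in descending order
    for idx in probs.argsort()[::-1][:5]:
        print('{:d}: {:.5f}'.format(int(idx), probs[idx]))

One thing to check: if the converted graph already ends in a softmax, don't apply it again in post-processing; a double softmax flattens the distribution, which would look a lot like the near-uniform resnet50 scores above.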
The outputs were still not consistent. I also noticed another issue with this inference (299x299 input): the prediction results kept changing on every successive run, without any changes to the code or model. I'm attaching a screenshot of it.
Here's the result for a 224x224 image:
----Resnet18----
-----TOP 5-----
[829 905]: 0.0014902635011821985
[829 905]: 0.0014902635011821985
[491]: 0.0010400540195405483
[600]: 0.0010229613399133086
[557]: 0.0010146443964913487
Here’s what it should have been:
----Resnet18----
-----TOP 5-----
[1]: 0.991869330406189
[963]: 0.0015490282094106078
[923]: 0.0009275339543819427
[115]: 0.0006153833237476647
[112]: 0.0005012493929825723
I tried that too. The differences between your ONNX model and my frozen ResNet model aren't that big (apart from the naming conventions each framework follows). Would you want me to attach the Netron outputs for both of them?
Could you please try converting the resnet_v2_50 model from tf_slim (assuming that's where you got the other models too) into .nb and .so files using the KSNN converter? Let me know the accuracy results too, in case you're able to run an inference after that.
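For reference, my conversion followed the pattern from the KSNN docs, something like this (the flag names are from the docs for my SDK version and may differ in yours; the input/output node names are assumptions that depend on how the graph was frozen, and the --mean-values must match the preprocessing the network was trained with; slim's resnet_v2 uses Inception-style scaling to [-1, 1]):

./convert \
    --model-name resnet_v2_50 \
    --platform tensorflow \
    --model ./resnet_v2_50.pb \
    --input-size-list '224,224,3' \
    --inputs input \
    --outputs resnet_v2_50/predictions/Reshape_1 \
    --mean-values '127.5 127.5 127.5 0.007843137' \
    --quantized-dtype asymmetric_affine \
    --source-files ./data/dataset/dataset0.txt \
    --kboard VIM3 --print-level 0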