Performance issues with resnet

@johndoe Can you try my resnet50 model? Although it is ONNX, I think it can help you determine where the problem is

Yes, I tried your ONNX model. It gives correct accuracy results when I run inference with it. I still couldn’t figure out what is causing the error in mine

How did you get this model? I mean, what steps did you have to follow to finally get the frozen file (before conversion)?

@johndoe I got the model from GitHub. You can compare the differences against your model through https://netron.app

I tried that too. The differences between your ONNX model and my frozen resnet model aren’t that significant (other than the naming conventions followed by each framework). Would you want me to attach the netron outputs for both of them?

Could you please try converting the resnet_v2_50 model from tf_slim (assuming that’s where you got the other models too) into .nb & .so files using the ksnn converter? Let me know the accuracy results too in case you’re able to run an inference after that
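
For reference, this is roughly how I froze the slim checkpoint on my side before trying the converter. It is only a sketch: it assumes TF 2.x with the tf_slim pip package and the resnet_v2_50.ckpt checkpoint from the slim model zoo, and the output node name is the commonly cited one for the slim classification head, so please verify it in netron for your checkpoint.

```python
import tensorflow.compat.v1 as tf
import tf_slim as slim
from tf_slim.nets import resnet_v2

tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # Build the slim resnet_v2_50 graph with a fixed 1x224x224x3 input.
    inputs = tf.placeholder(tf.float32, [1, 224, 224, 3], name='input')
    with slim.arg_scope(resnet_v2.resnet_arg_scope()):
        resnet_v2.resnet_v2_50(inputs, num_classes=1001, is_training=False)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        # Restore the model-zoo checkpoint and fold the variables into constants.
        saver.restore(sess, 'resnet_v2_50.ckpt')
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, graph.as_graph_def(),
            ['resnet_v2_50/predictions/Reshape_1'])  # assumed output node name

with tf.gfile.GFile('resnet_v2_50_frozen.pb', 'wb') as f:
    f.write(frozen.SerializeToString())
```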

@johndoe I will find time to test it, but it may take a while

Sure. Thanks a lot!

Meanwhile, we can try to figure out the issue

Here are the netron outputs for both (tf_slim and onnx) frozen files

ONNX

tf_slim

@Frank While you’re working on the tf_slim resnet_v2_50 issue, could you please tell me the steps you followed to get the resnet50.onnx model?

@johndoe I got it from GitHub; you can search for it directly on GitHub

@Frank
I tried converting resnet50 from onnx model zoo and it gave good results

 |---+ KSNN Version: v1.0 +---|
Done. inference time:  0.02891254425048828
----Resnet50----
-----TOP 5-----
[1]: 0.9986613392829895
[0]: 0.0005297655006870627
[794]: 0.00040659261867403984
[29]: 6.378102261805907e-05
[391]: 5.587655687122606e-05
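
(For reference, the TOP 5 listing is just the index and score of the five largest values in the model’s output vector. A minimal sketch of how it gets printed, assuming `scores` is the flat float output returned by the inference call:)

```python
import numpy as np

def print_top5(scores, name):
    # scores: 1-D float array of per-class confidences from the converted model
    scores = np.asarray(scores).flatten()
    print('----{}----'.format(name))
    print('-----TOP 5-----')
    for idx in scores.argsort()[-5:][::-1]:
        print('[{}]: {}'.format(idx, scores[idx]))

# e.g. print_top5(model_output, 'Resnet50')
```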

But when I use resnet18 from onnx model zoo and try to run it, it gives somewhat unexpected values

 |---+ KSNN Version: v1.0 +---|
Done. inference : 0.010450601577758789 s
----Resnet18----
-----TOP 5-----
[1]: 0.8268496990203857
[121]: 0.03389100730419159
[927]: 0.030109353363513947
[963]: 0.023764867335557938
[928]: 0.014804825186729431

PyTorch’s pretrained model (resnet18), on the other hand, shows better scores

 |---+ KSNN Version: v1.0 +---|
Done. inference : 0.008417129516601562 s
----Resnet18----
-----TOP 5-----
[1]: 0.991869330406189
[963]: 0.0015490282094106078
[923]: 0.0009275339543819427
[115]: 0.0006153833237476647
[112]: 0.0005012493929825723

Is this behaviour explainable? Or is there any way to know which framework’s pretrained models would work best after conversion, beforehand?
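
(One sanity check I can do on my side is to run the same preprocessed image through the float ONNX file with onnxruntime and compare its top-5 against the converted .nb output; a large gap would point at the conversion/quantization rather than at the pretrained weights themselves. A minimal sketch, with the model-zoo file name and the preprocessing as assumptions:)

```python
import cv2
import numpy as np
import onnxruntime as ort

# Standard ImageNet preprocessing for the model-zoo resnet (assumption).
img = cv2.imread('test.jpg')[:, :, ::-1]                     # BGR -> RGB
img = cv2.resize(img, (224, 224)).astype(np.float32) / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
img = np.ascontiguousarray(((img - mean) / std).transpose(2, 0, 1)[None])  # NCHW

sess = ort.InferenceSession('resnet18-v1-7.onnx')            # assumed file name
scores = sess.run(None, {sess.get_inputs()[0].name: img})[0].flatten()
for idx in scores.argsort()[-5:][::-1]:
    print('[{}]: {}'.format(idx, scores[idx]))
```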

@johndoe This is mainly related to how the models were trained; there is no good way to know in advance which model will have the highest accuracy

Got it.
Can you please share with me the source of pytorch’s resnet18.pt file?

@johndoe You can find all the models I used for KSNN in my GitHub.

@Frank I meant to ask for the source of this file. Did you train it yourself, or did you get it from the vision repo or some other repository?

@johndoe This is a model I trained myself. The official PyTorch model can be used as well; I trained starting from it.

@Frank
The official PyTorch model is a .pth file, and the conversion script fails when handling it

@johndoe You can load it and save it as a .pt file. The official PyTorch documentation describes how to do this
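
Something along these lines (a minimal sketch, assuming the stock torchvision resnet18 and its downloaded .pth checkpoint) should give a .pt file the converter accepts:

```python
import torch
import torchvision

# Rebuild the architecture, load the official .pth weights, then export a
# TorchScript .pt file.
model = torchvision.models.resnet18()
state_dict = torch.load('resnet18.pth', map_location='cpu')  # path to the downloaded checkpoint
model.load_state_dict(state_dict)
model.eval()

traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))
traced.save('resnet18.pt')
```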

@Frank Got it
Any progress on the tf_slim resnet model?