Tengine Convert Segmentation fault (core dumped)!

Hi,
Following the instructions at https://docs.khadas.com/linux/vim3/TengineSDK.html, I run “./convert_tool -f darknet -m ~/yolov3.weights -p ~/yolov3.cfg -o yolov3.tmfile” and it crashes with the error below.

---- Tengine Convert Tool ----

Version : v1.0, 15:43:59 Jun 24 2021
Status : float32
Segmentation fault (core dumped)
My platform is Ubuntu 20.04 Server. Is anything wrong?
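
In case it helps, one thing I can do is run the tool under gdb to capture a backtrace of the crash; a rough sketch (the model paths are just my local YOLOv3 files):

gdb --args ./convert_tool -f darknet -m ~/yolov3.weights -p ~/yolov3.cfg -o yolov3.tmfile
(gdb) run    # reproduce the segmentation fault
(gdb) bt     # print the backtrace at the crash point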

BR

@lcl2020 Tengine can be run with the latest image. We are solving this problem.

Hi Frank,
1. “Tengine can be run with the latest image”: what does “the latest image” mean? Where can I find it? Or can any version run most of the demos?
2. “We are solving this problem”: is there any way to solve it now? Can you share it with us?
3. Should we report this problem to VeriSilicon, since TIM-VX depends on the output of the Tengine convert tool?
BR

@lcl2020 Sorry, my mistake… Tengine can’t be run with the latest Ubuntu image.

You can submit an issue on Tengine’s GitHub, and someone there will assist you. The latest firmware is missing some libraries and files, so Tengine cannot be compiled directly on the board.
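
If you only need the converter right now, one possible workaround is to build just the convert tool on an x86 Ubuntu PC instead of on the board. This is only a rough sketch, assuming the upstream OAID/Tengine CMake option is still named TENGINE_BUILD_CONVERT_TOOL and the usual cmake/protobuf build dependencies:

# on an x86 Ubuntu PC, not on the VIM3 (dependency and option names assumed)
sudo apt install git cmake g++ libprotobuf-dev protobuf-compiler
git clone https://github.com/OAID/Tengine.git
cd Tengine && mkdir build && cd build
cmake -DTENGINE_BUILD_CONVERT_TOOL=ON ..
make -j$(nproc)
# the convert_tool binary should end up under build/tools/ (exact path assumed)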

Our library does not come directly from VeriSilicon, but from Amlogic. We are already communicating with Amlogic about this issue, and it will take time to resolve. Please wait.

We should fix this next week

1. I have already submitted an issue to Tengine on GitHub.
2. I understand, but one question still puzzles me: in my opinion, TIM-VX is the real provider of the NPU IP and SDK, so who, or which team, can communicate with VeriSilicon directly and efficiently?
For what it’s worth, I submitted an issue in the TIM-VX repo, but there has been no reply or answer so far.
3. Do you have any advice on how to communicate with VeriSilicon directly, as a complementary way to solve our problem with the Khadas VIM3?

@lcl2020 We work with Amlogic, not directly with VeriSilicon. We are using Amlogic’s optimized code.

I have no idea either. I think you can wait an extra week; we’ll fix this, and then Tengine’s community edition will be able to compile on our board.

Looking forward to your reply this week!

Hello Frank,
Any good news about this problem with the convert tool?
BR

@lcl2020 We will upgrade the NPU driver version. Hopefully this will fix it.

What should I do, and when?

@lcl2020 If you need to solve it now, the only way is to ask Tengine for help. If you are patient enough, you can wait for us to adapt it, but that will take a certain amount of time, and I can’t give you an accurate timeline at the moment.

1. I trust your professional dedication, Frank; just do it!
2. Also, regarding the Tengine int8 quantization tool, are you sure that most of the operators, almost 100 different ops, are implemented in Tengine?
3. How many different ops can Tengine support now, besides “Convolution”, “FullyConnected”, “Deconvolution”, and so on (I only saw these op names in the Tengine source code)?

@lcl2020 You should ask Tengine these two questions

Hi Frank,
1. Any good news about the bug mentioned above?
2. Can the VIM3 NPU (A311D) support int16 quantization for CNN models?
BR

@lcl2020 I am working on the new NPU version.

Yes. You can choose int16 when you convert your own model.

When will this bug be fixed?
What should I do next?

@lcl2020 I am not sure. This requires cooperation from the Tengine side.

If you need us to solve this, you will need to wait some time.

In fact, I am trying to import the newest models to the A311D NPU with the Tengine convert tool.
I want to improve the Tengine code, but I need some time to understand Tengine’s processing in detail.
So do you have any suggestions to help me grasp the key points of the Tengine source code? I just want to run new models on the NPU with Tengine. Just one thing: how can I do this more quickly?
Thanks

@lcl2020 Reading the code is the only way. Of course, you can read the Tengine documentation before that.
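
If it helps, these are the places I would start with, assuming the current upstream OAID/Tengine layout (directory names may have moved):

git clone https://github.com/OAID/Tengine.git
ls Tengine/tools/convert_tool/     # per-framework model serializers (darknet, onnx, caffe, ...)
ls Tengine/tools/quantize/         # post-training quantization tools
ls Tengine/source/device/tim-vx/   # TIM-VX backend used for the A311D NPU
ls Tengine/examples/               # tm_* sample programs that run converted tmfile models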

1. The VIM3 A311D NPU can support the int8, int16, and fp16 data types for CNN models, right?
2. Can it support mixed int8 and fp16 quantization?