VIM3 AI Benchmarks

I noticed on this page that there are measurements for both the A311D and the S922X, but their AI benchmarks are very similar…

Does that mean there is no support for executing those benchmarks on the A311D NPU, or what is the reason? Or is the NPU just marginally faster than the CPU?

@endian as the title says, no hardware acceleration was implemented on the NPU; if it had been enabled, it would definitely have scored higher :slightly_smiling_face:


Hello, these are essentially the same processor; the A311D only adds AI (NPU) support, so they are identical in these tests.


Yes, exactly, so I’m wondering how I can run these benchmark tests to improve this table… Or is there some AI driver that is missing, to your knowledge…?

I presume this is an Android application, since the Amlogic/VeriSilicon NPU is not 100% compatible with all models (hence the reason the specific SDK exists).

The best way to measure performance comparatively is to run a test compiled natively for the VIM3, to see the relative improvement in performance…
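For what it's worth, a comparative measurement usually boils down to timing the same inference call on each execution path. A minimal, generic sketch of such a harness (the `infer` callable is a placeholder for whatever model invocation you are timing, not VIM3-specific code):

```python
import time
import statistics

def benchmark(infer, warmup=5, runs=50):
    """Time a single-inference callable and return its median latency in ms.

    Warm-up iterations are discarded so caches, JITs, and frequency
    scaling settle before measurement; the median resists outliers.
    """
    for _ in range(warmup):
        infer()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        infer()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)

# Placeholder workload standing in for a model's invoke() call
cpu_ms = benchmark(lambda: sum(i * i for i in range(10_000)))
```

Running the same harness once against the CPU path and once against an NPU-compiled build of the model would give the relative improvement directly.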


I asked the question to the benchmarkers the other day, and now they replied:

The situation with the A311D chipset is quite complex. First of all, there is no way to access its NPU through Android: it doesn’t support the Android NN API (the NN HAL is missing), and there are no custom TensorFlow Lite delegates for this SoC, nor any proprietary SDKs.

Secondly, even when using Linux, you cannot run standard TF / TFLite models on this platform: you need to compile them using Amlogic’s NPU SDK, provided upon request. It also looks like this NPU supports a limited number of TFLite ops and can accelerate INT8 inference only, which means that only some standard quantized image classification models can be executed on it.
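To illustrate why "INT8 only" narrows things to quantized models: INT8 quantization maps each float value to an 8-bit integer via a scale and zero-point. A minimal sketch of the affine scheme TFLite's quantized models use (the parameter values here are just an example, not taken from any real model):

```python
def quantize(x, scale, zero_point):
    # Affine quantization: q = round(x / scale) + zero_point,
    # clamped to the signed 8-bit range [-128, 127]
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    # Approximate reconstruction of the original float value
    return (q - zero_point) * scale

# Example parameters covering roughly [-1.0, 1.0]
scale, zero_point = 1.0 / 127, 0
for x in (-1.0, 0.0, 0.5, 1.0):
    q = quantize(x, scale, zero_point)
    # Round-trip error is bounded by half a quantization step
    assert abs(dequantize(q, scale, zero_point) - x) <= scale / 2 + 1e-9
```

An NPU that executes only this integer arithmetic cannot run float32 graphs at all, which is why the standard (unquantized) TF / TFLite benchmark models fail on it.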
