NPU DDK download

Hi All,

Is there a site where I can download the NPU DDK for the VIM4?

Richard

Hello @RichardPar

Please check here:

https://docs.khadas.com/products/sbc/vim4/npu/npu-sdk

Thanks… I did look there but could not find a version. I need DDK version 3.4.7.7, as it supports LLM models. (I want to use TinyLlama.)

Richard

We will release 3.4.7.7 soon.

Sorry to be a pain… when would ‘soon’ be? (1 Month?)

EDIT: Just a note…


richard@PowerEdge:~/Source/vim4_npu_sdk$ bash convert-in-docker.sh normal
DOCKER_RUN:docker run -it --name npu-vim4 --rm -v /home/richard/Source/vim4_npu_sdk:/home/khadas/npu -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro -v /home/richard:/home/richard numbqq/npu-vim4
convert_adla.sh: line 17:     8 Illegal instruction     (core dumped) $adla_convert --model-type caffe --model ./model_source/caffe_model/resnet-18.prototxt --weights ./model_source/caffe_model/resnet-18.caffemodel --quantize-dtype int8 --outdir caffe_output --source-file dataset.txt --channel-mean-value "128,128,128,128" --target-platform PRODUCT_PID0XA001
convert_adla.sh: line 26:    79 Illegal instruction     (core dumped) $adla_convert --model-type darknet --model ./model_source/darknet_model/vgg-conv.cfg --weights ./model_source/darknet_model/vgg-conv.weights --quantize-dtype uint8 --outdir darknet_output --mean 127.5 --std-dev 127.5 --default-ranges-min 0 --default-ranges-max 1 --batch-size 4
convert_adla.sh: line 34:   150 Illegal instruction     (core dumped) $adla_convert --model-type onnx --model ./model_source/onnx_model/model_UnetBased_0620v8-ep44-seg3ch.onnx --inputs "0 1 2" --input-shapes "3,288,288#3,288,288#3,288,288" --dtypes "float32#float32#float32" --quantize-dtype int8 --outdir onnx_output --batch-size 4 --target-platform PRODUCT_PID0XA001
convert_adla.sh: line 41:   221 Illegal instruction     (core dumped) $adla_convert --model-type pytorch --model ./model_source/pytorch_model/squeezenet1_0.pt --inputs "input" --input-shape "3,224,224" --quantize-dtype int8 --outdir pytorch_output --channel-mean-value "0,0,0,256" --source-file dataset.txt --batch-size 2
convert_adla.sh: line 50:   292 Illegal instruction     (core dumped) $adla_convert --model-type mxnet --model ./model_source/mxnet_model/mobilenet0.25-symbol.json --weights ./model_source/mxnet_model/mobilenet0.25-0000.params --inputs "data" --input-shape "3,224,224" --quantize-dtype int16 --outdir mxnet_output --channel-mean-value "0,0,0,256" --source-file dataset.txt --target-platform PRODUCT_PID0XA001
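
In case it helps with debugging: my guess (not confirmed by Khadas) is that the prebuilt adla converter needs a newer x86 SIMD extension (e.g. AVX/AVX2) than this PowerEdge's CPU provides, which would explain the "Illegal instruction" crashes. A quick way to see which flags the host CPU advertises:

# Guess only: "Illegal instruction" usually means the binary uses an
# instruction the host CPU does not support. List the relevant SIMD
# feature flags from the first "flags" line of /proc/cpuinfo:
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E '^(sse4_[12]|avx2?|avx512[a-z_]*)$' | sort -u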

Where did you get the information that it supports LLMs?

It should be within the next two weeks.
