
Doc for version ddk-3.4.7.7

VIM4 KSNN Convert Model

Build Docker Environment

Follow Docker official docs to install Docker: Install Docker Engine on Ubuntu.

Run the command below to pull the Docker image:

docker pull numbqq/npu-vim4
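
You can check that the image was pulled successfully by listing it (only the standard docker images command is used here; the tag and image ID will depend on your pull):

docker images numbqq/npu-vim4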

Get the Conversion Tool

The conversion tool is integrated in the NPU SDK.

$ git lfs install
$ git lfs clone https://gitlab.com/khadas/vim4_npu_sdk.git
$ cd vim4_npu_sdk
$ ls
adla-toolkit-binary  adla-toolkit-binary-3.1.7.4  convert-in-docker.sh  Dockerfile  docs  ksnn_args.txt  README.md
  • adla-toolkit-binary/docs - SDK documentations
  • adla-toolkit-binary/bin - SDK tools required for model conversion
  • adla-toolkit-binary/python - KSNN conversion examples

If your kernel version is older than 241129, please use an SDK version before tag ddk-3.4.7.7 (see the example below for checking out an earlier tag).
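
If you do need an earlier release, you can list the repository tags and check one out from inside the cloned vim4_npu_sdk directory; the tag name below is only a placeholder, not a specific recommendation:

$ git tag
$ git checkout <earlier-tag>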

Convert Model

Before converting your model, modify ksnn_args.txt. This file contains the parameters required for the conversion (a sketch is shown below).

Please remember to add a space at the end of each parameter.
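
As a rough, hypothetical sketch (not an official template), a ksnn_args.txt for the yolov8n.onnx example might contain one parameter per line, each line ending with a trailing space as noted above. The “--” prefix, the input name images, the shape 1,3,640,640, and the normalization 0,0,0,255 are assumptions for a standard YOLOv8n ONNX export; check adla-toolkit-binary/docs for the exact syntax of your SDK version.

--model-name yolov8n 
--model-type onnx 
--model ./yolov8n.onnx 
--inputs images 
--input-shapes 1,3,640,640 
--dtypes float32 
--quantize-dtype int8 
--outdir onnx_output 
--channel-mean-value 0,0,0,255 
--source-file dataset.txt 
--iterations 1 
--batch-size 1 
--kboard VIM4 
--print-level 0 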

Once the modifications are done, run the following command to convert the model.

$ bash convert-in-docker.sh ksnn

If everything works well, you will find the converted files as shown below.

~/vim4_npu_sdk$ ls adla-toolkit-binary/python/
convert  data  dataset.txt  onnx_output  yolov8n.onnx

The converted model (xxxx.adla) and its dynamic library are saved in the directory set by the outdir parameter.
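
For example, if outdir is set to onnx_output (as in the listing above), you can confirm that the .adla model and the .so library were generated with:

$ ls adla-toolkit-binary/python/onnx_output/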

Currently, VIM4 KSNN only supports single-input models.

Important parameters

  • model-name – The name of the generated model.
  • model-type – The model type used in the conversion.
  • model – The path of the model file.
  • weights – The path of the model weight file. Required for Caffe, DarkNet, and MXNet models, e.g. xxx.caffemodel, xxx.weights, xxx.params.
  • inputs – The input name of the model.
  • input-shapes – The input shape of the model.
  • outputs – The output name of the model. Required only for TensorFlow models.
  • dtypes – The data type of the input.
  • quantize-dtype – Quantization type. Currently int8, int16, and uint8 are supported.
  • outdir – The path of the folder where the converted model and dynamic library are saved.
  • channel-mean-value – The normalization parameters of the model, given as “m1,m2,m3,scale”. For an image with channels (data1, data2, data3), the outputs are as follows (see the worked example at the end of this page):
    • Out1 = (data1-m1)/scale
    • Out2 = (data2-m2)/scale
    • Out3 = (data3-m3)/scale
  • source-file – Dataset path and filename.
  • iterations – Number of iterations to run.
  • batch-size – Batch size used for quantization.
  • kboard – The target Khadas board. Currently only VIM4 is supported.
  • disable-per-channel – Disable per-channel quantization. Default True.
  • print-level – Information printing level. Default 0. Options: 0, 1.
  • inference-input-type – After converting to adla, the input data type of the model.
  • inference-output-type – After converting to adla, the output data type of the model.

If you use these two parameters, you need to use “RAW” in the KSNN interface when running inference on VIM4. Currently, these two parameters only support float32.
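
As a worked example of channel-mean-value, a setting of “0,0,0,255” (a common choice assumed here, not a value mandated by the tool) maps 8-bit RGB pixel values into the [0, 1] range:
    • Out1 = (data1 - 0)/255
    • Out2 = (data2 - 0)/255
    • Out3 = (data3 - 0)/255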
