NPU Model Convert

Only the New VIM4 supports the NPU. You can check which VIM4 version you have.

We provide a Docker container for converting models.

Build Docker Environment

Follow the official Docker documentation to install Docker: Install Docker Engine on Ubuntu.

Run the command below to pull the Docker image:

docker pull numbqq/npu-vim4
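
If you want to double-check that the pull succeeded, you can list the image locally (this is standard Docker usage, not an SDK step):

$ docker images numbqq/npu-vim4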

Get Convert Tool

$ git clone https://gitlab.com/khadas/vim4_npu_sdk.git
$ cd vim4_npu_sdk
$ ls
adla-toolkit-binary  adla-toolkit-binary-1.2.0.9  convert-in-docker.sh  Dockerfile  docs  README.md
  • adla-toolkit-binary/docs - SDK documentation
  • adla-toolkit-binary/bin - SDK tools required for model conversion
  • adla-toolkit-binary/demo - Conversion examples

Convert Model

Convert the demo models in Docker:

./convert-in-docker.sh

If everything works well, you will find the converted files below.

~/vim4_npu_sdk$ ls adla-toolkit-binary/demo/
caffe_output     darknet_output  dataset.txt   libstdc++_so  mxnet_output  paddle_output   quantized_tflite_output  tflite_output
convert_adla.sh  data            keras_output  model_source  onnx_output   pytorch_output  tensorflow_output

The converted model file is xxxx.adla.

If you want to convert your own model, modify the script adla-toolkit-binary/demo/convert_adla.sh and then execute ./convert-in-docker.sh, as sketched below. The key parameters to adjust are listed afterwards.
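
A rough sketch of that workflow, run from the repository root: placing your model under model_source/ is just one convenient option, and the model file name is a placeholder.

$ cp /path/to/your_model.onnx adla-toolkit-binary/demo/model_source/
$ vi adla-toolkit-binary/demo/convert_adla.sh    # adjust the parameters described below
$ ./convert-in-docker.sh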

Important parameters

  • model-type - Model type used in the conversion.
  • model (model file) / weights (model weights file) - The model parameter must always be set; weights must also be set when model-type is caffe/darknet/mxnet.
  • inputs/input-shapes - inputs specifies the input node names and input-shapes specifies the sizes of the input nodes. Both need to be set when model-type is pytorch/onnx/mxnet/tensorflow/tflite/paddle.
  • outputs - Output node names. Needs to be set when model-type is tensorflow.
  • dtypes - Input type (optional). Sets the type information corresponding to each input. Defaults to float32.
  • quantize-dtype - Quantization type. Currently int8, int16 and uint8 quantization types are supported.
  • source-file - dataset.txt, a text file containing the paths of the images used for quantization. Both images and npy files are supported.
  • channel-mean-value - Pre-processing parameters, set according to the pre-processing used during model training. It consists of four values: m1, m2, m3 and scale. The first three are mean values and the last one is the scale. For input data with three channels (data1, data2, data3), the pre-processing steps are
    • Out1 = (data1-m1)/scale
    • Out2 = (data2-m2)/scale
    • Out3 = (data3-m3)/scale
  • batch-size - Set the batch-size for the adla file after conversion. Currently, the default value is 1.
  • iterations - Optional. If dataset.txt provides multiple groups of data and all of them should be used for quantization, set iterations so that iterations × batch-size equals the number of data groups.
  • outdir - Directory for the generated files. The default value is the current directory.
  • target-platform - Specify the target platform for the adla file, it should be PRODUCT_PID0XA003 for VIM4.
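
Putting these parameters together, the fragment below sketches how the parameter section of adla-toolkit-binary/demo/convert_adla.sh might look for a hypothetical ONNX classification model. The model name, node names, shapes and mean/scale values are illustrative assumptions, $CONVERT_TOOL stands for the converter invocation already present in the shipped script, and the exact flag syntax is defined by that script and the PDF referenced below.

# Hypothetical parameter set for an ONNX model; adapt names, shapes and
# mean/scale values to your own model.
$CONVERT_TOOL \
    --model-type onnx \
    --model model_source/mobilenet_v2.onnx \
    --inputs "input" \
    --input-shapes "1,3,224,224" \
    --dtypes "float32" \
    --quantize-dtype int8 \
    --source-file dataset.txt \
    --channel-mean-value "128,128,128,255" \
    --batch-size 1 \
    --iterations 1 \
    --outdir onnx_output \
    --target-platform PRODUCT_PID0XA003
# channel-mean-value "128,128,128,255" means each channel is normalized as
# out = (data - 128) / 255 before quantization.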

See Also

For more information, please check docs/model_conversion_user_guide_1.2.pdf.
