Only the new VIM4 supports the NPU; please check your VIM4 version first.
We provide a Docker container for converting models.
Follow the official Docker documentation to install Docker: Install Docker Engine on Ubuntu.
Run the command below to pull the Docker image:
docker pull numbqq/npu-vim4
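To confirm the image was pulled successfully, you can optionally list it:

docker images numbqq/npu-vim4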
Then get the SDK source:

$ git lfs install
$ git lfs clone https://gitlab.com/khadas/vim4_npu_sdk.git
$ cd vim4_npu_sdk
$ ls
adla-toolkit-binary  adla-toolkit-binary-1.2.0.9  convert-in-docker.sh  Dockerfile  docs  README.md
- adla-toolkit-binary/docs - SDK documentation
- adla-toolkit-binary/bin - SDK tools required for model conversion
- adla-toolkit-binary/demo - Conversion examples

Convert the demo model in Docker:
bash convert-in-docker.sh normal
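convert-in-docker.sh is a wrapper that runs the conversion inside the numbqq/npu-vim4 container. The sketch below illustrates what such a wrapper does, assuming the SDK checkout is mounted into the container; the mount paths, container options and arguments shown here are illustrative assumptions, and the authoritative version is the convert-in-docker.sh shipped in the repository:

#!/bin/bash
# Illustrative sketch only -- see convert-in-docker.sh in the repository for the real wrapper.
# Run the demo conversion script inside the NPU SDK container, with the SDK checkout mounted.
docker run --rm -it \
    -v "$(pwd)":/workspace \
    -w /workspace/adla-toolkit-binary/demo \
    numbqq/npu-vim4 \
    bash convert_adla.sh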
If everything works well, you will find the converted files below:
~/vim4_npu_sdk$ ls adla-toolkit-binary/demo/
caffe_output darknet_output dataset.txt libstdc++_so mxnet_output paddle_output quantized_tflite_output tflite_output
convert_adla.sh data keras_output model_source onnx_output pytorch_output tensorflow_output
The converted model is xxxx.adla.
If you want to convert your own model, modify the script adla-toolkit-binary/demo/convert_adla.sh and then execute ./convert-in-docker.sh to convert your model. The main parameters in convert_adla.sh are described below; an example invocation follows the list.
- model-type - Model type used in the conversion.
- model (model files) / weights (model weights file) - The model parameter must always be set. weights needs to be set when model-type is caffe/darknet/mxnet.
- inputs / input-shapes - inputs: input node names. input-shapes: the sizes of the input nodes. These need to be set when model-type is pytorch/onnx/mxnet/tensorflow/tflite/paddle.
- outputs - Output node names. When model-type is tensorflow, outputs needs to be set.
- dtypes - Input type; set the type information corresponding to each input (optional). The default is float32.
- quantize-dtype - Quantization type. Currently the int8, int16 and uint8 quantization types are supported.
- source-file - dataset.txt, a txt file containing the paths of the images used for quantization. Images and npy files are supported.
- channel-mean-value - Pre-processing parameters, set according to the pre-processing method used during model training. It contains four values: m1, m2, m3 and scale. The first three are mean values and the last one is the scale. For input data with three channels (data1, data2, data3), each channel is mean-subtracted and then multiplied by the scale during pre-processing (see the worked example after this list).
- batch-size - Set the batch size of the adla file after conversion. The default value is 1.
- iterations - Optional parameter. If dataset.txt provides multiple groups of data and all of them should be used for quantization, set iterations so that iterations * batch-size = number of data groups.
- outdir - Directory for the generated files. The default value is the current directory.
- target-platform - Specify the target platform of the adla file; it should be PRODUCT_PID0XA003 for VIM4.
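As a concrete reference, the sketch below shows what an ONNX conversion call inside adla-toolkit-binary/demo/convert_adla.sh could look like when built from the parameters above. The tool name (./convert) and the exact flag spellings and values are illustrative assumptions; copy the real invocation from the shipped convert_adla.sh and only change the values for your model:

# Illustrative only: flag names mirror the parameters documented above; the real
# invocation and tool name are defined in adla-toolkit-binary/demo/convert_adla.sh.
./convert \
    --model-type onnx \
    --model ./model_source/your_model.onnx \
    --inputs "input" \
    --input-shapes "1,3,224,224" \
    --dtypes "float32" \
    --quantize-dtype int8 \
    --source-file dataset.txt \
    --channel-mean-value "128 128 128 0.0078125" \
    --batch-size 1 \
    --iterations 1 \
    --outdir onnx_output \
    --target-platform PRODUCT_PID0XA003

For channel-mean-value, assuming the usual mean/scale convention described above, each channel is pre-processed as out = (in - m) * scale, so the example "128 128 128 0.0078125" maps 8-bit inputs in 0..255 to roughly -1..1: (0 - 128) * 0.0078125 = -1 and (255 - 128) * 0.0078125 ≈ 0.99.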
For more information, please check docs/model_conversion_user_guide_1.2.pdf.