Only the New VIM4 supports the NPU. You can check the version of your VIM4 here: VIM4 Versions
We provide a Docker container for converting models.
Follow the official Docker documentation to install Docker: Install Docker Engine on Ubuntu.
$ git clone https://gitlab.com/khadas/vim4_npu_sdk.git
$ cd vim4_npu_sdk
$ ls
bin  demo  docs  README.md
Get the Docker image:
$ cd vim4_npu_sdk
$ docker pull yanwyb/npu:v1
$ docker run -it --name vim4-npu1 -v $(pwd):/home/khadas/npu \
	-v /etc/localtime:/etc/localtime:ro \
	-v /etc/timezone:/etc/timezone:ro \
	yanwyb/npu:v1
Convert the demo model inside the Docker container:
khadas@2655b6cbbc01:~/npu$ cd demo/
khadas@2655b6cbbc01:~/npu/demo$ bash convert_adla.sh
If everything works well, you will find the converted files in the output directories below:
khadas@2655b6cbbc01:~/npu/demo$ ls
caffe_output darknet_output dataset.txt libstdc++_so mxnet_output paddle_output quantized_tflite_output tflite_output
convert_adla.sh data keras_output model_source onnx_output pytorch_output tensorflow_output
--model-type : Model type used in the conversion.

--model / --weights : --model (model file) must always be set. --weights (model weights file) needs to be set when model-type is caffe/darknet/mxnet.

--inputs / --input-shapes : --inputs specifies the input node names. --input-shapes specifies the sizes of the input nodes; it needs to be set when model-type is pytorch/onnx/mxnet/tensorflow/tflite/paddle.

--outputs : Output node names. Needs to be set when model-type is tensorflow.

--dtypes : Input type; set the type information corresponding to the input (optional). Default: float32.

--quantize-dtype : Quantization type. Currently int8, int16 and uint8 are supported.

--source-file : dataset.txt, a text file containing the paths of the images used for quantization. Image and npy files are supported.

--channel-mean-value : Pre-processing parameters, set according to the pre-processing method used during model training. It consists of four values: m1, m2, m3 and scale. The first three are mean values and the last one is the scale. For input data with three channels (data1, data2, data3), each channel is pre-processed as (dataN - mN) * scale.

--batch-size : Batch size of the adla file after conversion. The default value is 1.

--iterations : Optional. If dataset.txt provides multiple groups of data and all of them should be used for quantization, set --iterations so that iterations * batch-size equals the number of data groups.

--outdir : Directory for the generated files. The default value is the current directory: "./".

--target-platform : Target platform of the adla file. The default value is PRODUCT_PID0XA001. You can check your platform with:

khadas@Khadas:~$ cat /proc/cpuinfo
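Putting the options together, a hypothetical full invocation might look like the sketch below. The option names come from the list above; the model file, node names, shapes and mean/scale values are placeholders for your own model, and depending on the SDK version these options may need to be edited inside convert_adla.sh rather than passed on the command line.

```shell
# Hypothetical example only: option names are from the list above,
# all values are placeholders for your own model.
bash convert_adla.sh \
    --model-type onnx \
    --model model_source/my_model.onnx \
    --inputs input \
    --input-shapes 1,3,224,224 \
    --dtypes float32 \
    --quantize-dtype int8 \
    --source-file dataset.txt \
    --channel-mean-value "128 128 128 0.0078125" \
    --batch-size 1 \
    --outdir onnx_output \
    --target-platform PRODUCT_PID0XA001
```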
For more information, please check docs/model_conversion_user_guide_1.2.pdf.
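As a quick sanity check of the --channel-mean-value arithmetic described above, the per-channel normalization (dataN - mN) * scale can be reproduced in the shell. This is an illustrative sketch, not part of the SDK; 128 and 0.0078125 (i.e. 1/128) are a common mean/scale pair that maps 0..255 pixel values to roughly -1..1.

```shell
# Illustrative only: reproduce the channel-mean-value step (data - mean) * scale.
# Usage: preprocess VALUE MEAN SCALE -> prints the normalized value
preprocess() {
  awk -v d="$1" -v m="$2" -v s="$3" 'BEGIN { printf "%.6f\n", (d - m) * s }'
}

preprocess 255 128 0.0078125   # brightest pixel -> ~0.99
preprocess 0   128 0.0078125   # darkest pixel  -> -1.000000
```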