Khadas Docs

~~tag>VIM4 NPU Docker ~~

====== NPU Model Convert ======

{{indexmenu_n>2}}
  
<WRAP important >
Only **New VIM4** supports the NPU; you can [[../configurations/identify-version|check your VIM4 version here]].
</WRAP>

<WRAP tip >
We provide a Docker container for you to convert the model.
</WRAP>
  
===== Build Docker Environment =====
  
Follow the official Docker documentation to install Docker: [[https://docs.docker.com/engine/install/ubuntu/|Install Docker Engine on Ubuntu]].
  
Then pull the Docker image:
  
```shell
docker pull numbqq/npu-vim4
```
  
===== Get Convert Tool =====
  
```shell
$ git clone https://gitlab.com/khadas/vim4_npu_sdk.git --depth=1
$ cd vim4_npu_sdk
$ ls
adla-toolkit-binary  adla-toolkit-binary-1.2.0.9  convert-in-docker.sh  Dockerfile  docs  README.md
```
  
  * ''adla-toolkit-binary/docs'' - SDK documentation
  * ''adla-toolkit-binary/bin'' - SDK tools required for model conversion
  * ''adla-toolkit-binary/demo'' - Conversion examples
  
===== Convert Model =====

Convert the demo models in Docker:
  
```shell
./convert-in-docker.sh
```
  
If everything works well, you will find the converted files listed below.
  
```shell
~/vim4_npu_sdk$ ls adla-toolkit-binary/demo/
caffe_output     darknet_output  dataset.txt   libstdc++_so  mxnet_output  paddle_output   quantized_tflite_output  tflite_output
convert_adla.sh  data            keras_output  model_source  onnx_output   pytorch_output  tensorflow_output
```

The converted model is the ''xxxx.adla'' file.

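If you are unsure where the ''.adla'' files ended up, a ''find'' over the demo directory will list them. The sketch below uses a mock directory layout, since the real paths depend on your SDK checkout:

```shell
# Create a mock demo layout to illustrate the search
# (demo_out/ and model.adla are stand-ins, not real SDK paths)
mkdir -p demo_out/onnx_output
touch demo_out/onnx_output/model.adla

# List every converted .adla file under the output directory
find demo_out -name '*.adla'
```

In a real checkout, run the same ''find'' against ''adla-toolkit-binary/demo'' instead of the mock directory.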
<WRAP tip >
If you want to convert your own model, just modify the script ''adla-toolkit-binary/demo/convert_adla.sh'' and then run ''./convert-in-docker.sh''.
</WRAP>
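When adapting ''convert_adla.sh'', you will usually also point ''source-file'' at your own calibration data. ''dataset.txt'' is just a plain text file with one image path per line; a minimal sketch (the ''data/calib_*.jpg'' names are placeholders, not files shipped with the SDK):

```shell
# Generate a dataset.txt listing calibration images, one path per line
# (data/calib_*.jpg are placeholder names for your own images)
for i in 0 1 2 3; do
  echo "data/calib_${i}.jpg"
done > dataset.txt

cat dataset.txt
```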

==== Important Parameters ====
 +
  * ''model-type'' - Model type used in the conversion.
  * ''model''/''weights'' - Model file and model weights file. ''model'' must always be set; ''weights'' must also be set when ''model-type'' is caffe/darknet/mxnet.
  * ''inputs''/''input-shapes'' - ''inputs'' is the input node names and ''input-shapes'' is the sizes of the input nodes. Both need to be set when ''model-type'' is pytorch/onnx/mxnet/tensorflow/tflite/paddle.
  * ''outputs'' - Output node names. Needs to be set when ''model-type'' is tensorflow.
  * ''dtypes'' - Input type, the type information corresponding to each input (optional). Defaults to ''float32''.
  * ''quantize-dtype'' - Quantization type. Currently ''int8'', ''int16'' and ''uint8'' quantization types are supported.
  * ''source-file'' - ''dataset.txt'', a text file containing the paths of the images used for quantization. Both images and npy files are supported.
  * ''channel-mean-value'' - Pre-processing parameters, set according to the pre-processing used during model training. It contains four values: m1, m2, m3 and scale. The first three are mean values and the last one is the scale. For input data with three channels (data1, data2, data3), the pre-processing steps are:
    * Out1 = (data1 - m1)/scale
    * Out2 = (data2 - m2)/scale
    * Out3 = (data3 - m3)/scale
  * ''batch-size'' - Batch size of the adla file after conversion. The default value is 1.
  * ''iterations'' - Optional parameter. If ''dataset.txt'' provides multiple groups of data and all of them should be used for quantization, set ''iterations'' so that iterations × batch-size = number of data groups.
  * ''outdir'' - Directory for the generated files. The default value is the current directory.
  * ''target-platform'' - Target platform of the adla file; it should be ''PRODUCT_PID0XA003'' for VIM4.
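To make the ''channel-mean-value'' arithmetic concrete, here is a quick check with illustrative values m1 = m2 = m3 = 127.5 and scale = 127.5 (a common normalization that maps 8-bit pixels from [0, 255] into [-1, 1]; these are not SDK defaults):

```shell
# (data - m)/scale with m = 127.5, scale = 127.5:
# pixel 0 -> -1, pixel 127.5 -> 0, pixel 255 -> 1
awk 'BEGIN {
  m = 127.5; scale = 127.5
  split("0 127.5 255", data, " ")
  for (i = 1; i <= 3; i++)
    printf "%g -> %g\n", data[i], (data[i] - m) / scale
}'
```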

===== See Also =====

For more information, please check ''docs/model_conversion_user_guide_1.2.pdf''.
  
Last modified: 2024/04/18 04:33 by nick