Khadas Docs

~~tag>VIM4 NPU Docker ~~

====== NPU Model Convert ======

{{indexmenu_n>2}}
  
<WRAP important >
Only **New VIM4** supports NPU, you can [[../configurations/identify-version|check the VIM4 version here]].
</WRAP>
  
  
===== Build Docker Environment =====
  
Follow the official Docker docs to install Docker: [[https://docs.docker.com/engine/install/ubuntu/|Install Docker Engine on Ubuntu]].
  
Then run the command below to get the Docker image:
  
```shell
docker pull numbqq/npu-vim4
```
  
===== Get Convert Tool =====
  
```shell
$ git clone https://gitlab.com/khadas/vim4_npu_sdk.git
$ cd vim4_npu_sdk
$ ls
adla-toolkit-binary  adla-toolkit-binary-1.2.0.9  convert-in-docker.sh  Dockerfile  docs  README.md
```

  * ''adla-toolkit-binary/docs'' - SDK documentation
  * ''adla-toolkit-binary/bin'' - SDK tools required for model conversion
  * ''adla-toolkit-binary/demo'' - Conversion examples

===== Convert Model =====
  
Convert the demo model in Docker:
  
```shell
./convert-in-docker.sh
```
  
If everything works well, you will find the converted files below:
  
```shell
~/vim4_npu_sdk$ ls adla-toolkit-binary/demo/
caffe_output     darknet_output  dataset.txt   libstdc++_so  mxnet_output  paddle_output   quantized_tflite_output  tflite_output
convert_adla.sh  data            keras_output  model_source  onnx_output   pytorch_output  tensorflow_output
```
  
The converted model is ''xxxx.adla''.

<WRAP tip >
If you want to convert your own model, just modify the script ''adla-toolkit-binary/demo/convert_adla.sh'' and then execute ''./convert-in-docker.sh'' to convert your model.
</WRAP>
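As a rough sketch of the kind of edit described in the tip above, the conversion command inside ''convert_adla.sh'' combines the parameters documented in the next section. Note that the tool name (''$CONVERT_TOOL'') and every path, node name, and shape below are hypothetical placeholders for illustration, not values taken from the SDK:

```shell
# Hypothetical convert_adla.sh excerpt for a custom ONNX model.
# $CONVERT_TOOL, the model path, node names and shapes are all placeholders.
$CONVERT_TOOL --model-type onnx \
    --model ./model_source/my_model.onnx \
    --inputs "input" \
    --input-shapes "1,3,224,224" \
    --quantize-dtype int8 \
    --source-file dataset.txt \
    --batch-size 1 \
    --outdir onnx_output \
    --target-platform PRODUCT_PID0XA003
```

Check the demo scripts under ''adla-toolkit-binary/demo/'' for the exact tool invocation and argument syntax used by your SDK version.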

==== Important parameters ====
  
  * ''model-type'' - Model type used in the conversion.
  * ''model(model files)/weights(model weights file)'' - The ''model'' parameter must be set. ''weights'' needs to be set when model-type is caffe/darknet/mxnet.
  * ''inputs/input-shapes'' - ''inputs'': input node names. ''input-shapes'': the sizes of the input nodes. These need to be set when model-type is pytorch/onnx/mxnet/tensorflow/tflite/paddle.
  * ''outputs'' - Output node names. Needs to be set when model-type is tensorflow.
  * ''dtypes'' - Input type; set the type information corresponding to the input (optional). The default is ''float32''.
  * ''quantize-dtype'' - Quantization type. Currently ''int8'', ''int16'' and ''uint8'' are supported.
  * ''source-file'' - ''dataset.txt''. This text file contains paths to the images used for quantization; both image and npy files are supported.
  * ''channel-mean-value'' - Pre-processing parameters, set according to the pre-processing used during model training. It includes four values: m1, m2, m3, and scale. The first three are mean values and the last is the scale. For input data with three channels (data1, data2, data3), the pre-processing steps are:
    * Out1 = (data1-m1)/scale
    * Out2 = (data2-m2)/scale
    * Out3 = (data3-m3)/scale
  * ''batch-size'' - Batch size for the adla file after conversion. The default value is 1.
  * ''iterations'' - Optional. If ''dataset.txt'' provides multiple groups of data and all of them should be used for quantization, set ''iterations'' so that iterations * batch-size equals the number of data groups.
  * ''outdir'' - Directory for the generated files. The default is the current directory.
  * ''target-platform'' - Target platform for the adla file; it should be ''PRODUCT_PID0XA003'' for VIM4.
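To make the ''channel-mean-value'' arithmetic concrete, the snippet below applies Out = (data - m)/scale to a single three-channel value. The numbers m1=m2=m3=128 and scale=256 are illustrative examples only, not SDK defaults; use the values that match your model's training pre-processing.

```shell
# Apply channel-mean-value pre-processing to one RGB value (200, 100, 50)
# with illustrative parameters m1=m2=m3=128 and scale=256.
echo "200 100 50" | awk '{ m = 128; s = 256;
    printf "%.3f %.3f %.3f\n", ($1 - m) / s, ($2 - m) / s, ($3 - m) / s }'
# prints: 0.281 -0.109 -0.305
```

If the converted model's outputs look wrong at inference time, a mismatch between these values and the training-time normalization is a common cause.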
  
===== See Also =====
  
For more information, please check ''docs/model_conversion_user_guide_1.2.pdf''.
Last modified: 2023/06/15 06:08 by nick