Khadas Docs

Amazing Khadas, always amazes you!

products:sbc:edge2:npu:npu-convert [2023/04/09 22:16]
hyphop [Convert]
products:sbc:edge2:npu:npu-convert [2024/04/25 03:47] (current)
louis
{{indexmenu_n>3}}

====== NPU Model Convert ======

===== Build Virtual Environment =====

The SDK only supports **python3.6** or **python3.8**. Here is an example of creating a virtual environment for **python3.8**.
  
Install python packages.
  
```shell
sudo apt update
sudo apt install python3-dev python3-numpy
```
  
  
Create and activate the virtual environment with conda.

```shell
conda create -n npu-env python=3.8
conda activate npu-env     # activate
conda deactivate           # deactivate
```
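Once the environment is active, it is worth confirming the interpreter version before installing the wheels, since the SDK only targets python3.6 and python3.8. A minimal sketch (not part of the SDK) that checks the running interpreter:

```python
import sys

SUPPORTED = {(3, 6), (3, 8)}  # interpreter versions the SDK wheels target

def sdk_supported(version_info=None):
    """Return True if the interpreter's major.minor version is supported."""
    if version_info is None:
        version_info = sys.version_info
    return tuple(version_info[:2]) in SUPPORTED

if __name__ == "__main__":
    print(sdk_supported())
```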
  
===== Get Convert Tool =====

Download the tool from [[gh>rockchip-linux/rknn-toolkit2]].

```shell
git clone https://github.com/rockchip-linux/rknn-toolkit2.git
git -C rknn-toolkit2 checkout 9ad79343fae625f4910242e370035fcbc40cc31a
```
Install the dependencies and the RKNN Toolkit2 package.

```shell
cd rknn-toolkit2
sudo apt-get install python3 python3-dev python3-pip
sudo apt-get install libxslt1-dev zlib1g-dev libglib2.0 libsm6 libgl1-mesa-glx libprotobuf-dev gcc cmake
pip3 install -r doc/requirements_cp38-*.txt
pip3 install packages/rknn_toolkit2-*-cp38-cp38-linux_x86_64.whl
```
  
===== Convert Model =====

Converting a model has five main steps: create the RKNN object, configure pre-processing, load the model, build the model, and export the RKNN model. Here, take the ''yolov5'' ''onnx'' model as an example.

Create the RKNN object.

```python
# Create RKNN object
from rknn.api import RKNN

rknn = RKNN(verbose=True)
```

Set the pre-process config.

```python
# pre-process config
print('--> Config model')
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
print('done')
```

  * **mean_values** - The mean values used for input normalization.
  * **std_values** - The scale values used for input normalization: model input = (image - mean_values) / std_values.
  * **target_platform** - The target chip; choose ''rk3588'' here.
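The normalization formula above can be sanity-checked in plain Python; this is only an illustration of the arithmetic the NPU applies with the configured values, not toolkit code:

```python
mean_values = [0, 0, 0]
std_values = [255, 255, 255]

pixel = [255, 128, 0]  # one RGB pixel from an 8-bit image
model_input = [(c - m) / s for c, m, s in zip(pixel, mean_values, std_values)]
# full-intensity channels map to 1.0, zero channels stay 0.0
```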

Load the model.

```python
# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model='./yolov5.onnx')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')
```

  * **model** - The path of the model file.

Load a model from another platform.

```python
# Load pytorch model
print('--> Loading model')
ret = rknn.load_pytorch(model='./resnet18.pt', input_size_list=[[1, 3, 224, 224]])
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load tensorflow model
print('--> Loading model')
ret = rknn.load_tensorflow(tf_pb='./ssd_mobilenet_v1_coco_2017_11_17.pb',
                           inputs=['Preprocessor/sub'],
                           outputs=['concat', 'concat_1'],
                           input_size_list=[[300, 300, 3]])
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load caffe model
print('--> Loading model')
ret = rknn.load_caffe(model='./mobilenet_v2.prototxt',
                      blobs='./mobilenet_v2.caffemodel')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load tensorflow lite model
print('--> Loading model')
ret = rknn.load_tflite(model='./mobilenet_v1.tflite')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load darknet model
print('--> Loading model')
ret = rknn.load_darknet(model='./yolov3-tiny.cfg',
                        weight='./yolov3.weights')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')
```

  * **inputs/outputs** - Only used for TensorFlow models: the names of the input and output nodes.
  * **input_size_list** - The size and channels of the model input.
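As an illustration of how an ''input_size_list'' entry is read: for the PyTorch example above the dimensions follow NCHW order, while other frameworks use their own layout (the TensorFlow example's ''[300, 300, 3]'' is height, width, channels):

```python
# One entry of input_size_list for the PyTorch resnet18 example (NCHW layout)
input_size_list = [[1, 3, 224, 224]]
n, c, h, w = input_size_list[0]
print(f"batch={n}, channels={c}, height={h}, width={w}")
```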

Build the model.

```python
# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
if ret != 0:
    print('Build model failed!')
    exit(ret)
print('done')
```

  * **do_quantization** - Whether to quantize the model.
  * **dataset** - The path of a txt file that lists the calibration image paths.
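The dataset file is just a plain-text list of image paths, one per line. A minimal sketch that generates one from a directory of images (the directory layout and helper name are illustrative, not part of the toolkit):

```python
import os

def write_dataset(image_dir, out_path='dataset.txt', exts=('.jpg', '.png')):
    """List image files one per line, as rknn.build(dataset=...) expects."""
    paths = sorted(
        os.path.join(image_dir, name)
        for name in os.listdir(image_dir)
        if name.lower().endswith(exts)
    )
    with open(out_path, 'w') as fh:
        fh.write('\n'.join(paths) + '\n')
    return paths
```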

Export the RKNN model.

```python
# Export RKNN model
print('--> Export rknn model')
ret = rknn.export_rknn(export_path='./yolov5_int8.rknn')
if ret != 0:
    print('Export rknn model failed!')
    exit(ret)
print('done')
```

  * **export_path** - The output path of the rknn model.

All the code above can be found in ''rknn-toolkit2/examples'', which covers all the platforms currently supported. Choose ''rk3588'' in ''rknn-toolkit2/examples/onnx/yolov5/test.py'' and run the file to convert the model.
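The repeated ''ret != 0'' checks in the snippets above can be collapsed into a small helper; this is a sketch of a possible refactor, not part of the toolkit's API:

```python
import sys

def check(ret, step):
    """Exit with the toolkit's return code when a conversion step fails."""
    if ret != 0:
        print(f'{step} failed!')
        sys.exit(ret)
    print('done')

# usage with the rknn object from the steps above, e.g.:
# check(rknn.load_onnx(model='./yolov5.onnx'), 'Load model')
# check(rknn.build(do_quantization=True, dataset='./dataset.txt'), 'Build model')
# check(rknn.export_rknn(export_path='./yolov5_int8.rknn'), 'Export rknn model')
```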
  
```diff patch
+++ b/examples/onnx/yolov5/test.py
@@ -240,7 +240,7 @@ if __name__ == '__main__':

     # pre-process config
     print('--> Config model')
+    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
     print('done')
```
  
Run ''test.py'' to generate the rknn model.

```shell
cd examples/onnx/yolov5/
python3 test.py
```
  
<WRAP tip>
''test.py'' also contains code for running inference with the rknn model. You can refer to it to run rknn inference on a PC.
</WRAP>

===== See Also =====

For more usage, please refer to the related documents under ''doc''.
  
  
Last modified: 2023/04/09 22:16 by hyphop