Khadas Docs

====== NPU Model Convert ======

===== Build Virtual Environment =====

The SDK only supports **python3.6** or **python3.8**. Here is an example of creating a virtual environment for **python3.8**.

Install the Python packages.

```shell
$ sudo apt update
$ sudo apt install python3-dev python3-numpy
```
  
Follow this document to install [[https://conda.io/projects/conda/en/stable/user-guide/install/linux.html|conda]].
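After conda is installed, create and activate a Python 3.8 environment for the convert tool. A minimal sketch; the environment name ''npu-env'' is an arbitrary choice:

```shell
# Create a Python 3.8 environment (the name "npu-env" is arbitrary)
$ conda create -n npu-env python=3.8
# Activate it before installing and running the convert tool
$ conda activate npu-env
```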
  
===== Get Convert Tool =====

Download the tool from [[gh>rockchip-linux/rknn-toolkit2]].
  
```shell
$ git clone https://github.com/rockchip-linux/rknn-toolkit2.git
$ git -C rknn-toolkit2 checkout 9ad79343fae625f4910242e370035fcbc40cc31a
```
  
Install the dependencies and the RKNN Toolkit2 package:

```shell
$ cd rknn-toolkit2
$ sudo apt-get install python3 python3-dev python3-pip
$ sudo apt-get install libxslt1-dev zlib1g-dev libglib2.0 libsm6 libgl1-mesa-glx libprotobuf-dev gcc cmake
$ pip3 install -r doc/requirements_cp38-*.txt
$ pip3 install packages/rknn_toolkit2-*-cp38-cp38-linux_x86_64.whl
```
  
===== Convert Model =====

Converting a model has five main steps: create the RKNN object, configure pre-processing, load the model, build the model, and export the RKNN model. Here we take the ''yolov5'' ''onnx'' model as an example.

Create the RKNN object.

```python
# Create RKNN object
from rknn.api import RKNN

rknn = RKNN(verbose=True)
```
  
Pre-process config.

```python
# pre-process config
print('--> Config model')
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
print('done')
```

  * **mean_values** - The mean values used for input normalization.
  * **std_values** - The standard deviation values used for input normalization: model input = (image - mean_values) / std_values.
  * **target_platform** - The target chip; choose ''rk3588'' for Edge2.

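As a quick sanity check of the formula above, the normalization can be reproduced in plain Python (no RKNN required; the pixel values below are made up for illustration):

```python
# model input = (image - mean_values) / std_values
mean_values = [0, 0, 0]
std_values = [255, 255, 255]

pixel = [0, 128, 255]  # one example RGB pixel
normalized = [(p - m) / s for p, m, s in zip(pixel, mean_values, std_values)]
print(normalized)  # scaled into the range [0, 1]
```

With ''mean_values'' of 0 and ''std_values'' of 255, this simply rescales 8-bit pixel values into [0, 1].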
Load the model.

```python
# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model='./yolov5.onnx')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')
```

  * **model** - The path of the model file.

Load models from other frameworks.

```python
# Load PyTorch model
print('--> Loading model')
ret = rknn.load_pytorch(model='./resnet18.pt', input_size_list=[[1, 3, 224, 224]])
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load TensorFlow model
print('--> Loading model')
ret = rknn.load_tensorflow(tf_pb='./ssd_mobilenet_v1_coco_2017_11_17.pb',
                           inputs=['Preprocessor/sub'],
                           outputs=['concat', 'concat_1'],
                           input_size_list=[[300, 300, 3]])
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load Caffe model
print('--> Loading model')
ret = rknn.load_caffe(model='./mobilenet_v2.prototxt',
                      blobs='./mobilenet_v2.caffemodel')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load TensorFlow Lite model
print('--> Loading model')
ret = rknn.load_tflite(model='./mobilenet_v1.tflite')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load Darknet model
print('--> Loading model')
ret = rknn.load_darknet(model='./yolov3-tiny.cfg',
                        weight='./yolov3.weights')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')
```

  * **inputs/outputs** - Only used for TensorFlow models: the names of the input and output tensors.
  * **input_size_list** - The shape (size and channels) of each input.

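The layout of each shape follows the source framework's convention: the PyTorch example above uses NCHW (''[1, 3, 224, 224]''), while the TensorFlow example uses HWC (''[300, 300, 3]''). A plain-Python illustration of the NCHW case:

```python
# NCHW shape from the load_pytorch example: batch, channels, height, width
input_size_list = [[1, 3, 224, 224]]
batch, channels, height, width = input_size_list[0]
values_per_image = channels * height * width
print(values_per_image)  # 150528
```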
Build the model.

```python
# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
if ret != 0:
    print('Build model failed!')
    exit(ret)
print('done')
```

  * **do_quantization** - Whether to quantize the model.
  * **dataset** - The path of a txt file that lists the calibration image paths.

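The dataset file contains one image path per line; the converter reads these images to calibrate the quantization. A minimal sketch that writes such a file (''bus.jpg'' is the sample image shipped with the yolov5 example; for real use, list images representative of your input data):

```python
# dataset.txt: one image path per line, used for quantization calibration
calibration_images = ["./bus.jpg"]

with open("dataset.txt", "w") as f:
    for path in calibration_images:
        f.write(path + "\n")

print(open("dataset.txt").read())
```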
Export the RKNN model.

```python
# Export RKNN model
print('--> Export rknn model')
ret = rknn.export_rknn(export_path='./yolov5_int8.rknn')
if ret != 0:
    print('Export rknn model failed!')
    exit(ret)
print('done')
```

  * **export_path** - The output path of the rknn model.

All the code above can be found in ''rknn-toolkit2/examples'', which covers all currently supported platforms. Set the target platform to ''rk3588'' in ''rknn-toolkit2/examples/onnx/yolov5/test.py'' and run the file to convert the model.

```diff
diff --git a/examples/onnx/yolov5/test.py b/examples/onnx/yolov5/test.py
index a1c9988..f7ce11e 100644
--- a/examples/onnx/yolov5/test.py
+++ b/examples/onnx/yolov5/test.py
@@ -240,7 +240,7 @@ if __name__ == '__main__':
 
     # pre-process config
     print('--> Config model')
-    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]])
+    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
     print('done')
```

Run ''test.py'' to generate the rknn model.

```shell
$ python3 test.py
```

<WRAP tip >
''test.py'' also contains code that runs inference with the rknn model. You can refer to it to run rknn inference on a PC.
</WRAP>

===== See Also =====
  
For more usage, please refer to the related documents under ''doc''.
  
  
Last modified: 2024/04/25 03:47 by louis