====== Edge2 NPU Model Convert ======

===== Build Virtual Environment =====

The SDK only supports specific Python versions, so it is best to work inside a dedicated virtual environment; here a Python 3.8 environment is created with conda.

Install the required Python packages.
```shell
$ sudo apt update
$ sudo apt install python3-dev python3-numpy
```

Create a Python 3.8 virtual environment with conda, activate it before using the SDK, and deactivate it when you are done:

```shell
$ conda create -n npu-env python=3.8
$ conda activate npu-env
$ conda deactivate
```

===== Get Convert Tool =====

Download the convert tool from [[gh>rockchip-linux/rknn-toolkit2]] and check out the tested commit:

```shell
$ git clone https://github.com/rockchip-linux/rknn-toolkit2.git
$ git -C rknn-toolkit2 checkout 9ad79343fae625f4910242e370035fcbc40cc31a
```

Install the dependencies and the RKNN Toolkit2 packages:

```shell
$ cd rknn-toolkit2
$ sudo apt-get install python3 python3-dev python3-pip
$ sudo apt-get install libxslt1-dev zlib1g-dev libglib2.0 libsm6 libgl1-mesa-glx libprotobuf-dev gcc cmake
$ pip3 install -r doc/requirements_cp38-*.txt       # pick the requirements file under doc/ that matches your Python version
$ pip3 install packages/rknn_toolkit2-*-cp38-*.whl  # pick the wheel under packages/ that matches your Python version
```
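
If the toolkit wheel installed into the active environment correctly, the ''RKNN'' class used throughout the rest of this page should import cleanly; a quick sanity check:

```python
# Quick sanity check that rknn-toolkit2 is importable in the current environment
from rknn.api import RKNN

print('rknn-toolkit2 import OK')
```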

===== Convert Model =====

Converting a model has five main steps: create the RKNN object, set the pre-process config, load the model, build the model, and export the RKNN model. Here, take the ''yolov5'' ONNX model in the toolkit's ''examples'' directory as an example.

Create the RKNN object.

```python
# Create RKNN object
from rknn.api import RKNN

rknn = RKNN(verbose=True)
```

Set the pre-process config.

```python
# pre-process config
print('--> Config model')
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
print('done')
```

  * **mean_values** - The mean used to normalize each input channel.
  * **std_values** - The standard deviation used to normalize each input channel: ''model input = (image - mean_values) / std_values'' (see the sketch below).
  * **target_platform** - The target SoC; use ''rk3588'' for Edge2.
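
For intuition, here is a minimal numpy sketch of that normalization (the 640x640 RGB input shape is an assumption based on the yolov5 example, and the real pipeline also resizes/letterboxes the frame):

```python
import numpy as np

# Dummy 640x640 RGB frame standing in for a real input image (assumed shape)
image = np.random.randint(0, 256, size=(640, 640, 3), dtype=np.uint8)

mean_values = np.array([0, 0, 0], dtype=np.float32)
std_values = np.array([255, 255, 255], dtype=np.float32)

# model input = (image - mean_values) / std_values
# With mean 0 and std 255 this simply rescales pixel values from [0, 255] to [0.0, 1.0].
model_input = (image.astype(np.float32) - mean_values) / std_values

print(model_input.min(), model_input.max())
```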

Load the model.

```python
# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model='./yolov5s.onnx')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')
```

  * **model** - The path of the model file.

Models from other frameworks are loaded in the same way; only the load call changes.

```python
# The model paths below are placeholders; replace them with your own files.

# Load pytorch model
print('--> Loading model')
ret = rknn.load_pytorch(model='./model.pt', input_size_list=[[1, 3, 224, 224]])
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load tensorflow model
print('--> Loading model')
ret = rknn.load_tensorflow(tf_pb='./model.pb',
                           inputs=['input'],
                           outputs=['output'],
                           input_size_list=[[1, 224, 224, 3]])
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load caffe model
print('--> Loading model')
ret = rknn.load_caffe(model='./model.prototxt',
                      blobs='./model.caffemodel')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load tensorflow lite model
print('--> Loading model')
ret = rknn.load_tflite(model='./model.tflite')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load darknet model
print('--> Loading model')
ret = rknn.load_darknet(model='./model.cfg',
                        weight='./model.weights')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')
```

  * **inputs** / **outputs** - The names of the input and output nodes of the model.
  * **input_size_list** - The size and channels of the input.

Build the model.

```python
# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
if ret != 0:
    print('Build model failed!')
    exit(ret)
print('done')
```

  * **do_quantization** - Whether to quantize the model.
  * **dataset** - The path of a text file that lists the images used for quantization calibration, one image path per line (see the sketch below).
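
As an illustration, the calibration list is just a plain text file; this minimal sketch writes one (the ''bus.jpg'' name comes from the yolov5 example and is otherwise an assumption):

```python
# dataset.txt lists the quantization calibration images, one path per line
with open('./dataset.txt', 'w') as f:
    f.write('./bus.jpg\n')  # add more representative images for better quantization accuracy
```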

Export the RKNN model.

```python
# Export RKNN model
print('--> Export rknn model')
ret = rknn.export_rknn(export_path='./yolov5s.rknn')
if ret != 0:
    print('Export rknn model failed!')
    exit(ret)
print('done')
```

  * **export_path** - The output path of the exported ''.rknn'' model.
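
Before deploying to the board, the converted model can be sanity-checked on the PC with the toolkit's built-in simulator; a minimal sketch (the dummy input and its shape are assumptions, and no real pre/post-processing is done):

```python
import numpy as np

# Initialise the RKNN Toolkit2 simulator runtime (no target board attached)
ret = rknn.init_runtime()
if ret != 0:
    print('Init runtime failed!')
    exit(ret)

# Feed a dummy 640x640 RGB frame just to confirm the model executes end to end
img = np.random.randint(0, 256, size=(640, 640, 3), dtype=np.uint8)
outputs = rknn.inference(inputs=[img])
print('number of output tensors:', len(outputs))

rknn.release()
```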

All of the above code can be found in ''examples/onnx/yolov5/test.py''; the only change needed for Edge2 is the ''target_platform'':

```diff patch
--- a/examples/onnx/yolov5/test.py
+++ b/examples/onnx/yolov5/test.py
@@ -240,7 +240,7 @@ if __name__ == '__main__':
 
     # pre-process config
     print('--> Config model')
-    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]])
+    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
     print('done')
 
     # Load ONNX model
```

Enter the example directory and run ''test.py'' to convert the model:

```shell
$ cd examples/onnx/yolov5
$ python3 test.py
```

<WRAP tip>
In ''test.py'', a successful run writes the converted ''.rknn'' model file to the example directory.
</WRAP>

===== See Also =====

For more usage, please refer to the related documents under the ''doc'' directory of ''rknn-toolkit2''.