{{indexmenu_n>3}}

====== NPU Model Convert ======

===== Build Virtual Environment =====

The SDK only supports **python3.6** or **python3.8**. Here is an example of creating a virtual environment for **python3.8**.

Install the required system packages:

```shell
$ sudo apt update
$ sudo apt install python3-dev python3-numpy
```

Follow this guide to install [[https://conda.io/projects/conda/en/stable/user-guide/install/linux.html | conda]], then create a virtual environment:

```shell
$ conda create -n npu-env python=3.8
$ conda activate npu-env    # activate
$ conda deactivate          # deactivate
```

===== Get Convert Tool =====

Download the tool from [[gh>rockchip-linux/rknn-toolkit2]] and check out the tested revision:

```shell
$ git clone https://github.com/rockchip-linux/rknn-toolkit2.git
$ cd rknn-toolkit2
$ git checkout 9ad79343fae625f4910242e370035fcbc40cc31a
```

Install the dependencies and the RKNN Toolkit2 package:

```shell
$ sudo apt-get install python3 python3-dev python3-pip
$ sudo apt-get install libxslt1-dev zlib1g-dev libglib2.0 libsm6 libgl1-mesa-glx libprotobuf-dev gcc cmake
$ pip3 install -r doc/requirements_cp38-*.txt
$ pip3 install packages/rknn_toolkit2-*-cp38-cp38-linux_x86_64.whl
```

===== Convert Model =====

Converting a model has five main steps: create the RKNN object, configure pre-processing, load the model, build the model, and export the RKNN model. Here, take the ''yolov5'' ''onnx'' model as an example.

Create the RKNN object:

```python
from rknn.api import RKNN

# Create RKNN object
rknn = RKNN(verbose=True)
```

Configure pre-processing:

```python
# pre-process config
print('--> Config model')
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
print('done')
```

  * **mean_values** - The mean used for input normalization.
  * **std_values** - The scale used for input normalization: model input = (image - mean_values) / std_values.
  * **target_platform** - The target chip; choose ''rk3588'' here.

Load the model:
```python
# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model='./yolov5.onnx')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')
```

  * **model** - The path of the model file.

Loading models from other frameworks:

```python
# Load PyTorch model
print('--> Loading model')
ret = rknn.load_pytorch(model='./resnet18.pt', input_size_list=[[1, 3, 224, 224]])
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load TensorFlow model
print('--> Loading model')
ret = rknn.load_tensorflow(tf_pb='./ssd_mobilenet_v1_coco_2017_11_17.pb',
                           inputs=['Preprocessor/sub'],
                           outputs=['concat', 'concat_1'],
                           input_size_list=[[300, 300, 3]])
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load Caffe model
print('--> Loading model')
ret = rknn.load_caffe(model='./mobilenet_v2.prototxt', blobs='./mobilenet_v2.caffemodel')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load TensorFlow Lite model
print('--> Loading model')
ret = rknn.load_tflite(model='./mobilenet_v1.tflite')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load Darknet model
print('--> Loading model')
ret = rknn.load_darknet(model='./yolov3-tiny.cfg', weight='./yolov3.weights')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')
```

  * **inputs/outputs** - Only used for TensorFlow models; the names of the input and output nodes.
  * **input_size_list** - The size and channels of the input.

Build the model:

```python
# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
if ret != 0:
    print('Build model failed!')
    exit(ret)
print('done')
```

  * **do_quantization** - Whether to quantize the model.
  * **dataset** - The path of a txt file listing the calibration image paths.
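The ''dataset.txt'' file used for quantization is just a plain-text list of calibration image paths, one per line. A minimal sketch for generating it (the directory layout and helper name are hypothetical, not part of the toolkit):

```python
from pathlib import Path

def write_dataset_list(image_dir, out_path='dataset.txt', exts=('.jpg', '.jpeg', '.png')):
    """Write one image path per line, in the format expected by rknn.build(dataset=...)."""
    paths = sorted(p for p in Path(image_dir).iterdir() if p.suffix.lower() in exts)
    Path(out_path).write_text(''.join(f'{p}\n' for p in paths))
    return len(paths)
```

A few dozen representative images are usually listed here so the quantizer can calibrate value ranges on realistic inputs.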
Export the RKNN model:

```python
# Export RKNN model
print('--> Export rknn model')
ret = rknn.export_rknn(export_path='./yolov5_int8.rknn')
if ret != 0:
    print('Export rknn model failed!')
    exit(ret)
print('done')
```

  * **export_path** - The output path of the rknn model.

All the code above can be found in ''rknn-toolkit2/examples'', which covers all currently supported platforms. Set the target platform to ''rk3588'' in ''rknn-toolkit2/examples/onnx/yolov5/test.py'' and run the file to convert the model:

```diff
diff --git a/examples/onnx/yolov5/test.py b/examples/onnx/yolov5/test.py
index a1c9988..f7ce11e 100644
--- a/examples/onnx/yolov5/test.py
+++ b/examples/onnx/yolov5/test.py
@@ -240,7 +240,7 @@ if __name__ == '__main__':
     # pre-process config
     print('--> Config model')
-    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]])
+    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
     print('done')
```

Run ''test.py'' to generate the rknn model:

```shell
$ python3 test.py
```

''test.py'' also contains code for running inference with the rknn model; you can refer to it to test inference on a PC.

===== See Also =====

For more usage, please refer to the related documents under ''doc''.
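As a closing sanity check on the pre-processing parameters used in this example: with ''mean_values=[[0, 0, 0]]'' and ''std_values=[[255, 255, 255]]'', 8-bit pixel values are mapped into the range [0, 1]. A minimal sketch of that arithmetic (the helper name is hypothetical; the formula is the one given in the config step above):

```python
# Reproduce the normalization the NPU pre-processing applies, per channel:
#   model input = (image - mean_values) / std_values
mean_values = [0, 0, 0]
std_values = [255, 255, 255]

def normalize_pixel(pixel):
    """Normalize one RGB pixel with the mean/std used in this example."""
    return [(v - m) / s for v, m, s in zip(pixel, mean_values, std_values)]

print(normalize_pixel([0, 128, 255]))  # 0 -> 0.0, 255 -> 1.0
```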