The SDK only supports Python 3.6 or Python 3.8. Here is an example of creating a virtual environment for Python 3.8.
Install python packages.
$ sudo apt update
$ sudo apt install python3-dev python3-numpy
Follow these docs to install conda.
Then create a virtual environment.
$ conda create -n npu-env python=3.8
$ conda activate npu-env    # activate
$ conda deactivate          # deactivate
Download the tool from rockchip-linux/rknn-toolkit2.
$ git clone https://github.com/rockchip-linux/rknn-toolkit2.git
$ cd rknn-toolkit2
$ git checkout 9ad79343fae625f4910242e370035fcbc40cc31a
$ cd ..
Install dependencies and the RKNN Toolkit2 package.
$ cd rknn-toolkit2
$ sudo apt-get install python3 python3-dev python3-pip
$ sudo apt-get install libxslt1-dev zlib1g-dev libglib2.0 libsm6 libgl1-mesa-glx libprotobuf-dev gcc cmake
$ pip3 install -r doc/requirements_cp38-*.txt
$ pip3 install packages/rknn_toolkit2-*-cp38-cp38-linux_x86_64.whl
Converting a model has five main steps: create the RKNN object, set the pre-process config, load the model, build the model, and export the RKNN model. Here we take the yolov5 ONNX model as an example.
Create RKNN object.
# Create RKNN object
rknn = RKNN(verbose=True)
Pre-process config.
# pre-process config
print('--> Config model')
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
print('done')
mean_values and std_values define the input normalization: model input = (image - mean_values) / std_values. The target_platform is set to rk3588 here.
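To make the normalization formula concrete, here is a small sketch in plain Python (not RKNN code, just the same arithmetic applied to one pixel):

```python
# Sketch: how mean_values/std_values normalize each input channel.
# model_input = (image - mean_values) / std_values
mean_values = [0, 0, 0]
std_values = [255, 255, 255]

def normalize_pixel(pixel):
    """Normalize one RGB pixel (three ints in 0..255) to floats."""
    return [(p - m) / s for p, m, s in zip(pixel, mean_values, std_values)]

print(normalize_pixel([0, 127, 255]))  # each channel lands in [0, 1]
```

With mean 0 and std 255, this simply rescales 0..255 pixel values into the 0..1 range.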
Load model.
# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model='./yolov5.onnx')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')
Loading models from other frameworks:
# Load pytorch model
print('--> Loading model')
ret = rknn.load_pytorch(model='./resnet18.pt', input_size_list=[[1, 3, 224, 224]])
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load tensorflow model
print('--> Loading model')
ret = rknn.load_tensorflow(tf_pb='./ssd_mobilenet_v1_coco_2017_11_17.pb',
                           inputs=['Preprocessor/sub'],
                           outputs=['concat', 'concat_1'],
                           input_size_list=[[300, 300, 3]])
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load caffe model
print('--> Loading model')
ret = rknn.load_caffe(model='./mobilenet_v2.prototxt', blobs='./mobilenet_v2.caffemodel')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load tensorflow lite model
print('--> Loading model')
ret = rknn.load_tflite(model='./mobilenet_v1.tflite')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Load darknet model
print('--> Loading model')
ret = rknn.load_darknet(model='./yolov3-tiny.cfg', weight='./yolov3.weights')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')
Build model
# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
if ret != 0:
    print('Build model failed!')
    exit(ret)
print('done')
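The dataset.txt passed to rknn.build() lists the calibration images used for quantization, one image path per line. As a convenience, here is a minimal sketch (the directory and file names are hypothetical examples) that generates such a file from a folder of images:

```python
import os

# Sketch: write a dataset.txt with one calibration-image path per line.
# "calib_images" is a hypothetical example directory.
def write_dataset(image_dir, out_path="dataset.txt"):
    exts = (".jpg", ".jpeg", ".png", ".bmp")
    paths = sorted(
        os.path.join(image_dir, name)
        for name in os.listdir(image_dir)
        if name.lower().endswith(exts)
    )
    with open(out_path, "w") as fh:
        fh.write("\n".join(paths) + "\n")
    return paths

# Example usage:
# write_dataset("./calib_images", "./dataset.txt")
```

A few dozen representative images are generally used for calibration; the images should resemble the inputs the model will see at inference time.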
Export RKNN model
# Export RKNN model
print('--> Export rknn model')
ret = rknn.export_rknn(export_path='./yolov5_int8.rknn')
if ret != 0:
    print('Export rknn model failed!')
    exit(ret)
print('done')
All of the above code can be found in rknn-toolkit2/examples, which covers all the platforms we currently support. Set the target platform to rk3588 in rknn-toolkit2/examples/onnx/yolov5/test.py and run the file to convert the model.
diff --git a/examples/onnx/yolov5/test.py b/examples/onnx/yolov5/test.py
index a1c9988..f7ce11e 100644
--- a/examples/onnx/yolov5/test.py
+++ b/examples/onnx/yolov5/test.py
@@ -240,7 +240,7 @@ if __name__ == '__main__':

     # pre-process config
     print('--> Config model')
-    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]])
+    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
     print('done')
Run test.py to generate the rknn model.
$ python3 test.py
test.py also contains code for running inference with the rknn model; you can refer to it to run inference on the PC.
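The PC-side inference in test.py follows a load / init_runtime / inference pattern. A minimal sketch of that pattern is below; it requires the rknn-toolkit2 package, the image filename and input size are assumptions taken from the yolov5 example, and the real script also performs post-processing on the outputs:

```python
# Sketch only: needs rknn-toolkit2 and OpenCV installed; runs in the PC simulator.
import cv2
from rknn.api import RKNN

rknn = RKNN()
rknn.load_rknn('./yolov5_int8.rknn')   # load the exported model
ret = rknn.init_runtime()              # no target given => PC simulator
if ret != 0:
    print('Init runtime failed!')
    exit(ret)

img = cv2.imread('./bus.jpg')          # hypothetical example image
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (640, 640))      # yolov5 input size

outputs = rknn.inference(inputs=[img]) # list of output tensors
print([o.shape for o in outputs])
rknn.release()
```

See the full test.py for the yolov5 post-processing (box decoding and NMS) applied to these outputs.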
For more usage, please refer to the related documents under doc.