The code we use:
$ git clone https://github.com/YCG09/chinese_ocr.git
The SDK only supports Python 3.6 or Python 3.8. Here is an example of creating a virtual environment for Python 3.8.
Install python packages.
$ sudo apt update
$ sudo apt install python3-dev python3-numpy
Follow this documentation to install conda.
Then create a virtual environment.
$ conda create -n npu-env python=3.8
$ conda activate npu-env    # activate
$ conda deactivate          # deactivate
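Optionally, you can confirm that the activated environment uses the expected interpreter version (a quick check, not part of the original steps):
$ conda activate npu-env
$ python --version    # should report Python 3.8.x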
Download the tool from rockchip-linux/rknn-toolkit2.
$ git clone https://github.com/rockchip-linux/rknn-toolkit2.git
$ cd rknn-toolkit2
$ git checkout 9ad79343fae625f4910242e370035fcbc40cc31a
Install dependencies and the RKNN Toolkit2 package.
$ sudo apt-get install python3 python3-dev python3-pip
$ sudo apt-get install libxslt1-dev zlib1g-dev libglib2.0 libsm6 libgl1-mesa-glx libprotobuf-dev gcc cmake
$ pip3 install -r doc/requirements_cp38-*.txt
$ pip3 install packages/rknn_toolkit2-*-cp38-cp38-linux_x86_64.whl
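To verify the installation, you can try importing the toolkit in the environment you just set up (a quick sanity check, not part of the original steps):
$ python3 -c "from rknn.api import RKNN; print('rknn-toolkit2 is installed')"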
After training the model, run the following code to modify the network input and output and convert the model to ONNX.
A Keras model (.h5) cannot be converted to an RKNN model directly, so it is first converted to ONNX here. If you want to convert a Keras model, please use model.save to save the model with both the weights and the network structure.
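A minimal sketch of doing that after training (the variable and file names here are only illustrative):
# save the trained model with its network structure and weights (illustrative names)
basemodel.save("models/densenet_ctc_full.h5")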
import onnx
from keras.models import *
import keras
import keras2onnx
from train import get_model
import densenet

# rebuild the network and load the trained weights
basemodel, model = get_model(32, 88)  # input height, classes number
basemodel.load_weights("models/weights_densenet-32-0.40.h5")

# convert the Keras model to ONNX
onnx_model = keras2onnx.convert_keras(basemodel, basemodel.name, target_opset=12)

# fix the input shape to 1x1x32x280 (NCHW) and the output batch size to 1
onnx_model.graph.input[0].type.tensor_type.shape.dim[0].dim_value = int(1)
onnx_model.graph.input[0].type.tensor_type.shape.dim[1].dim_value = int(1)
onnx_model.graph.input[0].type.tensor_type.shape.dim[2].dim_value = int(32)
onnx_model.graph.input[0].type.tensor_type.shape.dim[3].dim_value = int(280)
onnx_model.graph.output[0].type.tensor_type.shape.dim[0].dim_value = int(1)

# remove the first node and reconnect the graph to the input "the_input"
onnx_model.graph.node.remove(onnx_model.graph.node[0])
onnx_model.graph.node[0].input[0] = "the_input"

onnx.save_model(onnx_model, "./densenet_ctc.onnx")
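After exporting, you can optionally validate the ONNX file and inspect its input shape with the onnx package (a small sketch, not part of the original steps):
import onnx

# load the exported model and run the structural checker
model = onnx.load("./densenet_ctc.onnx")
onnx.checker.check_model(model)

# print the fixed input definition for a quick visual check
print(model.graph.input[0])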
Enter rknn-toolkit2/examples/onnx/yolov5 and modify test.py as follows.
# Create RKNN object
rknn = RKNN(verbose=True)

# Pre-process config
print('--> Config model')
rknn.config(mean_values=[0], std_values=[255], target_platform='rk3588')
print('done')

# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model='./densenet_ctc.onnx')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
if ret != 0:
    print('Build model failed!')
    exit(ret)
print('done')

# Export RKNN model
print('--> Export rknn model')
ret = rknn.export_rknn('./densenet_ctc.rknn')
if ret != 0:
    print('Export rknn model failed!')
    exit(ret)
print('done')
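Because do_quantization=True is used, the dataset.txt file must list the calibration images, one image path per line. A minimal example of its contents (the file names below are only placeholders; use images that match the model's input, e.g. text-line crops):
./images/sample_0.png
./images/sample_1.png
./images/sample_2.png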
Run test.py to generate the RKNN model.
$ python3 test.py
Clone the source code from our khadas/edge2-npu.
$ git clone https://github.com/khadas/edge2-npu
$ sudo apt update
$ sudo apt install cmake libopencv-dev
Put densenet_ctc.rknn in edge2-npu/C++/densenet_ctc/data/model.
# Compile
$ bash build.sh

# Run
$ cd install/densenet_ctc
$ ./densenet_ctc data/model/densenet_ctc.rknn data/img/KhadasTeam.png
If the classes of your densenet_ctc model are not the same as the default, please change data/class_str.txt and the OBJ_CLASS_NUM in include/postprocess.h accordingly.
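For example, if your class count matches the get_model(32, 88) call used earlier, the macro in include/postprocess.h would be set like this (a sketch; the exact default value in the header may differ):
// include/postprocess.h: number of classes recognized by the model
#define OBJ_CLASS_NUM 88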