We use the code from bubbliiiing/retinaface-pytorch.
git clone https://github.com/bubbliiiing/retinaface-pytorch
Before training, modify retinaface-pytorch/utils/utils.py
as follows.
diff --git a/utils/utils.py b/utils/utils.py
index 87bb528..4a22f2a 100644
--- a/utils/utils.py
+++ b/utils/utils.py
@@ -25,5 +25,6 @@ def get_lr(optimizer):
         return param_group['lr']
 
 def preprocess_input(image):
-    image -= np.array((104, 117, 123),np.float32)
+    image = image / 255.0
     return image
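This change scales input pixels to [0, 1], which matches the mean_values=[[0, 0, 0]] and std_values=[[255, 255, 255]] normalization configured for the RKNN model later in this guide. Below is a minimal sketch of the patched function; the dummy input is only for illustration.

import numpy as np

# Patched preprocess_input: scale pixels to [0, 1] instead of subtracting
# the Caffe-style BGR means (104, 117, 123).
def preprocess_input(image):
    image = image / 255.0
    return image

# Illustrative check on a dummy 640x640 BGR image
dummy = np.random.randint(0, 256, (640, 640, 3)).astype(np.float32)
out = preprocess_input(dummy)
print(out.min(), out.max())  # values now lie in [0, 1]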
The SDK only supports Python 3.6 or Python 3.8; here is an example of creating a virtual environment for Python 3.8.
Install the required Python packages.
$ sudo apt update
$ sudo apt install python3-dev python3-numpy
Follow this doc to install conda.
Then create a virtual environment.
$ conda create -n npu-env python=3.8
$ conda activate npu-env    # activate
$ conda deactivate          # deactivate
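To confirm the environment uses the expected interpreter, you can run a quick check from Python inside the activated environment; this is just a sanity check and not part of the SDK setup.

import sys

# Run inside the activated npu-env environment; it should report 3.8.x
print(sys.version)
assert sys.version_info[:2] == (3, 8), "expected Python 3.8"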
Download the tool from rockchip-linux/rknn-toolkit2.
$ git clone https://github.com/rockchip-linux/rknn-toolkit2.git
$ cd rknn-toolkit2
$ git checkout 9ad79343fae625f4910242e370035fcbc40cc31a
Install the dependencies and the RKNN Toolkit2 package.
$ sudo apt-get install python3 python3-dev python3-pip
$ sudo apt-get install libxslt1-dev zlib1g-dev libglib2.0 libsm6 libgl1-mesa-glx libprotobuf-dev gcc cmake
$ pip3 install -r doc/requirements_cp38-*.txt
$ pip3 install packages/rknn_toolkit2-*-cp38-cp38-linux_x86_64.whl
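A quick way to verify the installation is to import the toolkit from Python; rknn.api.RKNN is the same class used by the conversion script below.

# Sanity check that RKNN Toolkit2 is importable
from rknn.api import RKNN

rknn = RKNN()
print('rknn-toolkit2 is installed')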
After training the model, convert the PyTorch model to an ONNX model. Create a Python file with the following content and run it.
import torch
import numpy as np
from nets.retinaface import RetinaFace
from utils.config import cfg_mnet, cfg_re50

model_path = "logs/Epoch150-Total_Loss6.2802.pth"

# Build the network in eval mode and load the trained weights
net = RetinaFace(cfg=cfg_mnet, mode='eval').eval()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net.load_state_dict(torch.load(model_path, map_location=device))

# Export with a fixed 1x3x640x640 input
img = torch.zeros(1, 3, 640, 640)
torch.onnx.export(net, img, "./retinaface.onnx", verbose=False, opset_version=12, input_names=['images'])
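Optionally, you can sanity-check the exported ONNX model before converting it. The sketch below assumes onnxruntime is installed; it is not required by the RKNN SDK, and the exact number of output heads depends on the RetinaFace configuration.

# Optional check of the exported ONNX model with onnxruntime
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("./retinaface.onnx")
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = sess.run(None, {"images": dummy})
print([o.shape for o in outputs])  # e.g. bbox, class and landmark outputs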
Enter rknn-toolkit2/examples/onnx/yolov5
and modify test.py
as follows.
# Create RKNN object
rknn = RKNN(verbose=True)

# pre-process config
print('--> Config model')
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
print('done')

# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model='./retinaface.onnx')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
if ret != 0:
    print('Build model failed!')
    exit(ret)
print('done')

# Export RKNN model
print('--> Export rknn model')
ret = rknn.export_rknn('./retinaface.rknn')
if ret != 0:
    print('Export rknn model failed!')
    exit(ret)
print('done')
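Since do_quantization=True is used, rknn.build() needs the dataset.txt referenced above: a plain text file listing calibration image paths, one per line. A small sketch for generating it is shown here; the calib_images directory is a placeholder, use a handful of representative training images.

import glob

# dataset.txt lists calibration image paths, one per line
# ("calib_images" is a placeholder directory)
with open("dataset.txt", "w") as f:
    for path in sorted(glob.glob("calib_images/*.jpg")):
        f.write(path + "\n")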
Run test.py
to generate the RKNN model.
$ python3 test.py
Clone the source code from our khadas/edge2-npu repository.
$ git clone https://github.com/khadas/edge2-npu
$ sudo apt update
$ sudo apt install cmake libopencv-dev
Put retinaface.rknn in edge2-npu/C++/retinaface/data/model.
# Compile
$ bash build.sh

# Run
$ cd install/retinaface
$ ./retinaface data/model/retinaface.rknn data/img/timg.jpg
Put retinaface.rknn in edge2-npu/C++/retinaface_cap/data/model.
# Compile
$ bash build.sh

# Run
$ cd install/retinaface_cap
$ ./retinaface_cap data/model/retinaface.rknn 33
Here, 33 is the camera device index.
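If you are unsure which index your camera uses, one way to probe the available indices is with OpenCV from Python; this assumes opencv-python is installed and is only a convenience check.

import cv2

# Probe camera indices and report the ones that can be opened;
# pass the working index to the retinaface_cap demo.
for idx in range(64):
    cap = cv2.VideoCapture(idx)
    if cap.isOpened():
        print("camera available at index", idx)
    cap.release()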