RetinaFace PyTorch Edge2 Demo - 5

Get Source Code

We use the code from bubbliiiing/retinaface-pytorch.

git clone

Before training, modify retinaface-pytorch/utils/ as follows.

diff --git a/utils/ b/utils/
index 87bb528..4a22f2a 100644
--- a/utils/
+++ b/utils/
@@ -25,5 +25,6 @@ def get_lr(optimizer):
         return param_group['lr']
 def preprocess_input(image):
-    image -= np.array((104, 117, 123),np.float32)
+    image = image / 255.0
     return image
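The patched preprocess_input simply scales 8-bit pixel values into [0, 1] instead of subtracting a per-channel mean; this matches the mean_values=0 / std_values=255 passed to rknn.config() later. A minimal NumPy sketch of the new behavior:

```python
import numpy as np

def preprocess_input(image):
    # Scale pixel values from [0, 255] to [0.0, 1.0], matching the
    # mean/std of (0, 255) configured later for the RKNN conversion.
    image = image / 255.0
    return image

# Example: an all-white 2x2 BGR image maps to all ones
img = np.full((2, 2, 3), 255, dtype=np.float32)
out = preprocess_input(img)
print(out.max(), out.min())  # 1.0 1.0
```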

Convert Model

Build virtual environment

The SDK only supports Python 3.6 or Python 3.8. Here is an example of creating a virtual environment for Python 3.8.

Install python packages.

$ sudo apt update
$ sudo apt install python3-dev python3-numpy

Follow this document to install conda.

Then create a virtual environment.

$ conda create -n npu-env python=3.8
$ conda activate npu-env     #activate
$ conda deactivate           #deactivate
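Since the SDK only works with Python 3.6 or 3.8, it is worth confirming which interpreter the activated environment actually uses. A small illustrative check (the sdk_compatible helper is our own, not part of the SDK):

```python
import sys

def sdk_compatible(version_info):
    # The RKNN Toolkit2 wheels used below target CPython 3.6 or 3.8 only.
    return version_info[:2] in ((3, 6), (3, 8))

# Prints the running interpreter's version and whether the SDK supports it
print(sys.version_info[:2], sdk_compatible(sys.version_info))
```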

Get convert tool

Download the tool from rockchip-linux/rknn-toolkit2.

$ git clone
$ git checkout 9ad79343fae625f4910242e370035fcbc40cc31a

Install the dependencies and the RKNN Toolkit2 package.

$ cd rknn-toolkit2
$ sudo apt-get install python3 python3-dev python3-pip
$ sudo apt-get install libxslt1-dev zlib1g-dev libglib2.0 libsm6 libgl1-mesa-glx libprotobuf-dev gcc cmake
$ pip3 install -r doc/requirements_cp38-*.txt
$ pip3 install packages/rknn_toolkit2-*-cp38-cp38-linux_x86_64.whl


After training the model, we convert the PyTorch model to an ONNX model. Create a Python file with the following content and run it.
import torch
import numpy as np
from nets.retinaface import RetinaFace
from utils.config import cfg_mnet, cfg_re50
model_path = "logs/Epoch150-Total_Loss6.2802.pth"
net = RetinaFace(cfg=cfg_mnet, mode='eval').eval()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net.load_state_dict(torch.load(model_path, map_location=device))
img = torch.zeros(1, 3, 640, 640)
torch.onnx.export(net, img, "./retinaface.onnx", verbose=False, opset_version=12, input_names=['images'])

Enter rknn-toolkit2/examples/onnx/yolov5 and modify the conversion script as follows.
# Create RKNN object
rknn = RKNN(verbose=True)
# pre-process config
print('--> Config model')
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model='./retinaface.onnx')
if ret != 0:
    print('Load model failed!')
# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
if ret != 0:
    print('Build model failed!')
# Export RKNN model
print('--> Export rknn model')
ret = rknn.export_rknn('./retinaface.rknn')
if ret != 0:
    print('Export rknn model failed!')
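rknn.build() quantizes the model against the calibration images listed in dataset.txt, one image path per line. A hedged sketch of generating such a file from a folder of calibration images (the folder name and helper are our own, not part of the toolkit):

```python
from pathlib import Path

def write_dataset(image_dir, out_file="dataset.txt", pattern="*.jpg"):
    # List every matching calibration image, one path per line,
    # in the plain-text format rknn.build() expects for its dataset.
    paths = sorted(str(p) for p in Path(image_dir).glob(pattern))
    Path(out_file).write_text("\n".join(paths) + "\n")
    return paths

# Example usage (the 'images' calibration folder is an assumed location):
# write_dataset("images")
```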

Run the script to generate the RKNN model.

$ python3


Get source code

Clone the source code from our khadas/edge2-npu repository.

$ git clone

Install dependencies

$ sudo apt update
$ sudo apt install cmake libopencv-dev

Compile and run

Picture input demo

Put retinaface.rknn in edge2-npu/C++/retinaface/data/model.

# Compile
$ bash
# Run
$ cd install/retinaface
$ ./retinaface data/model/retinaface.rknn data/img/timg.jpg

Camera input demo

Put retinaface.rknn in edge2-npu/C++/retinaface_cap/data/model.

# Compile
$ bash
# Run
$ cd install/retinaface_cap
$ ./retinaface_cap data/model/retinaface.rknn 33

33 is the camera device index.
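Both demos post-process the raw network output on the CPU: decode the box offsets against the anchor priors, filter by confidence, then apply non-maximum suppression to drop duplicate detections. For reference, here is a minimal pure-NumPy NMS sketch; it illustrates the idea only and is not taken from the demo's C++ source:

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.4):
    # boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))  # highest-scoring remaining box survives
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the surviving box with every remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thres]  # drop boxes overlapping too much
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]: the overlapping second box is suppressed
```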

Last modified: 2023/09/20 03:12 by louis