Khadas Docs


FaceNet PyTorch Edge2 Demo - 6

Get Source Code

The code we use is bubbliiiing/facenet-pytorch.

$ git clone https://github.com/bubbliiiing/facenet-pytorch.git

Convert Model

Build virtual environment

The SDK only supports Python 3.6 or Python 3.8; here is an example of creating a Python 3.8 virtual environment.

Install python packages.

$ sudo apt update
$ sudo apt install python3-dev python3-numpy

Follow this guide to install Conda.

Then create a virtual environment.

$ conda create -n npu-env python=3.8
$ conda activate npu-env     #activate
$ conda deactivate           #deactivate

Get convert tool

Download Tool from rockchip-linux/rknn-toolkit2.

$ git clone https://github.com/rockchip-linux/rknn-toolkit2.git
$ cd rknn-toolkit2
$ git checkout 9ad79343fae625f4910242e370035fcbc40cc31a
$ cd ..

Install dependencies and the RKNN Toolkit2 package.

$ cd rknn-toolkit2
$ sudo apt-get install python3 python3-dev python3-pip
$ sudo apt-get install libxslt1-dev zlib1g-dev libglib2.0 libsm6 libgl1-mesa-glx libprotobuf-dev gcc cmake
$ pip3 install -r doc/requirements_cp38-*.txt
$ pip3 install packages/rknn_toolkit2-*-cp38-cp38-linux_x86_64.whl


After training the model, modify facenet-pytorch/nets/facenet.py as follows.

diff --git a/nets/facenet.py b/nets/facenet.py
index e7a6fcd..93a81f1 100644
--- a/nets/facenet.py
+++ b/nets/facenet.py
@@ -75,7 +75,7 @@ class Facenet(nn.Module):
             x = self.Dropout(x)
             x = self.Bottleneck(x)
             x = self.last_bn(x)
-            x = F.normalize(x, p=2, dim=1)
             return x
         x = self.backbone(x)
         x = self.avg(x)

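Because the diff removes the in-graph F.normalize call, the exported model emits unnormalized embeddings, so L2 normalization has to be applied on the host after NPU inference. A minimal stdlib sketch (the helper name l2_normalize is illustrative, not part of the demo code):

```python
import math

def l2_normalize(vec, eps=1e-12):
    # Mirror the removed F.normalize(x, p=2, dim=1): divide the
    # embedding by its L2 norm (guarded against a zero vector).
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / max(norm, eps) for v in vec]

print(l2_normalize([3.0, 4.0]))  # [0.6, 0.8]
```

Doing the normalization on the CPU keeps the NPU graph free of an op that quantizes poorly, while the distances computed later are unchanged.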
Create a Python file with the following content and run it to convert the model to ONNX.
import torch
from nets.facenet import Facenet as facenet

# Path to the trained checkpoint produced by facenet-pytorch
model_path = "logs/ep092-loss0.177-val_loss1.547.pth"
net = facenet(backbone="mobilenet", mode="predict").eval()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net.load_state_dict(torch.load(model_path, map_location=device), strict=False)

# Dummy 1x3x160x160 input: FaceNet expects 160x160 RGB images
img = torch.zeros(1, 3, 160, 160)
torch.onnx.export(net, img, "./facenet.onnx", verbose=False, opset_version=12, input_names=['images'])

Enter rknn-toolkit2/examples/onnx/yolov5 and modify test.py as follows.
# Create RKNN object
rknn = RKNN(verbose=True)
# pre-process config
print('--> Config model')
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model='./facenet.onnx')
if ret != 0:
    print('Load model failed!')
# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
if ret != 0:
    print('Build model failed!')
# Export RKNN model
print('--> Export rknn model')
ret = rknn.export_rknn('./facenet.rknn')
if ret != 0:
    print('Export rknn model failed!')
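The dataset argument passed to rknn.build names a plain-text calibration list used for quantization: one image path per line, relative to where the script runs. A sketch of such a dataset.txt (the file names are placeholders, not files shipped with the demo):

```
img/face_0.jpg
img/face_1.jpg
```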

Run the modified script to generate the RKNN model.

$ python3 test.py
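For reference, mean_values=[[0, 0, 0]] and std_values=[[255, 255, 255]] in the rknn.config call above fold the usual /255 scaling into the model, so the runtime can be fed raw 0-255 RGB pixels. The per-channel transform it applies reduces to:

```python
def rknn_normalize(pixel, mean=0.0, std=255.0):
    # The RKNN runtime computes (pixel - mean) / std per channel,
    # mapping raw 8-bit values onto the 0..1 range the model expects.
    return (pixel - mean) / std

print(rknn_normalize(255))  # 1.0
print(rknn_normalize(0))    # 0.0
```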


Get source code

Clone the source code from our khadas/edge2-npu repository.

$ git clone https://github.com/khadas/edge2-npu.git

Install dependencies

$ sudo apt update
$ sudo apt install cmake libopencv-dev

Compile and run

Picture input demo

Put facenet.rknn in edge2-npu/C++/facenet/data/model.

There are two modes in this demo. One converts face images into feature vectors and saves the vectors in a face library. The other compares an input face image with the faces in the library and outputs the Euclidean distance and cosine similarity.
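The two comparison metrics the demo reports can be sketched in plain Python (the function names here are illustrative, not the demo's C++ symbols):

```python
import math

def euclidean_distance(a, b):
    # Lower distance means the two face embeddings are more alike.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    # Closer to 1.0 means the embeddings point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

same = [0.6, 0.8]
print(euclidean_distance(same, same))                 # 0.0
print(round(cosine_similarity(same, same), 6))        # 1.0
```

On L2-normalized embeddings the two metrics are equivalent rankings, since the squared Euclidean distance equals 2 - 2 * cosine similarity.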

Put the library faces in edge2-npu/C++/facenet/img and compile.

# Compile
$ bash build.sh
# Run mode 1
$ cd install/facenet
$ ./facenet data/model/facenet.rknn 1

After running mode 1, a file named face_feature_lib is generated in edge2-npu/C++/facenet. Once this file exists, you can run mode 2.

# Run mode 2
$ ./facenet data/model/facenet.rknn data/img/lin_1.jpg
Last modified: 2023/09/20 03:12 by louis