The code we use is bubbliiiing/facenet-pytorch.
git clone https://github.com/bubbliiiing/facenet-pytorch.git
The SDK only supports python3.6 or python3.8, here is an example of creating a virtual environment for python3.8.
Install python packages.
$ sudo apt update
$ sudo apt install python3-dev python3-numpy
Follow this doc to install conda.
Then create a virtual environment.
$ conda create -n npu-env python=3.8
$ conda activate npu-env    # activate
$ conda deactivate          # deactivate
Download the tool from rockchip-linux/rknn-toolkit2.
$ git clone https://github.com/rockchip-linux/rknn-toolkit2.git
$ git -C rknn-toolkit2 checkout 9ad79343fae625f4910242e370035fcbc40cc31a
Install dependencies and the RKNN Toolkit2 package.
$ cd rknn-toolkit2
$ sudo apt-get install python3 python3-dev python3-pip
$ sudo apt-get install libxslt1-dev zlib1g-dev libglib2.0 libsm6 libgl1-mesa-glx libprotobuf-dev gcc cmake
$ pip3 install -r doc/requirements_cp38-*.txt
$ pip3 install packages/rknn_toolkit2-*-cp38-cp38-linux_x86_64.whl
After training the model, modify facenet-pytorch/nets/facenet.py as follows.
diff --git a/nets/facenet.py b/nets/facenet.py
index e7a6fcd..93a81f1 100644
--- a/nets/facenet.py
+++ b/nets/facenet.py
@@ -75,7 +75,7 @@ class Facenet(nn.Module):
             x = self.Dropout(x)
             x = self.Bottleneck(x)
             x = self.last_bn(x)
-            x = F.normalize(x, p=2, dim=1)
             return x
         x = self.backbone(x)
         x = self.avg(x)
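Because the diff removes F.normalize from the forward pass, the exported model emits unnormalized embeddings. If your post-processing expects unit-length vectors, the same L2 normalization can be applied on the host after NPU inference — a minimal numpy sketch (function and variable names are illustrative, not part of the demo):

```python
import numpy as np

def l2_normalize(feat, eps=1e-10):
    # Scale a feature vector to unit L2 norm, mirroring what
    # F.normalize(x, p=2, dim=1) did inside the original network.
    return feat / (np.linalg.norm(feat) + eps)

raw = np.array([3.0, 4.0])   # illustrative raw embedding from the NPU
unit = l2_normalize(raw)     # -> [0.6, 0.8], L2 norm 1.0
```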
Create a Python file with the following content and run it to convert the model to ONNX.
import torch
import numpy as np
from nets.facenet import Facenet as facenet

model_path = "logs/ep092-loss0.177-val_loss1.547.pth"
net = facenet(backbone="mobilenet", mode="predict").eval()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net.load_state_dict(torch.load(model_path, map_location=device), strict=False)

img = torch.zeros(1, 3, 160, 160)
torch.onnx.export(net, img, "./facenet.onnx", verbose=False, opset_version=12, input_names=['images'])
Enter rknn-toolkit2/examples/onnx/yolov5 and modify test.py as follows.
# Create RKNN object
rknn = RKNN(verbose=True)

# Pre-process config
print('--> Config model')
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
print('done')

# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model='./facenet.onnx')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
if ret != 0:
    print('Build model failed!')
    exit(ret)
print('done')

# Export RKNN model
print('--> Export rknn model')
ret = rknn.export_rknn('./facenet.rknn')
if ret != 0:
    print('Export rknn model failed!')
    exit(ret)
print('done')
Run test.py to generate the rknn model.
$ python3 test.py
Clone the source code from our khadas/edge2-npu.
$ git clone https://github.com/khadas/edge2-npu
$ sudo apt update
$ sudo apt install cmake libopencv-dev
Put facenet.rknn in edge2-npu/C++/facenet/data/model.
This demo has two modes. One converts face images into feature vectors and saves the vectors in a face library. The other compares an input face image with the faces in the library and outputs the Euclidean distance and cosine similarity.
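The two metrics the demo reports can be sketched in a few lines of numpy (a minimal illustration of the math, not the demo's C++ implementation):

```python
import numpy as np

def euclidean_distance(a, b):
    # Straight-line distance between two embeddings; smaller = more similar.
    return float(np.linalg.norm(a - b))

def cosine_similarity(a, b):
    # Cosine of the angle between two embeddings; closer to 1 = more similar.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
# Orthogonal vectors: distance is sqrt(2) ~ 1.414, similarity is 0.0
```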
Put the library faces in edge2-npu/C++/facenet/img and compile.
# Compile
$ bash build.sh

# Run mode 1
$ cd install/facenet
$ ./facenet data/model/facenet.rknn 1
After running mode 1, a file named face_feature_lib will be generated in edge2-npu/C++/facenet. Once this file exists, you can run mode 2.
# Run mode 2
$ ./facenet data/model/facenet.rknn data/img/lin_1.jpg