This demo integrates RetinaFace and FaceNet. Please refer to RetinaFace PyTorch Edge2 Demo - 5 and FaceNet PyTorch Edge2 Demo - 6 for model conversion; here we only run inference on the NPU.
Clone the source code from our khadas/edge2-npu.
$ git clone https://github.com/khadas/edge2-npu
$ sudo apt update
$ sudo apt install cmake libopencv-dev
Like the FaceNet demo, this demo has two modes. Mode 1 converts face images into feature vectors and saves the vectors in the face library. Mode 2 compares an input face image against the faces in the library and outputs the Euclidean distance and cosine similarity.
Put retinaface.rknn and facenet.rknn in edge2-npu/C++/face_recognition/data/model.
# Compile
$ bash build.sh

# Run mode 1
$ cd install/face_recognition
$ ./face_recognition data/model/retinaface.rknn data/model/facenet.rknn 1
After running mode 1, a file named face_feature_lib will be generated in edge2-npu/C++/face_recognition/install/face_recognition/data. With this file generated, you can run mode 2.
# Run mode 2
$ ./face_recognition data/model/retinaface.rknn data/model/facenet.rknn data/img/lin_1.jpg
Put retinaface.rknn and facenet.rknn in edge2-npu/C++/face_recognition_cap/data/model.
# Compile
$ bash build.sh
Put your full face photo into edge2-npu/C++/face_recognition/data/img. Recompile and run mode 1 to generate face_feature_lib, then put face_feature_lib in edge2-npu/C++/face_recognition_cap/install/face_recognition_cap/data.
# Run
$ cd install/face_recognition_cap
$ ./face_recognition data/model/retinaface.rknn data/model/facenet.rknn 33

33 is the camera device index.
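The camera index is the N in the V4L2 device node /dev/videoN. A quick way to check which index your camera uses (the /dev/video33 path below is only a hypothetical example):

```shell
# List the V4L2 video device nodes present on the board.
ls /dev/video* 2>/dev/null

# The demo's last argument is the N in /dev/videoN.
# Extracting the index from a device path with parameter expansion:
dev=/dev/video33          # hypothetical device node
idx=${dev#/dev/video}     # strip the /dev/video prefix
echo "$idx"               # prints 33
```

If v4l-utils is installed, `v4l2-ctl --list-devices` also shows which node belongs to which camera.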