This demo integrates RetinaFace and FaceNet. Please refer to RetinaFace PyTorch VIM4 Demo - 5 and FaceNet PyTorch VIM4 Demo - 6 for model conversion; here we only run inference on the NPU.
Clone the source code from our khadas/vim4_npu_applications repository and install the dependencies.

```shell
$ git clone https://github.com/khadas/vim4_npu_applications
$ sudo apt update
$ sudo apt install libopencv-dev python3-opencv cmake
```
Like the FaceNet demo, this demo has two modes. Mode 1 converts face images into feature vectors and saves the vectors in the face library. Mode 2 compares an input face image with the faces in the library and outputs the Euclidean distance and cosine similarity.
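The two comparison metrics the demo outputs can be sketched in a few lines. This is an illustrative NumPy snippet, not the demo's C++ source; the toy 4-D vectors stand in for real FaceNet embeddings (a smaller distance and a similarity closer to 1 both indicate a closer match).

```python
import numpy as np

def euclidean_distance(a, b):
    # L2 distance between two embedding vectors
    return float(np.linalg.norm(a - b))

def cosine_similarity(a, b):
    # cosine of the angle between the two vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy 4-D "embeddings" standing in for real FaceNet feature vectors
v1 = np.array([1.0, 0.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0, 0.0])

print(euclidean_distance(v1, v2))  # sqrt(2) ≈ 1.414
print(cosine_similarity(v1, v2))   # 0.0 (orthogonal vectors)
```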
Put `retinaface_int8.adla` and `facenet_int8.adla` in `vim4_npu_applications/face_recognition/data/model`.
```shell
# Compile
$ cd vim4_npu_applications/face_recognition
$ mkdir build
$ cd build
$ cmake ..
$ make

# Run mode 1
$ sudo ./face_recognition -M ../data/model/retinaface_int8.adla -m ../data/model/facenet_int8.adla -p 1
```
After running mode 1, a file named `face_feature_lib` will be generated in `vim4_npu_applications/face_recognition`. With this file generated, you can run mode 2.
```shell
# Run mode 2
$ sudo ./face_recognition -M ../data/model/retinaface_int8.adla -m ../data/model/facenet_int8.adla -p ../data/img/lin_2.jpg
```
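Conceptually, mode 2 scores the input face against every stored vector and reports the closest one. The sketch below assumes a simple dict-based library with hypothetical names (`best_match`, `alice`, `bob`); the actual demo's on-disk `face_feature_lib` format is not documented here.

```python
import numpy as np

def best_match(query, library):
    """Return (name, distance, similarity) of the closest library entry.

    `library` maps a person's name to their stored embedding vector.
    The entry with the smallest Euclidean distance wins.
    """
    best = None
    for name, vec in library.items():
        dist = float(np.linalg.norm(query - vec))
        sim = float(np.dot(query, vec) / (np.linalg.norm(query) * np.linalg.norm(vec)))
        if best is None or dist < best[1]:
            best = (name, dist, sim)
    return best

# hypothetical library built from two enrolled faces
library = {
    "alice": np.array([1.0, 0.0, 0.0]),
    "bob":   np.array([0.0, 1.0, 0.0]),
}
query = np.array([0.9, 0.1, 0.0])
print(best_match(query, library))  # closest entry is "alice"
```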
Put `retinaface_int8.adla` and `facenet_int8.adla` in `vim4_npu_applications/face_recognition_cap/data/model`.
Put your full face photo into `vim4_npu_applications/face_recognition/data/img` and run mode 1 to generate `face_feature_lib`. Then put `face_feature_lib` in `vim4_npu_applications/face_recognition_cap`.
```shell
# Compile
$ cd vim4_npu_applications/face_recognition_cap
$ mkdir build
$ cd build
$ cmake ..
$ make

# Run
$ sudo ./face_recognition_cap -M ../data/model/retinaface_int8.adla -m ../data/model/facenet_int8.adla -d 0
```
`0` is the camera device index.
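To find which index to pass with `-d`, you can list the V4L2 device nodes on the board. This stdlib-only sketch assumes a Linux system where cameras appear as `/dev/videoN`; the function name `list_camera_indices` is illustrative.

```python
import glob

def list_camera_indices():
    # On Linux, V4L2 camera nodes appear as /dev/videoN;
    # N is the index you pass to the demo via -d.
    indices = []
    for dev in sorted(glob.glob("/dev/video*")):
        suffix = dev.replace("/dev/video", "")
        if suffix.isdigit():
            indices.append(int(suffix))
    return indices

print(list_camera_indices())  # e.g. [0, 1] if two camera nodes exist
```

Note that some cameras expose several `/dev/videoN` nodes; if the first index fails, try the next one.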