Face Recognition VIM4 Demo

This demo integrates RetinaFace and FaceNet. Please refer to RetinaFace PyTorch VIM4 Demo - 5 and FaceNet PyTorch VIM4 Demo - 6 for model conversion. This page only covers running inference on the NPU.

Run inference on the NPU

Get source code

Clone the source code from our khadas/vim4_npu_applications repository.

$ git clone https://github.com/khadas/vim4_npu_applications

Install dependencies

$ sudo apt update
$ sudo apt install libopencv-dev python3-opencv cmake

Compile and run

Picture input demo

Like the FaceNet demo, this demo has two modes. One converts face images into feature vectors and saves the vectors in a face library. The other compares an input face image with the faces in the library and outputs the Euclidean distance and cosine similarity.
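For reference, the two similarity metrics that the comparison mode reports can be sketched in Python. This is a minimal illustration only; the demo itself computes them in C++, and real FaceNet embeddings are typically 128- or 512-dimensional rather than the toy 4-element vectors used here.

```python
import math

def euclidean_distance(a, b):
    # Lower distance means the two face embeddings are more similar.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    # Values closer to 1.0 mean the embeddings point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "feature vectors" standing in for FaceNet embeddings.
v1 = [0.1, 0.2, 0.3, 0.4]
v2 = [0.1, 0.2, 0.3, 0.5]
print(euclidean_distance(v1, v2))
print(cosine_similarity(v1, v2))
```

A recognition pipeline typically accepts a match when the distance falls below (or the similarity rises above) a tuned threshold.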

Put retinaface_int8.adla and facenet_int8.adla in vim4_npu_applications/face_recognition/data/model.

# Compile
$ cd vim4_npu_applications/face_recognition
$ mkdir build
$ cd build
$ cmake ..
$ make
 
# Run mode 1
$ sudo ./face_recognition -M ../data/model/retinaface_int8.adla -m ../data/model/facenet_int8.adla -p 1

After running mode 1, a file named face_feature_lib is generated in vim4_npu_applications/face_recognition. Once this file exists, you can run mode 2.

# Run mode 2
$ sudo ./face_recognition -M ../data/model/retinaface_int8.adla -m ../data/model/facenet_int8.adla -p ../data/img/lin_2.jpg

Camera input demo

Put retinaface_int8.adla and facenet_int8.adla in vim4_npu_applications/face_recognition_cap/data/model.

Put your full-face photos into vim4_npu_applications/face_recognition/data/img and run mode 1 there to generate face_feature_lib. Then copy face_feature_lib into vim4_npu_applications/face_recognition_cap.

# Compile
$ cd vim4_npu_applications/face_recognition_cap
$ mkdir build
$ cd build
$ cmake ..
$ make
 
# Run
$ sudo ./face_recognition_cap -M ../data/model/retinaface_int8.adla -m ../data/model/facenet_int8.adla -d 0

The value after -d is the camera device index; 0 corresponds to /dev/video0.
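If you are unsure which index to pass to -d, you can list the V4L2 device nodes. A small Python sketch, assuming the camera appears as /dev/videoN:

```python
import glob
import re

def video_indices(paths):
    # Extract the numeric index N from each /dev/videoN path, sorted ascending.
    return sorted(int(m.group(1)) for p in paths
                  if (m := re.fullmatch(r"/dev/video(\d+)", p)))

if __name__ == "__main__":
    # Candidate indices to try with the -d option.
    print(video_indices(glob.glob("/dev/video*")))
```

Note that not every /dev/videoN node is a capture device (some are metadata nodes); `v4l2-ctl --list-devices` from the v4l-utils package gives a more detailed view.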