~~tag> NPU FaceNet VIM4 PyTorch~~

====== FaceNet PyTorch VIM4 Demo - 6 ======

{{indexmenu_n>6}}

===== Get Source Code =====

[[gh>bubbliiiing/facenet-pytorch]]

```shell
$ git clone https://github.com/bubbliiiing/facenet-pytorch
```

===== Convert Model =====

==== Build virtual environment ====

Follow the official Docker documentation to install Docker: [[https://docs.docker.com/engine/install/ubuntu/|Install Docker Engine on Ubuntu]]. Then fetch the prebuilt NPU Docker container and run it.

```shell
$ docker pull yanwyb/npu:v1
$ docker run -it --name vim4-npu1 -v $(pwd):/home/khadas/npu \
    -v /etc/localtime:/etc/localtime:ro \
    -v /etc/timezone:/etc/timezone:ro \
    yanwyb/npu:v1
```

==== Get conversion tool ====

Download the conversion tool from [[gl>khadas/vim4_npu_sdk]].

```shell
$ git clone https://gitlab.com/khadas/vim4_npu_sdk
```

After training your model, modify ''facenet-pytorch/nets/facenet.py'' as follows to drop the L2 normalization from the predict path.

```diff
diff --git a/nets/facenet.py b/nets/facenet.py
index e7a6fcd..93a81f1 100644
--- a/nets/facenet.py
+++ b/nets/facenet.py
@@ -75,7 +75,6 @@ class Facenet(nn.Module):
             x = self.Dropout(x)
             x = self.Bottleneck(x)
             x = self.last_bn(x)
-            x = F.normalize(x, p=2, dim=1)
             return x
         x = self.backbone(x)
         x = self.avg(x)
```

Create a Python file with the following content and run it to convert the model to ONNX.

```python export.py
import torch
from nets.facenet import Facenet as facenet

model_path = "logs/ep092-loss0.177-val_loss1.547.pth"

net = facenet(backbone="mobilenet", mode="predict").eval()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net.load_state_dict(torch.load(model_path, map_location=device), strict=False)

img = torch.zeros(1, 3, 160, 160)
torch.onnx.export(net, img, "./facenet.onnx", verbose=False, opset_version=12, input_names=['images'])
```

Enter ''vim4_npu_sdk/demo'' and modify ''convert_adla.sh'' as follows.

```bash convert_adla.sh
#!/bin/bash

ACUITY_PATH=../bin/
#ACUITY_PATH=../python/tvm/

adla_convert=${ACUITY_PATH}adla_convert
if [ ! -e "$adla_convert" ]; then
    adla_convert=${ACUITY_PATH}adla_convert.py
fi

$adla_convert --model-type onnx \
    --model ./model_source/facenet/facenet.onnx \
    --inputs "images" \
    --input-shapes "3,160,160" \
    --dtypes "float32" \
    --inference-input-type float32 \
    --inference-output-type float32 \
    --quantize-dtype int8 --outdir onnx_output \
    --channel-mean-value "0,0,0,255" \
    --source-file facenet_dataset.txt \
    --iterations 394 \
    --disable-per-channel False \
    --batch-size 1 --target-platform PRODUCT_PID0XA003
```

Run ''convert_adla.sh'' to generate the VIM4 model. The converted model is ''xxx.adla'' in ''onnx_output''.

```shell
$ bash convert_adla.sh
```

===== Run inference on the NPU =====

==== Get source code ====

Clone the source code from our [[gh>khadas/vim4_npu_applications]].

```shell
$ git clone https://github.com/khadas/vim4_npu_applications
```

==== Install dependencies ====

```shell
$ sudo apt update
$ sudo apt install libopencv-dev python3-opencv cmake
```

==== Compile and run ====

=== Picture input demo ===

This demo has two modes. Mode 1 converts face images into feature vectors and saves them in a face library. Mode 2 compares an input face image against the faces in the library and outputs the Euclidean distance and cosine similarity.
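For reference, those two metrics are computed on the embedding vectors the network produces (128-dimensional with this repo's default settings). Below is a minimal NumPy sketch of both; the function names and random embeddings are illustrative only, not taken from the demo's C++ source.

```python
import numpy as np

def euclidean_distance(a, b):
    # L2 distance between two embeddings: smaller means the faces are more alike
    return float(np.linalg.norm(a - b))

def cosine_similarity(a, b):
    # Cosine of the angle between two embeddings: closer to 1.0 means more alike
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two made-up 128-dimensional embeddings stand in for real model output
emb_a = np.random.rand(128).astype(np.float32)
emb_b = np.random.rand(128).astype(np.float32)
print(euclidean_distance(emb_a, emb_b), cosine_similarity(emb_a, emb_b))
```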
Put ''facenet_int8.adla'' in ''vim4_npu_applications/facenet/data/''.

```shell
# Compile
$ cd vim4_npu_applications/facenet
$ mkdir build
$ cd build
$ cmake ..
$ make

# Run mode 1
$ sudo ./facenet -m ../data/facenet_int8.adla -p 1
```

After running mode 1, a file named ''face_feature_lib'' will be generated in ''vim4_npu_applications/facenet''. Once this file exists, you can run mode 2.

```shell
# Run mode 2
$ sudo ./facenet -m ../data/facenet_int8.adla -p ../data/img/lin_2.jpg
```
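If you want a host-side reference to sanity-check the on-device embeddings, you can run the ''facenet.onnx'' exported earlier with onnxruntime on your PC. This is a minimal sketch, assuming ''onnxruntime'' and ''opencv-python'' are installed; it mirrors the preprocessing implied by the conversion options (160×160 RGB input scaled by 1/255, per ''--channel-mean-value "0,0,0,255"''), and the image path is a placeholder.

```python
import cv2
import numpy as np
import onnxruntime as ort

# Preprocess the way the conversion settings imply:
# RGB order, 160x160, values scaled to [0, 1], NCHW layout.
img = cv2.imread("face.jpg")  # placeholder path
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (160, 160)).astype(np.float32) / 255.0
x = np.transpose(img, (2, 0, 1))[np.newaxis, ...]  # shape (1, 3, 160, 160)

# "images" is the input name set in export.py above
sess = ort.InferenceSession("facenet.onnx")
embedding = sess.run(None, {"images": x})[0]
print(embedding.shape)
```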