Facenet Pytorch VIM4 Demo - 6

Get Source Code

$ git clone https://github.com/bubbliiiing/facenet-pytorch.git

Convert Model

Build virtual environment

Follow the official Docker documentation to install Docker: Install Docker Engine on Ubuntu.

Get the Docker image.

$ docker pull yanwyb/npu:v1
$ docker run -it --name vim4-npu1 -v $(pwd):/home/khadas/npu \
				-v /etc/localtime:/etc/localtime:ro \
				-v /etc/timezone:/etc/timezone:ro \
				yanwyb/npu:v1
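
If you exit the container and want to return to it later, you can restart and attach to it by the name used above (vim4-npu1). This is a general Docker workflow, assuming the image provides bash.

$ docker start vim4-npu1
$ docker exec -it vim4-npu1 /bin/bash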

Get conversion tool

Download the conversion tool from the Khadas GitLab repository.

$ git clone https://gitlab.com/khadas/vim4_npu_sdk.git

After training the model, modify facenet-pytorch/nets/facenet.py as follows.

diff --git a/nets/facenet.py b/nets/facenet.py
index e7a6fcd..93a81f1 100644
--- a/nets/facenet.py
+++ b/nets/facenet.py
@@ -75,7 +75,7 @@ class Facenet(nn.Module):
             x = self.Dropout(x)
             x = self.Bottleneck(x)
             x = self.last_bn(x)
-            x = F.normalize(x, p=2, dim=1)
             return x
         x = self.backbone(x)
         x = self.avg(x)

Create a Python file with the following contents and run it to convert the model to ONNX.

export.py
import torch
import numpy as np
from nets.facenet import Facenet as facenet

# Path to the trained weights produced by the training step
model_path = "logs/ep092-loss0.177-val_loss1.547.pth"
net = facenet(backbone="mobilenet", mode="predict").eval()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net.load_state_dict(torch.load(model_path, map_location=device), strict=False)

# Dummy input matching the model's 1x3x160x160 input, then export to ONNX
img = torch.zeros(1, 3, 160, 160)
torch.onnx.export(net, img, "./facenet.onnx", verbose=False, opset_version=12, input_names=['images'])
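
Optionally, you can sanity-check the exported model before conversion. This is a minimal sketch assuming onnxruntime is installed (pip install onnxruntime); the file name check_onnx.py is only illustrative, and it simply confirms that the graph loads and produces an embedding.

check_onnx.py
import numpy as np
import onnxruntime as ort

# Load the exported model and run one dummy inference
sess = ort.InferenceSession("./facenet.onnx")
dummy = np.zeros((1, 3, 160, 160), dtype=np.float32)
out = sess.run(None, {"images": dummy})[0]
print(out.shape)  # the raw feature vector, e.g. (1, 128) with the default backbone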

Enter vim4_npu_sdk/demo and modify convert_adla.sh as follows.

convert_adla.sh
#!/bin/bash
 
ACUITY_PATH=../bin/
#ACUITY_PATH=../python/tvm/
adla_convert=${ACUITY_PATH}adla_convert
 
 
if [ ! -e "$adla_convert" ]; then
    adla_convert=${ACUITY_PATH}adla_convert.py
fi
 
$adla_convert --model-type onnx \
        --model ./model_source/facenet/facenet.onnx \
        --inputs "images" \
        --input-shapes "3,160,160" \
        --dtypes "float32" \
        --inference-input-type float32 \
        --inference-output-type float32 \
        --quantize-dtype int8 --outdir onnx_output \
        --channel-mean-value "0,0,0,255" \
        --source-file facenet_dataset.txt \
        --iterations 394 \
        --disable-per-channel False \
        --batch-size 1 --target-platform PRODUCT_PID0XA003
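
The --source-file option above points to the list of images used for int8 quantization calibration. A typical facenet_dataset.txt is a plain-text file with one calibration image path per line; the paths below are only illustrative.

facenet_dataset.txt
./calib_images/face_0001.jpg
./calib_images/face_0002.jpg
./calib_images/face_0003.jpg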

Run convert_adla.sh to generate the VIM4 model. The converted model, xxx.adla, is saved in onnx_output.

$ bash convert_adla.sh

Run NPU

Get source code

Clone the source code from our khadas/vim4_npu_applications repository.

$ git clone https://github.com/khadas/vim4_npu_applications.git

Install dependencies

$ sudo apt update
$ sudo apt install libopencv-dev python3-opencv cmake

Compile and run

Picture input demo

This demo has two modes. One converts face images into feature vectors and saves them in a face library. The other compares an input face image with the faces in the library and outputs the Euclidean distance and cosine similarity.
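
For reference, the two metrics the demo reports can be computed from a pair of feature vectors as in the NumPy sketch below. This is only an illustration; the random vectors stand in for real facenet embeddings, and the file name is hypothetical.

compare_features.py
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    # L2 distance between two embeddings
    return float(np.linalg.norm(a - b))

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Dot product of the embeddings normalized by their magnitudes
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Random stand-ins for two real 128-d facenet feature vectors
rng = np.random.default_rng(0)
feat_a = rng.standard_normal(128).astype(np.float32)
feat_b = rng.standard_normal(128).astype(np.float32)
print(euclidean_distance(feat_a, feat_b), cosine_similarity(feat_a, feat_b))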

Put facenet_int8.adla in vim4_npu_applications/facenet/data/.

# Compile
$ cd vim4_npu_applications/facenet
$ mkdir build
$ cd build
$ cmake ..
$ make
 
# Run mode 1
$ sudo ./facenet -m ../data/facenet_int8.adla -p 1

After running mode 1, a file named face_feature_lib will be generated in vim4_npu_applications/facenet. Once this file exists, you can run mode 2.

# Run mode 2
$ sudo ./facenet -m ../data/facenet_int8.adla -p ../data/img/lin_2.jpg