products:sbc:vim4:npu:demos:facenet [2023/09/15 03:01] sravan [Facenet Pytorch VIM4 Demo - 6]
products:sbc:vim4:npu:demos:facenet [2026/04/02 02:47] (current) nick
~~tag> NPU FaceNet~~

**Doc for version ddk-3.4.7.7**

====== FaceNet PyTorch VIM4 Demo ======

{{indexmenu_n>6}}

===== Introduction =====

FaceNet is a face recognition model. It converts a face image into a feature vector (an embedding). To recognize a face, the embedding of the input image is compared against the embeddings stored in a face database. Two judgment metrics are used: cosine similarity and Euclidean distance. The closer the cosine similarity is to 1, and the closer the Euclidean distance is to 0, the more similar the two faces are.

Take **lin_1.jpg** as an example. Inference results on VIM4:

{{:
===== Get Source Code =====

[[gh>

```shell
$ git clone https://
```
==== Build virtual environment ====

Follow the Docker official documentation to install Docker, then use the command below to get the conversion image:

```shell
docker pull numbqq/npu-vim4
```
==== Get Convert Tool ====

You can find the SDK here: [[dl>products/

```shell
$ wget https://dl.khadas.com/products/
$ tar xvzf vim4_npu_sdk-ddk-3.4.7.7-250508.tgz
$ cd vim4_npu_sdk-ddk-3.4.7.7-250508
$ ls
adla-toolkit-binary
```

  * ''
  * ''
  * ''
  * ''

<WRAP important>
If your kernel is older than 241129, please use branch npu-ddk-1.7.5.5.
</WRAP>
==== Convert ====

After training the model, modify ''
```

Create a Python file ''export.py'' like the following:

```python export.py
```
Enter ''

```bash convert_adla.sh
#!/bin/bash
--dtypes "
--inference-input-type float32 \
--inference-output-type float32 \
--quantize-dtype int8 --outdir onnx_output \
--channel-mean-value "
```
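As a rough guide, ''--channel-mean-value "m1 m2 m3 scale"'' describes the input normalization baked into the converted model: subtract the per-channel mean, then multiply by the scale. The NumPy sketch below illustrates this interpretation; the 127.5 and 1/128 defaults are placeholders, not the script's actual (truncated) settings:

```python
import numpy as np

def normalize_input(img, means=(127.5, 127.5, 127.5), scale=1.0 / 128):
    """Normalization implied by --channel-mean-value "m1 m2 m3 scale":
    subtract each channel's mean, then multiply by the scale factor.
    The default values here are illustrative placeholders only."""
    img = np.asarray(img, dtype=np.float32)
    return (img - np.asarray(means, dtype=np.float32)) * scale
```

An all-127.5 image maps to all zeros under these placeholder values, which is the usual goal: centering the input range around zero before int8 quantization.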
Run ''

```shell
```
===== Run inference on the NPU =====

==== Get source code ====
```shell
$ git clone https://
```

<WRAP important>
If your kernel is older than 241129, please use a version before tag ddk-3.4.7.7.
</WRAP>
==== Install dependencies ====
=== Picture input demo ===

There are two modes in this demo. One converts face images into feature vectors and saves the vectors in the face library. The other compares an input face image with the faces in the library and outputs the Euclidean distance and cosine similarity.

Put ''
```shell
# Compile
$ cd vim4_npu_applications/
$ mkdir build
$ cd build
# Run mode 1
$ ./facenet -m ../
```
After running mode 1, a file named ''
```shell
# Run mode 2
$ ./facenet -m ../data/model/
```
Here are two comparison methods: **Euclidean distance** and **cosine similarity**.

The smaller the **Euclidean distance**, the more similar the two faces.

The closer the **cosine similarity** is to 1, the more similar the two faces.
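Mode 2's comparison step amounts to a nearest-neighbor search over the face library. The helper below is a hypothetical Python sketch of that logic, not the demo's actual C++ code; the names and vectors in the example are made up:

```python
import numpy as np

def match_face(embedding, library):
    """Return (name, cosine_similarity, euclidean_distance) of the library
    entry closest to the input embedding, ranked by Euclidean distance.
    `library` maps a person's name to a stored embedding."""
    embedding = np.asarray(embedding, dtype=np.float64)
    best = None
    for name, vec in library.items():
        vec = np.asarray(vec, dtype=np.float64)
        # Both metrics are computed; the smallest distance wins
        cos = float(np.dot(embedding, vec) /
                    (np.linalg.norm(embedding) * np.linalg.norm(vec)))
        dist = float(np.linalg.norm(embedding - vec))
        if best is None or dist < best[2]:
            best = (name, cos, dist)
    return best
```

For example, an input embedding close to a stored entry returns that entry's name with a distance near 0 and a similarity near 1.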