~~tag> NPU RetinaFace Edge2 PyTorch~~
====== RetinaFace PyTorch Edge2 Demo - 5 ======
{{indexmenu_n>

===== Introduction =====

RetinaFace is a face detection model. Besides a bounding box, it predicts five key points for each face: the two eyes, the tip of the nose, and the two corners of the mouth.

Inference results on Edge2:

{{:

**Inference speed test**: about **39 ms** per frame with a USB camera, and about **33 ms** per frame with a MIPI camera.
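To illustrate what those five landmarks look like in code — the coordinates below are hypothetical sample values, not real model output — a detection result can be marked on an image roughly like this:

```python
import numpy as np

# Hypothetical output for one detected face: the five RetinaFace
# landmarks (left eye, right eye, nose tip, left and right mouth
# corners) as (x, y) pixel coordinates.
landmarks = [(120, 140), (180, 140), (150, 170), (130, 200), (170, 200)]

# Paint a small green square at each landmark on a blank BGR canvas.
canvas = np.zeros((256, 256, 3), dtype=np.uint8)
for x, y in landmarks:
    canvas[y - 2:y + 3, x - 2:x + 3] = (0, 255, 0)
```

In the real demo the drawing is done on the camera frame by the inference application; this sketch only shows the shape of the landmark data.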

===== Train Model =====

The code we use: [[gh>
```shell
git clone https://
```

Before training, modify '' as follows:

```diff
diff --git a/
index 87bb528..4a22f2a 100644
--- a/
+++ b/
@@ -25,5 +25,6 @@ def get_lr(optimizer):

 def preprocess_input(image):
-    image -= np.array((104,
+    image = image / 255.0
```
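The effect of this patch can be sketched as follows: instead of subtracting a per-channel mean (the truncated `np.array((104,` line), input pixels are simply scaled to the range [0, 1]. A minimal, self-contained version:

```python
import numpy as np

def preprocess_input(image):
    # After the patch above: normalize pixel values to [0, 1]
    # rather than subtracting a per-channel mean.
    return image / 255.0

# Example: a dummy all-white frame maps to all ones.
frame = np.full((4, 4, 3), 255, dtype=np.uint8)
out = preprocess_input(frame)
```

This matters because the preprocessing baked into the exported model must match what the inference application on the Edge2 does with camera frames.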
==== Convert ====
- | |||
- | Before training, modify '' | ||
- | |||
- | ```diff | ||
- | diff --git a/ | ||
- | index 87bb528..4a22f2a 100644 | ||
- | --- a/ | ||
- | +++ b/ | ||
- | @@ -25,5 +25,6 @@ def get_lr(optimizer): | ||
- | | ||
- | |||
- | def preprocess_input(image): | ||
- | - image -= np.array((104, | ||
- | + image = image / 255.0 | ||
- | | ||
- | ``` | ||
After training, we convert the PyTorch model to an ONNX model. Create a Python file with the following contents and run it.
==== Get source code ====

Clone the source code from our [[gh>

```shell
$ bash build.sh

# Run USB camera
$ cd install/
$ ./

# Run MIPI camera
$ cd install/
$ ./
```
<WRAP info>
''
</WRAP>