products:sbc:vim4:npu:demos:retinaface (last modified 2025/01/08 22:29 by louis)
~~tag> NPU RetinaFace VIM4 PyTorch~~

**Doc for version ddk-3.4.7.7**
====== RetinaFace PyTorch VIM4 Demo - 5 ======
Follow Docker official documentation to install Docker: [[https://docs.docker.com/engine/install/|Install Docker Engine]].

Then fetch the prebuilt NPU Docker image:

```shell
docker pull numbqq/npu-vim4
```
==== Get Convert Tool ====
Download the tool from [[gh>khadas/vim4_npu_sdk]]:
```shell
$ git lfs install
$ git lfs clone https://github.com/khadas/vim4_npu_sdk
$ cd vim4_npu_sdk
$ ls
adla-toolkit-binary ...
```
  * ''adla-toolkit-binary'' – the model conversion toolkit
<WRAP important>
If your kernel is older than 241129, please use branch ''npu-ddk-1.7.5.5''.
</WRAP>
==== Convert ====
After training the model, we should convert the PyTorch model into an ONNX model. Create the following export script and run it:
```python export.py
import torch
import numpy as np
from nets.retinaface import RetinaFace
from utils.config import cfg_mnet, cfg_re50
```

Edit the quantization parameters in the conversion script as follows:

```shell
--inputs "..." \
--input-shapes "..." \
--dtypes "..." \
--inference-input-type float32 \
--inference-output-type float32 \
--quantize-dtype int8 --outdir onnx_output \
--channel-mean-value "..." \
--source-file ./retinaface_dataset.txt \
--iterations 500 \
--disable-per-channel False \
--batch-size 1 --target-platform PRODUCT_PID0XA003
```
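The ''--source-file'' option above points to a plain-text list of calibration images, one path per line, which the quantizer walks through (''--iterations 500'' suggests roughly 500 entries). A minimal sketch for generating such a list; the directory and file names here are only examples:

```python
import os

def write_dataset_list(image_dir, out_path, limit=500):
    """Collect up to `limit` image paths from image_dir and write them one per line."""
    exts = (".jpg", ".jpeg", ".png")
    names = sorted(n for n in os.listdir(image_dir) if n.lower().endswith(exts))
    names = names[:limit]
    with open(out_path, "w") as f:
        for name in names:
            f.write(os.path.join(image_dir, name) + "\n")
    return len(names)
```

Any image format the toolkit accepts can be listed; the only requirement is one readable path per line.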
Run the conversion script and find the generated model in ''onnx_output''.
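''--quantize-dtype int8'' makes the toolkit store weights and activations as 8-bit integers, while ''--inference-input-type'' and ''--inference-output-type float32'' keep the model's interface in floating point. The toolkit's actual calibration algorithm is internal to the SDK; as a rough illustration only, symmetric per-tensor int8 quantization works like this:

```python
def quantize_int8(values):
    """Map floats to int8 with a single symmetric scale (max |x| maps to 127)."""
    peak = max(abs(v) for v in values) or 1.0
    scale = peak / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; the difference is the quantization error."""
    return [x * scale for x in q]
```

The calibration images listed in the source file are what let the quantizer observe realistic value ranges and pick good scales.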
```
<WRAP important>
If your kernel is older than 241129, please use the version before ''ddk-3.4.7.7''.
</WRAP>
# Run
$ ./...
```

{{:...}}
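The ''--channel-mean-value'' setting used during conversion bakes input normalization (per-channel mean subtraction and scaling) into the converted model, so the demo can feed raw pixels. Conceptually the normalization is as below; the mean and scale numbers in the example are placeholders, not the values elided above:

```python
def normalize_image(image, means, scale):
    """image: rows of pixels, each pixel a list of channel values.
    Subtract the per-channel mean, then multiply by a common scale factor."""
    return [[[(p - m) * scale for p, m in zip(pixel, means)]
             for pixel in row] for row in image]
```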
=== Camera input demo ===
Put the converted ''.adla'' model into the demo's ''data'' directory.

== Compile ==
```shell
$ cd vim4_npu_applications/...
$ mkdir build
$ cd build
$ cmake ..
$ make
```

== Run ==
**MIPI Camera**

```
$ ./...
```

**USB Camera**

```
$ cd build
$ ./...
```
**TIP**: Replace the example paths and parameters with the ones for your own model and camera.