~~tag> NPU YOLO OpenCV VIM4 ~~

**Doc for version ddk-3.4.7.7**

====== YOLOv8n-Pose OpenCV VIM4 Demo - 8 ======

{{indexmenu_n>8}}

===== Introduction =====

YOLOv8n-Pose inherits the powerful object detection backbone and neck of YOLOv8n and extends the standard detection model with dedicated pose-estimation layers in its head. As a result it not only detects people (bounding boxes) but also simultaneously predicts the spatial positions (keypoints) of their anatomical joints, e.g. shoulders, elbows, knees, and ankles.

Inference results on VIM4:

{{:

**Inference speed test**: USB camera, about **90 ms** per frame (roughly 11 FPS).

===== Get Source Code =====

Download the official YOLOv8 code from [[gh>ultralytics/ultralytics]].

```shell
$ git clone https://github.com/ultralytics/ultralytics
```
+ | |||
+ | Refer '' | ||
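
For reference, training with the ultralytics Python API typically looks like the sketch below; ''coco8-pose.yaml'' is the small example dataset shipped with ultralytics, so substitute your own dataset YAML and training settings (the values here are illustrative, not taken from this guide):

```python
from ultralytics import YOLO

# Start from the pretrained pose weights and fine-tune on a pose dataset.
model = YOLO("yolov8n-pose.pt")
model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
```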
+ | |||
+ | ===== Convert Model ===== | ||
+ | |||
+ | ==== Build virtual environment ==== | ||
+ | |||

Follow the official Docker documentation to install Docker: [[https://docs.docker.com/engine/install/ubuntu/|Install Docker Engine on Ubuntu]].

Pull the model-conversion Docker image with the command below:

```shell
docker pull numbqq/npu-vim4
```
+ | |||
+ | ==== Get Model Conversion Tools ==== | ||
+ | |||
+ | Get source [[gh> | ||
+ | |||
+ | ```shell | ||
+ | $ git lfs install | ||
+ | $ git lfs clone https:// | ||
+ | $ cd vim4_npu_sdk | ||
+ | $ ls | ||
+ | adla-toolkit-binary | ||
+ | ``` | ||
+ | |||
+ | * '' | ||
+ | * '' | ||
+ | * '' | ||
+ | |||

<WRAP important>
If your kernel is older than 241129, please use branch npu-ddk-1.7.5.5.
</WRAP>

==== Convert ====

After training your model, modify **class Detect** and **class Pose** in ''ultralytics/nn/modules/head.py'' as follows.

```diff head.py
diff --git a/ultralytics/nn/modules/head.py b/ultralytics/nn/modules/head.py
index 0b02eb3..0a6e43a 100644
--- a/ultralytics/nn/modules/head.py
+++ b/ultralytics/nn/modules/head.py
@@ -42,6 +42,9 @@ class Detect(nn.Module):
 
     def forward(self, x):
         """Concatenates and returns predicted bounding boxes and class probabilities."""
+        if torch.onnx.is_in_onnx_export():
+            return self.forward_export(x)
+
         shape = x[0].shape  # BCHW
         for i in range(self.nl):
             x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
@@ -80,6 +83,15 @@ class Detect(nn.Module):
 
+    def forward_export(self, x):
+        results = []
+        for i in range(self.nl):
+            dfl = self.cv2[i](x[i]).contiguous()
+            cls = self.cv3[i](x[i]).contiguous()
+            results.append(torch.cat([cls, dfl], 1))
+        return tuple(results)
+
@@ -255,6 +283,16 @@ class Pose(Detect):
     def forward(self, x):
         """Perform forward pass through YOLO model and return predictions."""
         bs = x[0].shape[0]  # batch size
-        kpt = torch.cat([self.cv4[i](x[i]).view(bs, self.nk, -1) for i in range(self.nl)], -1)  # (bs, 17*3, h*w)
+        if torch.onnx.is_in_onnx_export():
+            kpt = [self.cv4[i](x[i]) for i in range(self.nl)]
+        else:
+            kpt = torch.cat([self.cv4[i](x[i]).view(bs, self.nk, -1) for i in range(self.nl)], -1)  # (bs, 17*3, h*w)
         x = self.detect(self, x)
+
+        if torch.onnx.is_in_onnx_export():
+            output = []
+            for i in range(self.nl):
+                output.append(torch.cat([x[i], kpt[i]], 1))
+            return output
```
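
With these changes the exported graph returns the raw per-branch tensors and leaves box and keypoint decoding to the application. For intuition only, the DFL part of that decoding takes an expected value over the 16 distance bins; a hedged numpy sketch (not the demo's actual C++ post-processing):

```python
import numpy as np

def dfl_decode(dfl_logits, reg_max=16):
    """Decode one box side from YOLOv8's DFL logits (reg_max bins)."""
    probs = np.exp(dfl_logits - dfl_logits.max())
    probs /= probs.sum()
    # Expected value over the bin indices gives the distance in grid cells.
    return float((probs * np.arange(reg_max)).sum())

# Logits that strongly favour bin 4 decode to a distance of about 4 cells.
print(dfl_decode(np.array([0.0] * 4 + [8.0] + [0.0] * 11)))
```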
+ | |||
+ | <WRAP important> | ||
+ | If you pip-installed ultralytics package, you should modify in package. | ||
+ | </ | ||
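
If you are not sure where pip placed the package, you can locate it from Python:

```python
import ultralytics

# Prints the installed package's __init__.py path; head.py sits under nn/modules/ beside it.
print(ultralytics.__file__)
```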
+ | |||
+ | Create a python file written as follows to export ONNX model. | ||
+ | |||

```python export.py
from ultralytics import YOLO

# Load the trained pose weights and export them to ONNX
model = YOLO("yolov8n-pose.pt")
results = model.export(format="onnx")
```
+ | |||
+ | ```shell | ||
+ | $ python export.py | ||
+ | ``` | ||
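
As an optional sanity check, you can list the exported graph's inputs and outputs with the ''onnx'' Python package before converting (the file name below assumes the export step above produced ''yolov8n-pose.onnx''):

```python
import onnx

# With the head.py changes applied, the graph should expose one raw output per
# detection branch instead of the single post-processed output of the stock export.
model = onnx.load("yolov8n-pose.onnx")
onnx.checker.check_model(model)

for tensor in list(model.graph.input) + list(model.graph.output):
    dims = [d.dim_value for d in tensor.type.tensor_type.shape.dim]
    print(tensor.name, dims)
```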
+ | |||
+ | <WRAP important> | ||
+ | Use [[https:// | ||
+ | |||
+ | {{: | ||
+ | </ | ||
+ | |||
+ | Enter '' | ||
+ | |||

```sh convert_adla.sh
#!/bin/bash

ACUITY_PATH=../adla-toolkit-binary/python/
adla_convert=${ACUITY_PATH}adla_convert

if [ ! -e "$adla_convert" ]; then
    adla_convert=${ACUITY_PATH}adla_convert.py
fi

$adla_convert --model-type onnx \
        --model ./yolov8n-pose.onnx \
        --inputs "images" \
        --input-shapes "3,640,640" \
        --dtypes "float32" \
        --quantize-dtype int16 --outdir onnx_output \
        --channel-mean-value "0,0,0,255" \
        --inference-input-type "float32" \
        --inference-output-type "float32" \
        --source-file dataset.txt \
        --batch-size 1 --target-platform PRODUCT_PID0XA003
```
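
The ''--source-file dataset.txt'' option points the quantizer at a list of calibration images. A minimal sketch for generating such a file, assuming one image path per line and a local ''calib_images/'' directory (both are assumptions, not requirements stated in this guide):

```python
from pathlib import Path

# Collect a handful of representative images and write one path per line,
# which is the list the quantization step reads via --source-file.
paths = sorted(Path("calib_images").glob("*.jpg"))[:100]
Path("dataset.txt").write_text("\n".join(str(p) for p in paths) + "\n")
```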
+ | |||
+ | Run '' | ||
+ | |||
+ | ```shell | ||
+ | $ bash convert_adla.sh | ||
+ | ``` | ||
+ | |||
+ | ===== Run NPU ===== | ||
+ | |||
+ | ==== Get source code ==== | ||
+ | |||
+ | Clone the source code from our [[gh> | ||
+ | |||
+ | ```shell | ||
+ | $ git clone https:// | ||
+ | ``` | ||
+ | |||
+ | <WRAP important> | ||
+ | If your kernel is older than 241129, please use version before tag ddk-3.4.7.7. | ||
+ | </ | ||
+ | |||

==== Install dependencies ====

```shell
$ sudo apt update
$ sudo apt install libopencv-dev python3-opencv cmake
```

==== Compile and run ====

=== Picture input demo ===

Put the converted ''.adla'' model (e.g. ''yolov8n-pose_int16.adla'' from ''onnx_output'') into ''vim4_npu_applications/yolov8n_pose/data/''.

```shell
# Compile
$ cd vim4_npu_applications/yolov8n_pose
$ mkdir build
$ cd build
$ cmake ..
$ make

# Run (replace the model and picture paths with your own)
$ ./yolov8n_pose -m ../data/yolov8n-pose_int16.adla -p <picture-path>
```
+ | |||
+ | === Camera input demo === | ||
+ | |||
+ | Put '' | ||
+ | |||
+ | ```shell | ||
+ | # Compile | ||
+ | $ cd vim4_npu_applications/ | ||
+ | $ mkdir build | ||
+ | $ cd build | ||
+ | $ cmake .. | ||
+ | $ make | ||
+ | |||
+ | # Run | ||
+ | $ ./ | ||
+ | ``` | ||
+ | |||
+ | '' | ||
+ | |||