~~tag> NPU YOLO OpenCV VIM4 ~~

**Doc for version ddk-3.4.7.7**

====== YOLOv7-tiny VIM4 Demo - 1 ======
{{indexmenu_n>

===== Introduction =====

YOLOv7-Tiny is an object detection model. It uses bounding boxes to precisely localize each object in an image.

Inference results on VIM4:

{{:

**Inference speed test**: about **126 ms** per frame with a USB camera.

===== Train the model =====

Follow the official Docker documentation to install Docker: [[https://

Then fetch the prebuilt NPU Docker image:
```shell
docker pull numbqq/npu-vim4
```
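If you want to work inside the container interactively, here is a minimal sketch; the ''--rm'' flag, mount path, and working directory are illustrative assumptions, not part of the original guide:

```shell
# Start the image interactively and mount the current directory into it
# (/workspace is an assumed path for illustration)
$ docker run -it --rm -v "$(pwd)":/workspace -w /workspace numbqq/npu-vim4
```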

==== Get the conversion tool ====

Download the conversion tool from [[gh>khadas/vim4_npu_sdk]]:
```shell
$ git lfs install
$ git lfs clone https://github.com/khadas/vim4_npu_sdk
$ cd vim4_npu_sdk
$ ls
adla-toolkit-binary
```

  * ''
  * ''
  * ''

<WRAP important>
If your kernel is older than 241129, please use branch ''npu-ddk-1.7.5.5''.
</WRAP>
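To decide whether you need the legacy branch, check the kernel build information first. A minimal sketch, assuming the kernel build date stamp is visible in the ''uname'' output:

```shell
# Print kernel version and build information
$ uname -a
# If the build is older than 241129, switch the SDK to the legacy branch
$ git checkout npu-ddk-1.7.5.5
```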

==== Convert ====

```python
if self.grid[i].shape[2:4] != x[i].shape[2:4]:
```

<WRAP important>
''yolo.py'' defines several ''forward'' functions. The right place to modify is the ''fuseforward'' function of class **IDetect**.
</WRAP>
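Because there are several ''forward'' implementations, it is easy to edit the wrong one. A quick way to locate the right spot, assuming the standard YOLOv7 repository layout with ''models/yolo.py'':

```shell
# Find the IDetect class and its fuseforward method
$ grep -n "class IDetect" models/yolo.py
$ grep -n "def fuseforward" models/yolo.py
```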

Then, run ''
```shell
--inputs "
--input-shapes
--dtypes "
--quantize-dtype int8 --outdir onnx_output
--channel-mean-value "
--inference-input-type "
--inference-output-type "
--source-file dataset.txt
--batch-size 1 --target-platform PRODUCT_PID0XA003
```
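''--source-file dataset.txt'' points the quantizer at a set of sample images. As a sketch of what such a file typically contains, assuming the common one-image-path-per-line layout (the paths below are placeholders):

```shell
# Each line of dataset.txt is a path to one quantization image
$ cat dataset.txt
./data/0001.jpg
./data/0002.jpg
./data/0003.jpg
```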

Run ''
```shell
$ git clone https://
```

<WRAP important>
If your kernel is older than 241129, please use the version before
</WRAP>
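To pin the demo repository to an older state, a minimal sketch; it assumes the clone above created a ''vim4_npu_applications'' directory, and ''<commit-hash>'' is a placeholder you need to pick from the project history yourself:

```shell
# List recent commits and check out one that predates the ddk-3.4.7.7 update
$ cd vim4_npu_applications
$ git log --oneline
$ git checkout <commit-hash>
```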
```shell
# Run
$ ./
```
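The run command above is truncated. A typical invocation might look like the sketch below; the binary name, flags, and file paths are all assumptions, so check the demo's README for the real usage:

```shell
# Hypothetical example: run the picture demo on a converted .adla model
$ ./yolov7_tiny_demo_picture -m ./yolov7_tiny_int8.adla -p ./test.jpg
```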
```shell
# Run
$ ./
```
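Likewise for the camera run, a hedged sketch; the binary name, flags, and device node are assumptions:

```shell
# Hypothetical example: run the USB camera demo on /dev/video0
$ ./yolov7_tiny_demo_usb -m ./yolov7_tiny_int8.adla -d /dev/video0
```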