~~tag> NPU YOLO KSNN VIM3 ~~
  
====== YOLOv8n KSNN Demo ======
  
{{indexmenu_n>2}}

===== Introduction =====

YOLOv8n is an object detection model. It draws a bounding box around each object it detects in an image.

Inference results on VIM3:

{{:products:sbc:vim3:npu:ksnn:demos:yolov8n-ksnn-result.jpg?800|}}

**Inference speed test**: about **182 ms** per frame with a USB camera, and about **156 ms** per frame with a MIPI camera.
  
===== Train the model =====
Download the official YOLOv8 code from ultralytics:

```shell
$ git clone https://github.com/ultralytics/ultralytics
```
  
Refer to ''README.md'' to create and train a YOLOv8n model. The versions used here are ''torch==1.10.1'' and ''ultralytics==8.0.86''.
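
For reference, a minimal training sketch using the ultralytics Python API (the dataset YAML, epoch count, and image size below are placeholders, not values from this guide):

```python
# train.py - minimal YOLOv8n training sketch (dataset/epochs/imgsz are placeholders, adjust to your project)
from ultralytics import YOLO

model = YOLO("yolov8n.yaml")        # build a fresh YOLOv8n model
# model = YOLO("yolov8n.pt")        # or start from the pretrained weights
model.train(data="coco128.yaml", epochs=100, imgsz=640)
# trained weights are saved under runs/detect/train*/weights/ by default
```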
  
===== Convert the model =====
  
```shell
$ git lfs install
$ git lfs clone https://github.com/khadas/aml_npu_sdk
```
  
After training the model, modify ''ultralytics/ultralytics/nn/modules/head.py'' as follows.
  
```diff head.py
diff --git a/ultralytics/nn/modules/head.py b/ultralytics/nn/modules/head.py
index 0b02eb3..0a6e43a 100644
+
```

<WRAP important>
If you installed the ''ultralytics'' package with pip, modify ''head.py'' inside the installed package.
</WRAP>
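
To find where the pip-installed package lives (and therefore which ''head.py'' to edit), one quick check is:

```python
# Print the installed ultralytics package directory; edit nn/modules/head.py under this path
import ultralytics
print(ultralytics.__path__[0])
```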
  
Create a Python file as follows to export the ONNX model.
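
A minimal ''export.py'' sketch, assuming the ultralytics Python API and that your trained weights are at ''./runs/detect/train/weights/best.pt'' (adjust the path to your own run):

```python
# export.py - export the trained YOLOv8n model to ONNX
from ultralytics import YOLO

model = YOLO("./runs/detect/train/weights/best.pt")   # path is an assumption, point it at your weights
model.export(format="onnx")                           # writes best.onnx next to the weights
# rename/copy the result to yolov8n.onnx for the convert step below
```
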
```shell
$ python export.py
```

<WRAP important>
Use [[https://netron.app/ | Netron]] to check that your model output looks like this. If it does not, check your ''head.py'' modification.

{{:products:sbc:vim3:npu:ksnn:yolov8n-vim3-ksnn-output.png?600|}}
</WRAP>
  
Enter ''aml_npu_sdk/acuity-toolkit/python'' and run the following command.
  
```shell
# uint8
$ ./convert --model-name yolov8n \
            --platform onnx \
            --model yolov8n.onnx \
            --mean-values '0 0 0 0.00392156' \
            --quantized-dtype asymmetric_affine \
            --source-files ./data/dataset/dataset0.txt \
            --batch-size 1 \
            --iterations 1 \
            --kboard VIM3 --print-level 0
```
  
<WRAP important>
Currently KSNN only supports ''batch-size'' = 1.
</WRAP>

If you want to use more quantization images, modify ''batch-size'' and ''iterations'': ''batch-size'' × ''iterations'' = the number of quantization images. Between 200 and 500 quantization images is recommended.
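
''./data/dataset/dataset0.txt'' is expected to list the quantization images, one path per line. A small helper sketch (the image folder and the count of 300 are placeholders):

```python
# make_dataset.py - write one image path per line into dataset0.txt
from pathlib import Path

images = sorted(Path("quant_images").glob("*.jpg"))[:300]   # with --batch-size 1, set --iterations 300
Path("data/dataset/dataset0.txt").write_text("\n".join(str(p) for p in images) + "\n")
```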

If you use ''VIM3L'', replace ''VIM3'' with ''VIM3L''.
  
If the conversion succeeds, the converted model and library are generated in ''outputs/yolov8n''.

<WRAP important>
If your YOLOv8n model performs poorly on the board, try quantizing the model to int8 or int16.
```shell
# int8
$ ./convert --model-name yolov8n \
            --platform onnx \
            --model yolov8n.onnx \
            --mean-values '0 0 0 0.00392156' \
            --quantized-dtype dynamic_fixed_point \
            --qtype int8 \
            --source-files ./data/dataset/dataset0.txt \
            --batch-size 1 \
            --iterations 1 \
            --kboard VIM3 --print-level 0

# int16
$ ./convert --model-name yolov8n \
            --platform onnx \
            --model yolov8n.onnx \
            --mean-values '0 0 0 0.00392156' \
            --quantized-dtype dynamic_fixed_point \
            --qtype int16 \
            --source-files ./data/dataset/dataset0.txt \
            --batch-size 1 \
            --iterations 1 \
            --kboard VIM3 --print-level 0
```
</WRAP>
  
===== Run inference on the NPU by KSNN =====
  
Put ''yolov8n.nb'' and ''libnn_yolov8n.so'' into ''ksnn/examples/yolov8n/models/VIM3'' and ''ksnn/examples/yolov8n/libs''.

If your model does not have 80 classes, remember to modify the ''LISTSIZE'' parameter:

```shell
LISTSIZE = number of classes + 64
```
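
For the standard YOLOv8 head, the 64 comes from the DFL box regression channels (4 sides × 16 bins), so, for example, a custom 20-class model would use:

```python
# LISTSIZE = 64 DFL box channels + one score channel per class
NUM_CLASSES = 20               # example: a custom 20-class model (COCO's 80 classes give 80 + 64 = 144)
LISTSIZE = NUM_CLASSES + 64    # -> 84
```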
  
==== Picture input demo ====
  
==== Camera input demo ====

For a USB camera:

```shell
# usb
$ cd ksnn/examples/yolov8n
$ python3 yolov8n-cap.py --model ./models/VIM3/yolov8n_uint8.nb --library ./libs/libnn_yolov8n_uint8.so --type usb --device 0
```

For a MIPI camera, OpenCV installed via **pip install** does not support GStreamer, so you need to install OpenCV via **sudo apt install** instead.
  
```shell
# mipi
$ pip3 uninstall opencv-python numpy
$ sudo apt install python3-opencv
$ pip3 install numpy==1.23
$ cd ksnn/examples/yolov8n
$ python3 yolov8n-cap.py --model ./models/VIM3/yolov8n_uint8.nb --library ./libs/libnn_yolov8n_uint8.so --type mipi --device 50
```
  
''0'' and ''50'' are the camera device indices.
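
If you are unsure which index to pass, the video device nodes on the board typically map directly to it (''/dev/video0'' → ''--device 0''). A quick check:

```python
# List video device nodes; the trailing number is typically the --device index
from pathlib import Path

print(sorted(Path("/dev").glob("video*")))
```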