Download the official YOLOv7 code from WongKinYiu/yolov7.
$ git clone https://github.com/WongKinYiu/yolov7
Refer to its README.md to create and train a YOLOv7-tiny model.
$ git lfs install
$ git lfs clone --recursive https://github.com/khadas/aml_npu_sdk.git
The KSNN conversion tool is under acuity-toolkit/python.
$ cd aml_npu_sdk/acuity-toolkit/python && ls
convert  data  outputs
After training the model, modify yolov7/models/yolo.py as follows.
diff --git a/models/yolo.py b/models/yolo.py
index 95a019c..a2e611d 100644
--- a/models/yolo.py
+++ b/models/yolo.py
@@ -144,7 +144,7 @@ class IDetect(nn.Module):
             x[i] = self.m[i](x[i])  # conv
             bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
-            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
+            # x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
 
             if not self.training:  # inference
                 if self.grid[i].shape[2:4] != x[i].shape[2:4]:
yolo.py contains several forward functions; the right place to make this change is the fuseforward function of the IDetect class.
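The commented-out line performed the anchor reshape on-device; with it removed, the exported model emits the raw convolution output of shape (bs, na×no, ny, nx), and the reshape must happen on the host after inference. A minimal NumPy sketch of that host-side reshape (shapes assume the default 80-class model with 3 anchors per grid cell):

```python
import numpy as np

# Raw detection-head output as exported: (batch, na*no, ny, nx)
# na = 3 anchors, no = 85 = 80 classes + 5 (x, y, w, h, objectness)
bs, na, no, ny, nx = 1, 3, 85, 20, 20
raw = np.random.rand(bs, na * no, ny, nx).astype(np.float32)

# The same reshape the removed yolo.py line did, now done on the host:
# (bs, na*no, ny, nx) -> (bs, na, no, ny, nx) -> (bs, na, ny, nx, no)
decoded = raw.reshape(bs, na, no, ny, nx).transpose(0, 1, 3, 4, 2)

print(decoded.shape)  # (1, 3, 20, 20, 85)
```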
Then run export.py to convert the model to ONNX.
$ python export.py
Enter aml_npu_sdk/acuity-toolkit/python and run the following command.
$ ./convert --model-name yolov7_tiny \
--platform onnx \
--model yolov7_tiny.onnx \
--mean-values '0 0 0 0.00392156' \
--quantized-dtype asymmetric_affine \
--source-files ./data/dataset/dataset0.txt \
--batch-size 1 \
--iterations 1 \
--kboard VIM3 --print-level 0
If you want to use more quantization images, modify batch-size and iterations: batch-size × iterations = number of quantization images.
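The file passed via --source-files is a plain-text list of image paths, one per line (dataset0.txt above is the SDK's example). A sketch, using hypothetical image names, that builds such a list and checks the batch-size × iterations relationship against it:

```python
import os
import tempfile

# Build a dataset list file: one image path per line (placeholder files
# here stand in for real quantization images).
tmpdir = tempfile.mkdtemp()
images = [os.path.join(tmpdir, f"img{i}.jpg") for i in range(4)]
for path in images:
    open(path, "wb").close()

list_file = os.path.join(tmpdir, "dataset0.txt")
with open(list_file, "w") as f:
    f.write("\n".join(images) + "\n")

# batch-size x iterations must equal the number of listed images:
batch_size, iterations = 1, 4
with open(list_file) as f:
    n_images = sum(1 for line in f if line.strip())
print(batch_size * iterations == n_images)  # True
```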
If you use a VIM3L, replace VIM3 with VIM3L.
If the conversion succeeds, the converted model and library are generated in outputs/yolov7_tiny.
Download the KSNN library and demo code from khadas/ksnn.
$ git clone --recursive https://github.com/khadas/ksnn.git
$ cd ksnn/ksnn
$ pip3 install ksnn-1.3-py3-none-any.whl
If your kernel version is 5.15, use ksnn-1.4-py3-none-any.whl instead of ksnn-1.3-py3-none-any.whl.
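A small helper (an illustration, not part of KSNN) that picks the wheel name from the running kernel release string:

```python
import platform

def pick_ksnn_wheel(kernel_release=None):
    """Return the KSNN wheel name for the given kernel release string."""
    release = kernel_release or platform.release()  # e.g. '5.15.78'
    major, minor = (int(x) for x in release.split(".")[:2])
    if (major, minor) >= (5, 15):
        return "ksnn-1.4-py3-none-any.whl"
    return "ksnn-1.3-py3-none-any.whl"

print(pick_ksnn_wheel("5.15.78"))   # ksnn-1.4-py3-none-any.whl
print(pick_ksnn_wheel("4.9.241"))   # ksnn-1.3-py3-none-any.whl
```

You can check your kernel version with `uname -r`.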
$ pip3 install matplotlib
Put yolov7_tiny.nb into ksnn/examples/yolov7_tiny/models/VIM3 and libnn_yolov7_tiny.so into ksnn/examples/yolov7_tiny/libs.
If your model does not have 80 classes, remember to modify the LISTSIZE parameter:
LISTSIZE = number of classes + 5
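LISTSIZE is the per-anchor vector length: x, y, w, h, objectness, plus one score per class. A quick sketch of the relationship, using a hypothetical 2-class custom model alongside the default COCO model:

```python
def listsize(num_classes):
    """Per-anchor vector length: x, y, w, h, objectness + class scores."""
    return num_classes + 5

print(listsize(80))  # 85  (default 80-class COCO model)
print(listsize(2))   # 7   (hypothetical 2-class custom model)
```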
$ cd ksnn/examples/yolov7_tiny
$ python3 yolov7_tiny-picture.py --model ./models/VIM3/yolov7_tiny.nb --library ./libs/libnn_yolov7_tiny.so --picture ./data/horses.jpg --level 0
$ cd ksnn/examples/yolov7_tiny
$ python3 yolov7_tiny-cap.py --model ./models/VIM3/yolov7_tiny.nb --library ./libs/libnn_yolov7_tiny.so --device 0
0 is the camera device index.