
YOLOv7-tiny Edge2 Demo - 1

Train Model

Download the official YOLOv7 code from WongKinYiu/yolov7.

$ git clone https://github.com/WongKinYiu/yolov7.git

Refer to the repository's README.md to create and train a YOLOv7-tiny model.
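
For reference, a typical training command for the tiny model looks like the sketch below. The configuration and hyper-parameter file names follow the upstream repository and are assumptions about your checkout, so adjust them to your own dataset:

$ cd yolov7
$ python train.py --workers 8 --device 0 --batch-size 32 \
      --data data/coco.yaml --img 640 640 \
      --cfg cfg/training/yolov7-tiny.yaml --weights '' \
      --name yolov7-tiny --hyp data/hyp.scratch.tiny.yaml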

Convert Model

Build virtual environment

The SDK only supports Python 3.6 or Python 3.8. Here is an example of creating a virtual environment for Python 3.8.

Install the Python packages.

$ sudo apt update
$ sudo apt install python3-dev python3-numpy

Follow the official conda documentation to install conda.
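
If conda is not installed yet, a typical Miniconda installation looks like the following (shown only as an example; follow the prompts of the official installer):

$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
$ bash Miniconda3-latest-Linux-x86_64.sh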

Then create a virtual environment.

$ conda create -n npu-env python=3.8
$ conda activate npu-env     #activate
$ conda deactivate           #deactivate

Get conversion tool

Download the conversion tool from rockchip-linux/rknn-toolkit2.

$ git clone https://github.com/rockchip-linux/rknn-toolkit2.git
$ cd rknn-toolkit2
$ git checkout 9ad79343fae625f4910242e370035fcbc40cc31a

Install the dependencies and the RKNN Toolkit2 package.

$ sudo apt-get install python3 python3-dev python3-pip
$ sudo apt-get install libxslt1-dev zlib1g-dev libglib2.0 libsm6 libgl1-mesa-glx libprotobuf-dev gcc cmake
$ pip3 install -r doc/requirements_cp38-*.txt
$ pip3 install packages/rknn_toolkit2-*-cp38-cp38-linux_x86_64.whl
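
As a quick sanity check (not part of the official steps), confirm the toolkit can be imported in the active environment:

$ python3 -c "from rknn.api import RKNN; print('rknn-toolkit2 is installed')"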

Convert

After training the model, run export.py to convert the model from PyTorch (.pt) format to ONNX.
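
For example, a minimal export command might look like this; the flag names are assumptions based on the upstream export.py, so run python export.py --help to confirm the options in your checkout:

$ cd yolov7
$ python export.py --weights yolov7-tiny.pt --img-size 640 640 --simplify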

Copy the exported yolov7_tiny.onnx into rknn-toolkit2/examples/onnx/yolov5 (test.py loads it from the current directory), enter that directory, and modify test.py as follows.

test.py
from rknn.api import RKNN
 
# Create RKNN object
rknn = RKNN(verbose=True)
 
# pre-process config
print('--> Config model')
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
print('done')
 
# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model='./yolov7_tiny.onnx')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')
 
# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
if ret != 0:
    print('Build model failed!')
    exit(ret)
print('done')
 
# Export RKNN model
print('--> Export rknn model')
ret = rknn.export_rknn('./yolov7_tiny.rknn')
if ret != 0:
    print('Export rknn model failed!')
    exit(ret)
print('done')
 
# Release toolkit resources
rknn.release()
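
The dataset argument passed to rknn.build() points to a plain-text file listing the quantization calibration images, one path per line. The yolov5 example directory already ships with a dataset.txt referencing its sample image; to calibrate on your own images instead, a file along these lines works (the paths below are placeholders):

dataset.txt
./calib_images/img_0001.jpg
./calib_images/img_0002.jpg
./calib_images/img_0003.jpg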

Run test.py to generate the RKNN model (yolov7_tiny.rknn).

$ python3 test.py

Run NPU

Get source code

Clone the source code from khadas/edge2-npu.

$ git clone https://github.com/khadas/edge2-npu

Install dependencies

$ sudo apt update
$ sudo apt install cmake libopencv-dev

Compile and run

Picture input demo

Put yolov7_tiny.rknn into edge2-npu/C++/yolov7_tiny/data/model.

# Compile
$ cd edge2-npu/C++/yolov7_tiny
$ bash build.sh
 
# Run
$ cd install/yolov7_tiny
$ ./yolov7_tiny data/model/yolov7_tiny.rknn data/img/bus.jpg

Camera input demo

Put yolov7_tiny.rknn into edge2-npu/C++/yolov7_tiny_cap/data/model.

# Compile
$ cd edge2-npu/C++/yolov7_tiny_cap
$ bash build.sh
 
# Run
$ cd install/yolov7_tiny_cap
$ ./yolov7_tiny_cap data/model/yolov7_tiny.rknn 33

Here, 33 is the camera device index; replace it with the index of your own camera.
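
If you are not sure which index your camera uses, you can list the available video devices (this assumes the v4l-utils package, which is not part of the steps above):

$ sudo apt install v4l-utils
$ v4l2-ctl --list-devices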

If your YOLOv7-tiny model's classes are not the same as COCO's, change data/coco_80_labels_list.txt and OBJ_CLASS_NUM in include/postprocess.h accordingly. For example, a 2-class model needs a labels file with two lines and OBJ_CLASS_NUM set to 2.
