This page provides basic information and examples on how to use the Amlogic NPU SDK for the VIM3.
We provide a Docker image that contains the environment required to convert models.
Follow the official Docker documentation to install Docker: Install Docker Engine on Ubuntu.
Run the command below to pull the Docker image:
docker pull numbqq/npu-vim3
Get source: khadas/aml_npu_sdk
$ mkdir workspace && cd workspace
$ git clone --recursive https://github.com/khadas/aml_npu_sdk
Enter the SDK directory aml_npu_sdk:
$ cd aml_npu_sdk
$ ls
acuity-toolkit  android_sdk  convert-in-docker.sh  Dockerfile  docs  LICENSE  linux_sdk  README.md
The SDK contains the Android SDK, conversion and compilation tools, and manuals.
acuity-toolkit  # Conversion tool, used to convert AI models
android_sdk     # Android SDK
docs            # Collection of conversion-related documents
Since all Linux code can now be compiled locally on the device, host-side compilation is no longer supported, and the contents of linux_sdk have been removed.
The acuity-toolkit directory contains the conversion tool:
$ ls acuity-toolkit
bin demo python ReadMe.txt requirements.txt
The demo directory is where model conversion is done:
bin                # Collection of tools used during conversion; most are not open source
demo               # Conversion script directory, where AI models are converted
demo_hybird        # Hybrid-input conversion tool
mulity_input_demo  # Multiple-input demo
python             # Python API for converting models and data
ReadMe.txt         # Explains how to convert and use the tools
requirements.txt   # Dependencies of the conversion tool
Convert the demo model in Docker:
./convert-in-docker.sh
The convert-in-docker.sh script enters the Docker container and then executes the conversion scripts, which are in the acuity-toolkit/demo directory:
$ ls acuity-toolkit/demo/*.sh -1
acuity-toolkit/demo/0_import_model.sh
acuity-toolkit/demo/1_quantize_model.sh
acuity-toolkit/demo/2_export_case_code.sh
acuity-toolkit/demo/inference.sh
0_import_model.sh - Import model script. It supports loading TensorFlow, Caffe, TensorFlow Lite, ONNX, Keras, PyTorch, and Darknet models.
1_quantize_model.sh - Quantize model script. It can quantize a model to int8, int16, or uint8.
2_export_case_code.sh - Export model script. If you use the VIM3, set optimize to VIPNANOQI_PID0X88; if you use the VIM3L, set optimize to VIPNANOQI_PID0X99.
To convert your own model, modify the conversion scripts 0_import_model.sh, 1_quantize_model.sh, and 2_export_case_code.sh, then execute ./convert-in-docker.sh to convert your model.
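The typical edit points in the three scripts can be sketched as below. Note that the variable names and the model path here are illustrative guesses, not the scripts' actual contents; check each script before editing:

```shell
# Hypothetical edit points -- names and paths are illustrative only.

# 0_import_model.sh: point the importer at your own model file
MODEL_FILE=./model/my_model.tflite      # hypothetical path to your model

# 1_quantize_model.sh: choose a quantization dtype (int8, int16, or uint8 per the docs)
QUANT_DTYPE=uint8

# 2_export_case_code.sh: match the optimize value to your board
OPTIMIZE=VIPNANOQI_PID0X88              # VIM3; use VIPNANOQI_PID0X99 for VIM3L

echo "model=${MODEL_FILE} dtype=${QUANT_DTYPE} optimize=${OPTIMIZE}"
```

The input shape, mean/scale preprocessing values, and quantization dataset also usually need to match your model; see the conversion manuals in the docs directory for the full parameter list.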
After the conversion is completed, you can find the generated code in the xxxx_nbg_unify directory; the converted model is xxxx.nb. Here is the built-in model as an example:
$ ls acuity-toolkit/demo/mobilenet_tf_nbg_unify
BUILD
makefile.linux
mobilenettf.vcxproj
main.c
mobilenet_tf.nb
nbg_meta.json
vnn_global.h
vnn_mobilenettf.h
vnn_post_process.h
vnn_pre_process.h
vnn_mobilenettf.c
vnn_post_process.c
vnn_pre_process.c
If your model's input is not a three-channel image, convert the input data to npy format.
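For example, input data can be packed into an .npy file with NumPy. The file name, tensor shape, and dtype below are made up for illustration; they must match your model's actual input:

```shell
# Sketch: save arbitrary input data as .npy using NumPy (names are illustrative).
python3 - <<'EOF'
import numpy as np

# Stand-in for your real input tensor; shape and dtype must match the model input.
data = np.arange(10, dtype=np.float32)
np.save("input_0.npy", data)            # writes input_0.npy
print(np.load("input_0.npy").dtype)     # float32
EOF
```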
For the conversion parameters and settings, please refer to: