~~tag>VIM3 VIM3L Amlogic NPU SDK tensorflow pytorch~~
====== VIM3 NPU SDK Usage ======
This page provides basic information and examples on how to use the Amlogic NPU SDK for VIM3.
===== Build Docker Environment =====
We provide a Docker image which contains the environment required to convert models.
Follow Docker official docs to install Docker: [[https://docs.docker.com/engine/install/ubuntu/|Install Docker Engine on Ubuntu]].
Run the command below to pull the Docker image:
```shell
docker pull numbqq/npu-vim3
```
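You can check that the image is available locally:
```shell
docker images numbqq/npu-vim3
```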
===== Get NPU SDK =====
Get source: [[gh>khadas/aml_npu_sdk]]
```shell
mkdir workspace && cd workspace
git clone --recursive https://github.com/khadas/aml_npu_sdk
```
===== SDK Structure =====
Enter the SDK directory ''aml_npu_sdk'':
```shell
$ cd aml_npu_sdk
$ ls
acuity-toolkit android_sdk convert-in-docker.sh Dockerfile docs LICENSE linux_sdk README.md
```
The SDK contains the Android SDK, conversion and compilation tools, and manuals.
```
acuity-toolkit # Conversion tool, used to convert AI models
android_sdk # Android SDK
docs             # Conversion-related documentation
```
Since all Linux code can now be compiled locally on the device, host-side compilation is no longer supported and the contents of ''linux_sdk'' have been removed.
===== Conversion Tool =====
The ''acuity-toolkit'' directory contains the conversion tool:
```shell
$ ls acuity-toolkit
bin demo python ReadMe.txt requirements.txt
```
The ''demo'' directory is where model conversion is performed:
```
bin                 # Collection of tools used during conversion; most are not open source
demo                # Conversion script directory, where AI models are converted
demo_hybird         # Hybrid-input conversion demo
mulity_input_demo   # Multiple-input conversion demo
python              # Python API for converting models and data
ReadMe.txt          # Explains how to convert and use the tools
requirements.txt    # Python dependencies of the conversion tool
```
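If you prefer a native environment instead of the Docker image, the Python dependencies listed in ''requirements.txt'' can be installed with pip. The required Python version may differ between SDK releases, so the Docker image remains the recommended route.
```shell
pip3 install -r acuity-toolkit/requirements.txt
```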
===== Convert Model =====
Convert the demo model in Docker:
```shell
./convert-in-docker.sh
```
The script ''convert-in-docker.sh'' enters a Docker container and then executes the conversion scripts below:
* acuity-toolkit/demo/0_import_model.sh
* acuity-toolkit/demo/1_quantize_model.sh
* acuity-toolkit/demo/2_export_case_code.sh
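Roughly speaking, ''convert-in-docker.sh'' mounts the SDK into the container and runs the three scripts in order. The sketch below is only an approximation; the container path and exact invocation are assumptions, so check the script itself for the real details.
```shell
# Approximate equivalent of convert-in-docker.sh -- the container path /workspace
# is an assumption; read the script for the real mount point and options.
docker run -it --rm -v "$(pwd)":/workspace numbqq/npu-vim3 bash -c \
    "cd /workspace/acuity-toolkit/demo && \
     ./0_import_model.sh && ./1_quantize_model.sh && ./2_export_case_code.sh"
```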
==== Conversion Scripts ====
The conversion scripts are located in the ''acuity-toolkit/demo'' directory:
```shell
$ ls acuity-toolkit/demo/*.sh -1
acuity-toolkit/demo/0_import_model.sh
acuity-toolkit/demo/1_quantize_model.sh
acuity-toolkit/demo/2_export_case_code.sh
acuity-toolkit/demo/inference.sh
```
* ''0_import_model.sh'' - Import model script. It currently supports loading TensorFlow, Caffe, TensorFlow Lite, ONNX, Keras, PyTorch, and Darknet models.
* ''1_quantize_model.sh'' - Quantize model script. It can quantize the model to ''int8'', ''int16'', or ''uint8''.
* ''2_export_case_code.sh'' - Export model script. If you use ''VIM3'', set ''optimize'' to ''VIPNANOQI_PID0X88''. If you use ''VIM3L'', set ''optimize'' to ''VIPNANOQI_PID0X99''.
If you want to convert your own model, modify the conversion scripts ''0_import_model.sh'', ''1_quantize_model.sh'', and ''2_export_case_code.sh'', then execute ''./convert-in-docker.sh'' to convert your model.
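As a rough guide, the usual edits are the model path and input geometry in ''0_import_model.sh'', the quantization type in ''1_quantize_model.sh'', and the ''optimize'' target in ''2_export_case_code.sh''. The variable names below are only illustrative and vary between SDK versions, so check the real scripts before editing.
```shell
# Illustrative only -- variable names and flags differ between SDK releases.

# 0_import_model.sh: point the importer at your own model file
NAME=my_model                      # base name used for the generated files

# 1_quantize_model.sh: choose the quantization type (uint8 / int8 / int16)
QUANTIZED=asymmetric_affine-u8     # uint8; dynamic_fixed_point-i8/-i16 for int8/int16

# 2_export_case_code.sh: select the target SoC
OPTIMIZE=VIPNANOQI_PID0X88         # VIM3; use VIPNANOQI_PID0X99 for VIM3L
```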
After the conversion completes, you can find the generated code in the ''xxxx_nbg_unify'' directory; the converted model is ''xxxx.nb''. The built-in model is used as an example here.
```shell
$ ls acuity-toolkit/demo/mobilenet_tf_nbg_unify
BUILD
makefile.linux
mobilenettf.vcxproj
main.c
mobilenet_tf.nb
nbg_meta.json
vnn_global.h
vnn_mobilenettf.h
vnn_post_process.h
vnn_pre_process.h
vnn_mobilenettf.c
vnn_post_process.c
vnn_pre_process.c
```
If your model's input is not a three-channel image, please convert the input data to ''npy'' format.
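For example, a raw tensor can be wrapped into an ''npy'' file with NumPy; the file name, dtype, and shape below are placeholders, so replace them with what your model expects.
```shell
# Hypothetical example (requires numpy): convert a raw float32 tensor to .npy.
python3 -c "import numpy as np; np.save('input.npy', np.fromfile('input.bin', dtype=np.float32).reshape(1, 128))"
```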
==== Conversion Parameters ====
For the conversion parameters and settings, please refer to:
* [[gh>khadas/aml_npu_sdk/tree/master/docs/en]]
* [[gh>khadas/aml_npu_sdk/blob/master/docs/en/Model Transcoding and Running User Guide (1.0).pdf|Model Transcoding and Running User Guide]]