~~tag>VIM3 VIM3L Amlogic NPU SDK tensorflow pytorch~~
  
====== VIM3 NPU SDK Usage ======
  
Basic information and examples about how to use the Amlogic NPU SDK for VIM3.
  
===== Build Docker Environment =====

We provide a Docker image which contains the environment required to convert models.

Follow the official Docker documentation to install Docker: [[https://docs.docker.com/engine/install/ubuntu/|Install Docker Engine on Ubuntu]].

Then pull the Docker image:

```shell
docker pull numbqq/npu-vim3
```
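
Optionally, you can check that the image is now available locally. This is just a quick sanity check, not part of the official steps:

```shell
# List the pulled image to confirm it exists locally
docker images numbqq/npu-vim3
```
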
===== Get NPU SDK =====
  
Get source: [[gh>khadas/aml_npu_sdk]]

```shell
mkdir workspace && cd workspace
git clone --recursive https://github.com/khadas/aml_npu_sdk
```

===== SDK Structure =====
  
Enter the SDK directory ''aml_npu_sdk'':
  
```shell
$ cd aml_npu_sdk
$ ls
acuity-toolkit  android_sdk  convert-in-docker.sh  Dockerfile  docs  LICENSE  linux_sdk  README.md
```
  
  
```
acuity-toolkit    # Conversion tool, used to convert AI models
android_sdk       # Android SDK
docs              # Conversion-related documents collection
```
  
<WRAP info >
Since all Linux code can now be compiled locally on the device, host compilation is no longer supported. Therefore, the contents of ''linux_sdk'' have been completely removed.
</WRAP>
  
The conversion tool is in the ''acuity-toolkit'' directory:

```shell
ls acuity-toolkit
bin  demo  python  ReadMe.txt  requirements.txt
```
  
===== Convert Model =====
  
Convert the demo model in Docker:
  
```shell
./convert-in-docker.sh
```
  
The script ''convert-in-docker.sh'' will enter the Docker container and then execute the conversion scripts below:
  * ''acuity-toolkit/demo/0_import_model.sh''
  * ''acuity-toolkit/demo/1_quantize_model.sh''
  * ''acuity-toolkit/demo/2_export_case_code.sh''
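
For illustration only, a wrapper of this kind might look like the sketch below. The mount point ''/workspace'', the ''docker run'' flags and the ''bash -c'' invocation are assumptions, not the contents of the real ''convert-in-docker.sh'':

```shell
#!/bin/bash
# Hypothetical sketch of a convert-in-docker.sh style wrapper.
# The mount point /workspace and the exact docker run flags are assumptions.
set -e

docker run --rm \
    -v "$(pwd)":/workspace \
    -w /workspace/acuity-toolkit/demo \
    numbqq/npu-vim3 \
    bash -c "bash 0_import_model.sh && bash 1_quantize_model.sh && bash 2_export_case_code.sh"
```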
  
==== Conversion Scripts ====
  
The conversion scripts are in the ''acuity-toolkit/demo'' directory:
  
```shell
ls -1 acuity-toolkit/demo/*.sh
acuity-toolkit/demo/0_import_model.sh
acuity-toolkit/demo/1_quantize_model.sh
acuity-toolkit/demo/2_export_case_code.sh
acuity-toolkit/demo/inference.sh
```
  
  * ''0_import_model.sh'' - Import model script. It supports loading TensorFlow, Caffe, TensorFlow Lite, ONNX, Keras, PyTorch, and Darknet models.
  * ''1_quantize_model.sh'' - Quantize model script. It can quantize models to ''int8'', ''int16'' and ''uint8''.
  * ''2_export_case_code.sh'' - Export model script. If you use ''VIM3'', set ''optimize'' to ''VIPNANOQI_PID0X88''. If you use ''VIM3L'', set ''optimize'' to ''VIPNANOQI_PID0X99'', as shown in the snippet after this list.
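
As a reference only, switching the target device comes down to changing the ''optimize'' value inside ''2_export_case_code.sh''; the exact layout of the real script may differ from this illustrative excerpt:

```shell
# Illustrative excerpt only - the surrounding layout of the real
# 2_export_case_code.sh may differ.
optimize="VIPNANOQI_PID0X88"    # VIM3
#optimize="VIPNANOQI_PID0X99"   # VIM3L: use this value instead
```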
  
<WRAP tip >
If you want to convert your own model, just modify the conversion scripts ''0_import_model.sh'', ''1_quantize_model.sh'' and ''2_export_case_code.sh'', then execute ''./convert-in-docker.sh'' to convert your model.
</WRAP>
  
  
After the conversion is completed, you can find the generated code in the ''xxxx_nbg_unify'' directory. The converted model is ''xxxx.nb''. Here is the built-in model as an example.
  
```shell
ls acuity-toolkit/demo/mobilenet_tf_nbg_unify
BUILD
makefile.linux
...
```

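The generated case code can typically be built on the device itself with the provided makefile. The invocation below is an assumption for illustration, not a documented step:

```shell
# Hypothetical: build the generated case code on the VIM3 board
cd acuity-toolkit/demo/mobilenet_tf_nbg_unify
make -f makefile.linux
```
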
For the conversion parameters and settings, please refer to:
  * [[gh>khadas/aml_npu_sdk/tree/master/docs/en]]
  * [[gh>khadas/aml_npu_sdk/blob/master/docs/en/Model Transcoding and Running User Guide (1.0).pdf|Model Transcoding and Running User Guide]]
  