~~tag>VIM3 VIM3L Amlogic NPU SDK tensorflow pytorch~~

====== NPU SDK Usage ======

Basic information and examples about how to use the Amlogic NPU SDK on VIM3.

/*===== Introduction =====

This document is an introduction to the structure of the NPU SDK.
*/

===== Get SDK =====

Get source: [[gh>khadas/aml_npu_sdk]]
  
```shell
$ mkdir workspace && cd workspace
$ git clone --recursive https://github.com/khadas/aml_npu_sdk
```
  
===== SDK Structure =====

Enter the SDK directory ''aml_npu_sdk'',
  
```shell
$ cd aml_npu_sdk
$ ls
acuity-toolkit  android_sdk  Dockerfile  docs  LICENSE  linux_sdk  README.md
```
  
The SDK contains the Android SDK, conversion and compilation tools, and manuals.
  
```
acuity-toolkit    # Conversion tool, used to convert AI models
android_sdk       # Android SDK
docs              # Conversion-related documents collection
```
  
<WRAP info >
Since all Linux code can now be compiled locally on the device, host compilation is no longer supported. Therefore, the contents of ''linux_sdk'' have been completely removed.
</WRAP>
  
===== Conversion Tool =====

The ''acuity-toolkit'' directory contains the conversion tool,
  
```shell
$ cd aml_npu_sdk/acuity-toolkit
$ ls
bin  demo  python  ReadMe.txt  requirements.txt
```
  
The ''demo'' directory is where model conversion is done,
  
```
bin                   # Collection of various tools used during conversion, most of which are not open source
demo                  # Conversion script directory, where AI models are converted
demo_hybird           # Mixed-input conversion tool
mulity_input_demo     # Multiple-input demo
python                # Python API used to convert models and data
ReadMe.txt            # Explains how to convert models and use the tools
requirements.txt      # Environment dependencies for the conversion tool
```
  
==== Dependencies Installation ====

The package dependencies required by the conversion tool can be installed directly on the PC or through a [[https://docs.python.org/3/library/venv.html | virtual environment]].
  
```shell
$ cd aml_npu_sdk/acuity-toolkit
$ cat requirements.txt
tensorflow==2.0.0
...
```
  
Here, ''tensorflow==2.0.0'' can be replaced with ''tensorflow==2.0.0a0''.
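For example, a minimal sketch of installing the dependencies inside a virtual environment (the environment name ''npu-venv'' is only an illustration):

```shell
$ cd aml_npu_sdk/acuity-toolkit
# Create and activate an isolated Python environment (directory name is arbitrary)
$ python3 -m venv npu-venv
$ source npu-venv/bin/activate
# Install the conversion tool dependencies
$ pip install -r requirements.txt
```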
  
==== Conversion Scripts ====

The conversion scripts are in the ''acuity-toolkit/demo'' directory,
  
```shell
$ cd aml_npu_sdk/acuity-toolkit/demo
$ ls -1
data
model
0_import_model.sh
1_quantize_model.sh
2_export_case_code.sh
extractoutput.py
inference.sh
```
  
Use these scripts to convert neural network models into a format compatible with the VIM3 NPU.
  
```shell
$ cd aml_npu_sdk/acuity-toolkit/demo
$ bash 0_import_model.sh && bash 1_quantize_model.sh && bash 2_export_case_code.sh
```
  
  * ''0_import_model.sh'' - Import model script. It supports loading TensorFlow, Caffe, TensorFlow Lite, ONNX, Keras, PyTorch, and Darknet models.
  * ''1_quantize_model.sh'' - Quantize model script. It can quantize the model to ''int8'', ''int16'', or ''uint8''.
  * ''2_export_case_code.sh'' - Export model script. If you use ''VIM3'', set ''optimize'' to ''VIPNANOQI_PID0X88''. If you use ''VIM3L'', set ''optimize'' to ''VIPNANOQI_PID0X99''. A quick way to verify this setting is shown below.
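To confirm which target the export script is currently configured for, one simple check is to search the script for the ''optimize'' keyword (assuming it appears literally, which may vary between SDK versions):

```shell
$ cd aml_npu_sdk/acuity-toolkit/demo
# Expect VIPNANOQI_PID0X88 for VIM3, or VIPNANOQI_PID0X99 for VIM3L
$ grep -in "optimize" 2_export_case_code.sh
```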
After the conversion is completed, you can see the converted code in the ''xxxx_nbg_unify'' directory. The converted model is ''xxxx.nb''. The built-in model is used as an example below.
  
```shell
$ cd aml_npu_sdk/acuity-toolkit/demo/mobilenet_tf_nbg_unify
$ ls -1
BUILD
makefile.linux
mobilenettf.vcxproj
main.c
mobilenet_tf.nb
nbg_meta.json
vnn_global.h
vnn_mobilenettf.h
vnn_post_process.h
vnn_pre_process.h
vnn_mobilenettf.c
vnn_post_process.c
vnn_pre_process.c
```
  
<WRAP important >
If your model's input is not a three-channel image, please convert the input data to ''npy'' format.
</WRAP>

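A minimal sketch of saving input data in ''npy'' format with NumPy (the file name, shape, and random values are placeholders; substitute your model's real pre-processed input):

```shell
$ python3 << 'EOF'
import numpy as np

# Placeholder single-channel input tensor; replace with real pre-processed data
data = np.random.rand(1, 224, 224, 1).astype(np.float32)

# Save in .npy format for use as conversion input
np.save("input_0.npy", data)

# Sanity check: reload and print shape and dtype
loaded = np.load("input_0.npy")
print(loaded.shape, loaded.dtype)
EOF
```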
==== Conversion Parameters ====

For the conversion parameters and settings, please refer to:
  * [[gh>khadas/aml_npu_sdk/tree/master/docs/en]]
  * [[gh>khadas/aml_npu_sdk/blob/master/docs/en/Model Transcoding and Running User Guide (1.0).pdf|Model Transcoding and Running User Guide]]
  