~~tag>NPU VGG16 VIM4 Tensorflow Keras ~~
**Doc for version ddk-3.4.7.7**
====== VGG16 TensorFlow Keras VIM4 Demo 4 ======
{{indexmenu_n>4}}
[[https://www.google.com/search?q=VGG16|VGG16]] is a convolutional neural network architecture used for image recognition. It has 16 weight layers and is regarded as one of the classic vision model architectures.
===== Get Source Code =====
[[gh>Daipuwei/Mini-VGG-CIFAR10]]
```shell
$ git clone https://github.com/Daipuwei/Mini-VGG-CIFAR10
```
===== Convert Model =====
==== Build virtual environment ====
Follow Docker official documentation to install Docker: [[https://docs.docker.com/engine/install/ubuntu/|Install Docker Engine on Ubuntu]].
Run the command below to pull the Docker image:
```shell
$ docker pull numbqq/npu-vim4
```
==== Get convert tool ====
Download the conversion tool from [[gl>khadas/vim4_npu_sdk]].
```shell
$ git lfs install
$ git lfs clone https://gitlab.com/khadas/vim4_npu_sdk.git
$ cd vim4_npu_sdk
$ ls
adla-toolkit-binary adla-toolkit-binary-1.2.0.9 convert-in-docker.sh Dockerfile docs README.md
```
* ''adla-toolkit-binary/docs'' - SDK documentations
* ''adla-toolkit-binary/bin'' - SDK tools required for model conversion
* ''adla-toolkit-binary/demo'' - Conversion examples
If your kernel is older than 241129, please use the version before tag ddk-3.4.7.7.
==== Convert ====
We first need to convert the Keras model (''.h5'') into a TensorFlow model (''.pb''). We use [[gh>amir-abdi/keras_to_tensorflow]] for this conversion.
```shell
$ git clone https://github.com/amir-abdi/keras_to_tensorflow
```
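According to the ''keras_to_tensorflow'' README, the script takes ''--input_model'' and ''--output_model'' arguments. The sketch below only prints the conversion command so you can check it first; the paths are assumptions and must be adjusted to where your ''.h5'' checkpoint actually lives.

```shell
# Hedged sketch: the paths below are assumptions -- point them at your
# own checkout and checkpoint before running the printed command.
cmd="python3 keras_to_tensorflow/keras_to_tensorflow.py \
  --input_model model_source/vgg16/vgg16.h5 \
  --output_model model_source/vgg16/vgg16.pb"
echo "$cmd"   # inspect the command; run it once the paths exist
```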
Then we need to convert the TensorFlow model to an ADLA model (''.adla'').
Enter ''vim4_npu_sdk/demo'' and overwrite ''convert_adla.sh'' as follows.
```sh convert_adla.sh
#!/bin/bash
ACUITY_PATH=../bin/
#ACUITY_PATH=../python/tvm/
adla_convert=${ACUITY_PATH}adla_convert
if [ ! -e "$adla_convert" ]; then
    adla_convert=${ACUITY_PATH}adla_convert.py
fi
$adla_convert --model-type tensorflow \
--model ./model_source/vgg16/vgg16.pb \
--inputs image_input --input-shapes 32,32,3 \
--outputs dense_2/Softmax \
--inference-input-type float32 \
--inference-output-type float32 \
--quantize-dtype int8 --outdir tensorflow_output \
--channel-mean-value "0,0,0,255" \
--source-file vgg16_dataset.txt \
--iterations 500 \
--batch-size 1 \
--target-platform PRODUCT_PID0XA003
```
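The ''--source-file'' option points at a plain-text list of calibration images, one path per line, which the converter samples for int8 quantization. A minimal sketch of generating such a list, assuming a folder of calibration images (the ''calib_images'' directory name and the placeholder files are illustrative only):

```shell
# Hedged sketch: build the calibration list consumed via --source-file.
mkdir -p calib_images
# Placeholder files for demonstration; use your real calibration images.
touch calib_images/cat.jpg calib_images/dog.jpg calib_images/plane.jpg
# One image path per line.
find "$(pwd)/calib_images" -name '*.jpg' | sort > vgg16_dataset.txt
cat vgg16_dataset.txt
```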
Run ''convert_adla.sh'' to generate the VIM4 model. The converted model is ''xxx.adla'' in ''tensorflow_output''.
```shell
$ bash convert_adla.sh
```
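A note on ''--channel-mean-value "0,0,0,255"'': the first three values are per-channel means and the fourth is a scale, commonly interpreted by these toolkits as ''(pixel - mean) / scale''. With means 0,0,0 and scale 255, the 0–255 pixel range maps to 0–1, matching a model trained on inputs rescaled by 1/255. A quick sanity check of the arithmetic:

```shell
# (value - mean) / scale with mean=0, scale=255
awk 'BEGIN { printf "%.6f\n", (255 - 0) / 255 }'   # full-scale pixel
awk 'BEGIN { printf "%.6f\n", (128 - 0) / 255 }'   # mid-range pixel
```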
===== Run NPU =====
==== Get source code ====
Clone the source code from our [[gh>khadas/vim4_npu_applications]].
```shell
$ git clone https://github.com/khadas/vim4_npu_applications
```
If your kernel is older than 241129, please use the version before tag ddk-3.4.7.7.
==== Install dependencies ====
```shell
$ sudo apt update
$ sudo apt install libopencv-dev python3-opencv cmake
```
==== Compile and run ====
=== Picture input demo ===
Put ''vgg16_int8.adla'' in ''vim4_npu_applications/vgg16/data/''.
```shell
# Compile
$ cd vim4_npu_applications/vgg16
$ mkdir build
$ cd build
$ cmake ..
$ make
# Run
$ ./vgg16 -m ../data/vgg16_int8.adla -p ../data/airplane.jpeg
```
{{:products:sbc:vim4:npu:demos:airplane.webp?400|}}
{{:products:sbc:vim4:npu:demos:vgg16-demo-output.webp?400|}}
If your **VGG16** model classes are not the same as **CIFAR10**, please change ''data/vgg16_class.txt'' and ''OBJ_CLASS_NUM'' in ''include/postprocess.h'' accordingly.
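The post-processing step amounts to taking the arg-max of the model's softmax output and looking up the label at that index in the class file. A hedged shell illustration of that pairing, using the standard CIFAR-10 labels and a made-up score vector (both files and scores below are hypothetical):

```shell
# CIFAR-10 labels in the assumed order of data/vgg16_class.txt.
printf '%s\n' airplane automobile bird cat deer dog frog horse ship truck > class.txt
# Hypothetical softmax scores, one per class, same order.
printf '%s\n' 0.91 0.01 0.02 0.01 0.00 0.01 0.01 0.01 0.01 0.01 > scores.txt
# Pair score with label, numeric-sort descending, keep the top-1 prediction.
paste scores.txt class.txt | sort -gr | head -n1
```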