~~tag>

**Doc for version ddk-3.4.7.7**

====== VGG16 TensorFlow Keras VIM4 Demo 4 ======

{{indexmenu_n>
| + | |||
| + | ===== Introduction ===== | ||
| + | |||
| + | VGG16 is a classification model. It can assign a single label to an entire image. | ||
| + | |||
| + | Image and inference results on VIM4. | ||
| + | |||
| + | {{: | ||
| + | |||
| + | {{: | ||
| [[https:// | [[https:// | ||
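The single-label idea above can be sketched with plain NumPy. The class names and raw scores below are invented for illustration; in the real demo the scores come from the NPU inference result.

```python
import numpy as np

# Hypothetical labels and raw scores standing in for the 1000-way
# ImageNet output of VGG16.
labels = ["cat", "dog", "goldfish"]
scores = np.array([2.0, 0.5, 0.1])

# Softmax turns raw scores into probabilities that sum to 1.
probs = np.exp(scores) / np.exp(scores).sum()

# "A single label for the entire image" means taking the class
# with the highest probability (top-1).
top1 = labels[int(np.argmax(probs))]
print(top1)  # cat
```
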
Follow Docker official documentation to install Docker: [[https://

Follow the command below to get the Docker image:

```shell
docker pull numbqq/npu-vim4
```
==== Get convert tool ====
Download Tool from [[gh>

```shell
$ git lfs install
$ git lfs clone https://github.com/
$ cd vim4_npu_sdk
$ ls
adla-toolkit-binary
```
| + | |||
| + | * '' | ||
| + | * '' | ||
| + | * '' | ||
| + | |||
| + | <WRAP important> | ||
| + | If your kernel is older than 241129, please use branch npu-ddk-1.7.5.5. | ||
| + | </ | ||
==== Convert ====
--outputs dense_2/
--quantize-dtype int8 --outdir tensorflow_output \
--channel-mean-value "
--inference-input-type float32 \
--inference-output-type float32 \
--source-file vgg16_dataset.txt \
--iterations 500 \
--target-platform PRODUCT_PID0XA003
```
| - | |||
| - | <WRAP important > | ||
| - | Please prepare about 500 pictures for quantification. If the pictures size is smaller than model input size, please resize pictures to input size before quantification. | ||
| - | </ | ||
Run ''
| ``` | ``` | ||
| - | <WRAP important > | + | <WRAP important> |
| - | If your kernel | + | If your kernel is older than 241129, please use version before |
| </ | </ | ||
# Run
$ ./vgg16 -m ../
```