~~tag> NPU Densenet VIM4 ONNX~~

**Doc for version ddk-3.4.7.7**

====== DenseNet CTC ONNX Keras VIM4 Demo - 3 ======
{{indexmenu_n>3}}

Follow the Docker official documentation to install Docker: [[https://docs.docker.com/engine/install/ubuntu/|Install Docker Engine on Ubuntu]].
Follow the command below to get the prebuilt NPU Docker image:
```shell
docker pull numbqq/npu-vim4
```
==== Get the conversion tool ====
Download the conversion tool from [[gh>khadas/vim4_npu_sdk]].
```shell
$ git clone https://github.com/khadas/vim4_npu_sdk
$ cd vim4_npu_sdk
$ git lfs pull
$ ls
adla-toolkit-binary
```

  * ''adla-toolkit-binary/docs'' - SDK documentation
  * ''adla-toolkit-binary/bin'' - SDK tools
  * ''adla-toolkit-binary/demo'' - Conversion examples

<WRAP important>
If your kernel is older than 241129, please use branch ''npu-ddk-1.7.5.5''.
</WRAP>

==== Convert ====
After training the model, run the following script to modify the network input and output and convert the model to ONNX.

```python
# ... (earlier lines of the conversion script are elided here)
onnx_model.graph.node[0].input[0] = "..."
onnx.save_model(onnx_model, "...")
```
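
Only the tail of that conversion script survives on this page. For orientation, here is a minimal sketch of the same idea, assuming the model was trained with Keras and exported through ''tf2onnx''; the file names, the ''input'' tensor name, and the opset are illustrative assumptions, not the exact values used by this demo.

```python
# Minimal sketch (not the exact demo script): export a trained Keras
# model to ONNX and rename the graph input, assuming tf2onnx is installed.
import onnx
import tensorflow as tf
import tf2onnx

# Illustrative checkpoint name; substitute your trained DenseNet-CTC model.
model = tf.keras.models.load_model("densenet_ctc.h5", compile=False)

# Export to ONNX (opset 13 is an assumed, common choice).
onnx_model, _ = tf2onnx.convert.from_keras(model, opset=13)

# Rename the graph input so downstream tools see a fixed name,
# mirroring the node-input renaming in the script above.
old_name = onnx_model.graph.input[0].name
onnx_model.graph.input[0].name = "input"
for node in onnx_model.graph.node:
    node.input[:] = ["input" if name == old_name else name for name in node.input]

onnx.save_model(onnx_model, "densenet_ctc.onnx")
```

The conversion command below (its leading arguments are elided here) then quantizes the ONNX model to int8 and emits an ''.adla'' file under ''onnx_output'':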
```shell
# ... (leading arguments of the convert command are elided here)
--input-shapes "..." \
--dtypes "..." \
--inference-input-type float32 \
--inference-output-type float32 \
--quantize-dtype int8 --outdir onnx_output \
--channel-mean-value "..." \
--iterations 500 \
--disable-per-channel False \
--batch-size 1 --target-platform PRODUCT_PID0XA003
```
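
For intuition, ''--channel-mean-value'' packs per-channel mean values plus one scale divisor that the toolkit applies to inputs during quantization. The sketch below shows that arithmetic under the ''(x - mean) / scale'' convention used by similar Khadas NPU tools; the concrete values are the ones set (but truncated) in the command above, and the example numbers are assumptions for illustration.

```python
# Sketch of the preprocessing implied by --channel-mean-value "m1,m2,m3,scale":
# each input channel c is normalized as (x - m_c) / scale. The convention
# and the example values here are assumptions for illustration only.
import numpy as np

def channel_mean_normalize(image, means, scale):
    """Subtract a per-channel mean, then divide by one global scale."""
    out = image.astype(np.float32)
    for c, mean in enumerate(means):
        out[..., c] = (out[..., c] - mean) / scale
    return out

# Example: map an 8-bit grayscale image into [0, 1] with mean 0, scale 255.
gray = np.random.randint(0, 256, size=(32, 280, 1), dtype=np.uint8)
normalized = channel_mean_normalize(gray, means=[0.0], scale=255.0)
print(normalized.min(), normalized.max())  # roughly 0.0 ... 1.0
```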
Run ''convert-in-docker.sh'' to convert the model.

<WRAP important>
If your kernel is older than 241129, please use a version before ddk-3.4.7.7.
</WRAP>

=== Picture input demo ===
Put ''densenet_ctc_int8.adla'' into ''vim4_npu_applications/densenet_ctc/data/''.
```shell
# Run
$ ./densenet_ctc
```
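
If your test picture does not match the input size the model was converted with, resize it first. A hypothetical OpenCV helper follows; the 280x32 grayscale shape is an assumption for illustration, so match it to your ''--input-shapes'' value.

```python
# Hypothetical helper: load a picture as grayscale and resize it to the
# model input size before feeding it to the demo. The target size is
# illustrative; use the shape your model was converted with.
import cv2

image = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)
resized = cv2.resize(image, (280, 32))  # cv2.resize takes (width, height)
cv2.imwrite("test_resized.jpg", resized)
```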
{{:...}}

{{:...}}

<WRAP tip>
If your ''...''
</WRAP>