====== NPU TFLite Delegate ======

Khadas VIM3/VIM3L boards are powered by a **VeriSilicon Vivante** NPU. We can use this computation power to run TFLite models directly through the TIM-VX libraries by leveraging a [[https://www.tensorflow.org/lite/performance/delegates | TFLite interpreter Delegate]]. The delegate is provided by VeriSilicon as [[gh>VeriSilicon/tflite-vx-delegate]], and the provided examples are built to make use of it.

This library depends on the Galcore and OpenVX drivers, so make sure you are using one of the following platforms to ensure the driver is present.

^ Board ^ Linux Kernel (BSP) ^ OS ^
| VIM3 | 4.9 \\ 5.15 | Ubuntu 22.04 \\ Ubuntu 20.04 |
| VIM3L | 4.9 \\ 5.15 | Ubuntu 22.04 \\ Ubuntu 20.04 |

===== Get the source code =====

Clone the examples from [[gh>sravansenthiln1/vx_tflite]]:

```shell
$ git clone https://github.com/sravansenthiln1/vx_tflite
$ cd vx_tflite
```

===== Set up the environment =====

==== Install pip ====

```shell
$ sudo apt-get install python3-pip
```

==== Install the necessary Python packages ====

```shell
$ pip3 install numpy pillow
```

==== Install the TFLite runtime interpreter ====

```shell
$ pip3 install --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime
```

==== Create the library symlink ====

```shell
$ sudo ln -s /usr/lib/libOpenVX.so /usr/lib/libOpenVX.so.1
```

==== Copy the necessary library files ====

If you are running on ''VIM3'':

```shell
$ sudo cp libs/VIM3/libtim-vx.so /usr/lib/aarch64-linux-gnu/
```

If you are running on ''VIM3L'':

```shell
$ sudo cp libs/VIM3L/libtim-vx.so /usr/lib/
```

===== Run the examples =====

Taking Mobilenet v1 as an example:

==== Enter the example directory ====

```shell
$ cd mobilenet_v1
```

==== Create the library symlink ====

```shell
$ sudo ln -s ../libs/libvx_delegate.so libvx_delegate.so
```

==== Run the example ====

```shell
$ python3 run_npu_inference.py
```

If you would like to analyze the performance of your model through these examples, follow the [[../npu/npu-performance]] guide.
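For reference, loading the VX delegate from Python follows the standard ''tflite_runtime'' API: pass the shared object to ''load_delegate()'' and hand it to the ''Interpreter''. The sketch below illustrates this pattern; the model filename, input size, and helper names are illustrative assumptions, not the actual contents of ''run_npu_inference.py''.

```python
# Sketch of running a TFLite model through the VX delegate.
# Assumptions: a 224x224 uint8 Mobilenet v1 model file and the
# libvx_delegate.so symlink created in the steps above.
import numpy as np

def preprocess(image, input_size=224):
    # Shape an image array into the (1, H, W, 3) uint8 NHWC tensor
    # that a quantized Mobilenet v1 input expects.
    arr = np.asarray(image, dtype=np.uint8)
    return arr.reshape(1, input_size, input_size, 3)

def run_inference(model_path="mobilenet_v1.tflite", image=None):
    # Imported inside the function so the file can be inspected
    # on machines without the NPU runtime installed.
    from tflite_runtime import interpreter as tflite

    # Load the VX delegate shared object symlinked earlier, and
    # attach it to the interpreter so supported ops run on the NPU.
    delegate = tflite.load_delegate("libvx_delegate.so")
    interp = tflite.Interpreter(model_path=model_path,
                                experimental_delegates=[delegate])
    interp.allocate_tensors()

    inp = interp.get_input_details()[0]
    interp.set_tensor(inp["index"], preprocess(image))
    interp.invoke()

    out = interp.get_output_details()[0]
    return interp.get_tensor(out["index"])
```

Ops the delegate cannot map to the NPU fall back to the CPU, so the script still runs end to end even for partially supported models.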
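Before troubleshooting delegate errors, it can help to confirm the NPU driver is actually loaded. A simple check (assuming the VeriSilicon driver module is named ''galcore'', as on the supported BSP kernels):

```shell
# Print the galcore module entry if the NPU driver is loaded,
# otherwise emit a diagnostic message.
lsmod 2>/dev/null | grep -i galcore || echo "galcore module not loaded"
```

If the module is missing, verify you are on one of the kernel/OS combinations listed in the table above.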