Khadas Edge and VIM boards are powered by Arm Cortex-A CPUs and Mali GPUs. We can use this compute power to run TFLite models with the ArmNN library by loading it as a TFLite interpreter delegate. The delegate is provided by Arm in the ARM-software/armnn repository, and the examples below make use of it.
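To give a sense of how the delegate mechanism works, here is a minimal sketch assuming the ArmNN delegate library and a model file are already in the working directory. File names and option keys are illustrative; the example scripts in the repository handle this for you.

```python
# Minimal sketch: load the ArmNN external delegate and attach it to a
# TFLite interpreter. File names and option values are assumptions.
import tflite_runtime.interpreter as tflite

# Load the ArmNN delegate shared library extracted from the ArmNN release.
armnn_delegate = tflite.load_delegate(
    "libarmnnDelegate.so.29",
    options={"backends": "GpuAcc,CpuAcc", "logging-severity": "warning"},
)

# Supported ops in the model are routed through the ArmNN backends.
interpreter = tflite.Interpreter(
    model_path="mobilenet_v1.tflite",            # assumed model file name
    experimental_delegates=[armnn_delegate],
)
interpreter.allocate_tensors()
```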
This guide is for Ubuntu 22.04 only. The ArmNN library depends on OpenCL, so make sure you are using one of the following platforms to ensure the driver is present.
Board | Linux Kernel (BSP) | OS |
---|---|---|
VIM3 | 4.9, 5.15 | Ubuntu 22.04 |
VIM3L | 4.9, 5.15 | Ubuntu 22.04 |
VIM4 | 5.4, 5.15 | Ubuntu 22.04 |
Edge2 | 5.10 | Ubuntu 22.04 |
You can refer to the OpenCL doc for more info.
Clone the examples repository sravansenthiln1/armnn_tflite:
$ git clone https://github.com/sravansenthiln1/armnn_tflite
$ cd armnn_tflite
$ sudo apt-get install python3-pip
$ pip3 install numpy pillow
$ pip3 install --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime
$ wget -O ArmNN-aarch64.tgz https://github.com/ARM-software/armnn/releases/download/v23.08/ArmNN-linux-aarch64.tar.gz
$ mkdir libs
$ tar -xvf ArmNN-aarch64.tgz -C libs
Taking MobileNet v1 as an example:
$ cd mobilenet_v1
Link the ArmNN libraries into the example directory so the delegate and its dependencies can be found at runtime:
$ sudo ln ../libs/libarmnnDelegate.so.29.0 libarmnnDelegate.so.29
$ sudo ln ../libs/libarmnn.so.33.0 libarmnn.so.33
$ python3 run_inference.py
You can choose whether the CPU or GPU is used to accelerate inference. Change the BACKEND variable in the code to use either the GpuAcc (GPU) or CpuAcc (CPU) backend.
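As a rough illustration, the backend string is passed to the delegate when it is loaded. The option keys below are assumptions based on the ArmNN delegate's documented options; the exact code in run_inference.py may differ.

```python
# Sketch of switching backends: "GpuAcc" targets the Mali GPU via OpenCL,
# "CpuAcc" targets the Cortex-A CPU. Option keys are assumptions.
import tflite_runtime.interpreter as tflite

BACKEND = "GpuAcc"   # change to "CpuAcc" to run on the CPU instead

armnn_delegate = tflite.load_delegate(
    "libarmnnDelegate.so.29",
    options={"backends": BACKEND, "logging-severity": "warning"},
)
```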