This article summarizes how to set up a Raspberry Pi 4 with a Coral USB Accelerator for machine learning on edge devices.
**Raspberry Pi 4** is the latest generation of the current models. CPU and memory performance have been greatly improved, and it also supports USB 3.0.
**Coral USB Accelerator** is a dedicated ASIC developed by Google for machine learning on edge devices.
This dedicated ASIC is called the Edge TPU and works with the TensorFlow Lite framework, which specializes in running inference at the edge. Note that frameworks other than TensorFlow Lite are not supported. In addition, a USB 3.0 connection is required to get the maximum performance out of the Coral USB Accelerator. In short, the Raspberry Pi 4 is a great way to get the most out of the Coral USB Accelerator.
Although the Raspberry Pi 4 is not officially supported, Google provides pre-compiled Raspberry Pi images called **EdgeTPU Platforms** for using the Edge TPU.
The image comes with pre-trained models, such as an image classification model, which are useful for machine learning on the Raspberry Pi.
I will explain how to use an image from EdgeTPU Platforms on the Raspberry Pi 4.
Please note that on the Raspberry Pi 4, the power connector has changed to **USB Type-C** and the HDMI output to **Micro HDMI**.
EdgeTPU Platforms
**If you boot from the downloaded image, you need to expand the file system to the full size of the SD card.**
The root file system (/) before expansion looks like this:
pi@raspberrypi:~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root 3.3G 3.0G 100M 97% /
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.6M 1.9G 1% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mmcblk0p1 253M 41M 213M 16% /boot
tmpfs 386M 0 386M 0% /run/user/1000
Run the following command to start raspi-config and follow the interactive prompts.
$ sudo raspi-config
First, select **"7 Advanced Options"**.
Then select **"A1 Expand Filesystem"**.
Select OK.
Execute the following command to restart.
$ sudo systemctl reboot
After rebooting, you can see that the root file system (/) has been expanded.
pi@raspberrypi:~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root 29G 3.1G 24G 12% /
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.6M 1.9G 1% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mmcblk0p1 253M 41M 213M 16% /boot
tmpfs 386M 0 386M 0% /run/user/1000
Coral USB Accelerator
To use the Coral USB Accelerator, you need to do the following:
- [x] Install the Edge TPU runtime
- [x] Install the TensorFlow Lite library
To communicate with the Edge TPU, you need to install the Edge TPU runtime. A library is also required to use TensorFlow Lite from Python. In this article, we install the lightweight tflite_runtime library to run TensorFlow Lite models in Python.
Open the box containing the Coral USB Accelerator.
Connect the Coral USB Accelerator to the Raspberry Pi 4. The USB 3.0 ports are the ones with the blue connectors.
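Before moving on, you can check whether the accelerator actually came up on a USB 3.0 (SuperSpeed, 5000 Mb/s) link. The snippet below is a minimal sketch of my own, assuming the Linux sysfs layout and the vendor IDs 1a6e (the device before the Edge TPU runtime has initialized it) and 18d1 (afterwards); verify the IDs on your system with lsusb.

```python
# usb_speed_check.py - my own sketch, not from the official guide.
# Reads sysfs to see at what speed the accelerator enumerated.
import glob
import os

CORAL_VENDOR_IDS = {"1a6e", "18d1"}  # assumed IDs; confirm with `lsusb`

for dev in glob.glob("/sys/bus/usb/devices/*"):
    vid_path = os.path.join(dev, "idVendor")
    if not os.path.exists(vid_path):
        continue  # skip interfaces and other entries without a vendor ID
    with open(vid_path) as f:
        vid = f.read().strip()
    if vid in CORAL_VENDOR_IDS:
        with open(os.path.join(dev, "speed")) as f:
            speed = f.read().strip()  # "480" = USB 2.0, "5000" = USB 3.0
        print(f"{dev}: vendor {vid}, link speed {speed} Mb/s")
```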
- Add the repository
$ echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
$ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo apt-get update
- Install the Edge TPU runtime
$ sudo apt-get install libedgetpu1-std
- Download the library
$ wget https://dl.google.com/coral/python/tflite_runtime-1.14.0-cp37-cp37m-linux_armv7l.whl
- Install the library
$ pip3 install tflite_runtime-1.14.0-cp37-cp37m-linux_armv7l.whl
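At this point a quick sanity check is possible. The snippet below is my own minimal sketch, not part of the official guide: it imports tflite_runtime and tries to load the Edge TPU delegate (libedgetpu.so.1) installed by the libedgetpu1-std package above.

```python
# check_edgetpu.py - my own sanity check, not from the official guide.
from tflite_runtime.interpreter import load_delegate

try:
    load_delegate("libedgetpu.so.1")  # provided by libedgetpu1-std
    print("Edge TPU delegate loaded successfully.")
except ValueError as err:
    print("Could not load the Edge TPU delegate:", err)
```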
Following Get started with the USB Accelerator, let's run image classification.
$ python3 classify_image.py --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels models/inat_bird_labels.txt --input images/parrot.jpg
INFO: Initialized TensorFlow Lite runtime.
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
18.6ms
4.6ms
4.6ms
4.6ms
4.6ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.76562
I was able to run inference on the Edge TPU using TensorFlow Lite. For comparison, the output when connected over USB 2.0 is shown below; you can see that it is slower than over USB 3.0.
INFO: Initialized TensorFlow Lite runtime.
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
120.5ms
11.5ms
11.5ms
11.7ms
11.6ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.76562
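For reference, the sketch below shows roughly what the classify_image.py sample does internally, using the tflite_runtime API installed earlier. It is my own simplification, not the sample's actual code; in particular, I assume the labels file contains one label per line, while the real sample parses it more carefully.

```python
# classify_sketch.py - my simplified take on the sample, for illustration only.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL = "models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite"
LABELS = "models/inat_bird_labels.txt"
IMAGE = "images/parrot.jpg"

# Create an interpreter that offloads supported ops to the Edge TPU.
interpreter = Interpreter(
    model_path=MODEL,
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

# Resize the input image to the shape the model expects (224x224 here).
input_details = interpreter.get_input_details()[0]
_, height, width, _ = input_details["shape"]
image = Image.open(IMAGE).convert("RGB").resize((width, height))

# Copy the image into the input tensor and run inference.
interpreter.set_tensor(input_details["index"], np.expand_dims(image, 0))
interpreter.invoke()

# Read the quantized scores and dequantize with the output scale/zero point.
output_details = interpreter.get_output_details()[0]
scores = np.squeeze(interpreter.get_tensor(output_details["index"]))
scale, zero_point = output_details["quantization"]
top = int(np.argmax(scores))

# Assumption: one label per line in the labels file.
with open(LABELS) as f:
    labels = [line.strip() for line in f]
print(labels[top], scale * (int(scores[top]) - zero_point))
```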
Now you can run machine learning on edge devices.
Next time, I will create a model with TensorFlow and convert it to TensorFlow Lite.
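As a small preview, the conversion could look roughly like the sketch below. This is my own assumption-laden sketch, not code from this article: it uses a stock Keras MobileNetV2 and random data in place of a real representative dataset, and the resulting .tflite file still has to be compiled with the separate edgetpu_compiler tool before the Edge TPU can run it.

```python
# convert_sketch.py - my own preview sketch of a full-integer TFLite conversion.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder: in practice, yield a few hundred real preprocessed inputs.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

# Example model only; substitute your own trained Keras model.
model = tf.keras.applications.MobileNetV2(weights=None, input_shape=(224, 224, 3))

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())
```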