[DOCKER] Let's run the OpenVINO sample program on the OKI AI edge computer "AE2100" Ubuntu container version (3)

* This article is for the Ubuntu container (ubuntu_openvino_2020R3.tar) of the AE2100.

Overview

• This article describes the setup of the OKI AI edge computer "AE2100".
• We connect a webcam to the AE2100 and perform real-time object detection.

Introduction

"Can you identify a person with a dog mask? AI edge computer" AE2100 "great experiment" is available on YouTube. Please have a look! !! This time, I would like to run a sample program that detects objects in the same way as the content of this video.

This is the third article in the Ubuntu edition, and its content is almost the same as "Let's run the OpenVINO sample on the OKI AI edge computer "AE2100" (3)".

Environment

The container version of AE2100 is "ubuntu_openvino_2020R3.tar".

What you need this time is a USB 2.0 webcam. In this article, we used a "Logitech HD Webcam C270n" to verify operation.

(Image: network-ae2100-camera.png)

It is assumed that VcXsrv has already been installed on the Windows PC as described in the first article of the Ubuntu edition: "Let's run the OpenVINO sample program on the OKI AI edge computer "AE2100" Ubuntu container version (1)".

Demos build (development environment)

See the previous article (Ubuntu edition, part 2): https://qiita.com/TWAT/items/e7cd34f8c97f895c39b2#demos%E3%81%AE%E3%83%93%E3%83%AB%E3%83%89%E9%96%8B%E7%99%BA%E7%92%B0%E5%A2%83%E5%81%B4

Download model file (development environment)

Activate the Python virtual environment.

  • This assumes that a Python virtual environment has been created as described on page 13 of the "AE2100 Series SDK Instruction Manual -Deep Learning Edition-" (version 1.2).
# cd /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader
# source /opt/intel/openvino/bin/setupvars.sh
# source /opt/intel/openvino/deployment_tools/model_optimizer/venv/bin/activate

Download the model file. The model used this time is "ssd_mobilenet_v2", which has been trained on the MS COCO dataset (80 classes).

(venv)# python3 downloader.py --name ssd_mobilenet_v2_coco

Convert the downloaded model file to IR (Intermediate Representation) format.

(venv)# python3 converter.py --name ssd_mobilenet_v2_coco --precisions FP16
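
To confirm that the conversion succeeded, you can list the output directory; the FP16 IR (ssd_mobilenet_v2_coco.xml and .bin) should appear there. The path below assumes you are still in the downloader directory.

(venv)# ls public/ssd_mobilenet_v2_coco/FP16/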

Copy the files required for execution into a working folder.

# cd
# mkdir object_detection_demo
# cd object_detection_demo
# cp /root/omz_demos_build/intel64/Release/object_detection_demo_ssd_async ./
# cp /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/public/ssd_mobilenet_v2_coco/FP16/* ./

Also, prepare a label file so that class names are displayed at recognition time. Create a file named "ssd_mobilenet_v2_coco.labels" and write the following contents into it. The file has 91 lines, but the number of recognized classes is 80, excluding the "background" and "no_label" entries.

ssd_mobilenet_v2_coco.labels


background
person
bicycle
car
motorcycle
airplane
bus
train
truck
boat
traffic_light
fire_hydrant
no_label
stop_sign
parking_meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
no_label
backpack
umbrella
no_label
no_label
handbag
tie
suitcase
frisbee
skis
snowboard
sports_ball
kite
baseball_bat
baseball_glove
skateboard
surfboard
tennis_racket
bottle
no_label
wine_glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot_dog
pizza
donut
cake
chair
couch
potted_plant
bed
no_label
dining_table
no_label
no_label
toilet
no_label
tv
laptop
mouse
remote
keyboard
cell_phone
microwave
oven
toaster
sink
refrigerator
no_label
book
clock
vase
scissors
teddy_bear
hair_drier
toothbrush

Check the prepared files.

# ls
object_detection_demo_ssd_async  ssd_mobilenet_v2_coco.mapping
ssd_mobilenet_v2_coco.bin        ssd_mobilenet_v2_coco.xml
ssd_mobilenet_v2_coco.labels     thread.info
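
As an optional sanity check, you can also confirm that the label file really has 91 lines, and that 80 class names remain once the "background" and "no_label" entries are excluded (wc reports 91 assuming the file ends with a newline; grep should report 80):

# wc -l ssd_mobilenet_v2_coco.labels
# grep -vcE 'background|no_label' ssd_mobilenet_v2_coco.labels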

Pack the folder into a tar file.

# cd ..
# tar cvf object_detection_demo.tar object_detection_demo

Log in to the AE2100 with Tera Term and transfer the above tar file to the AE2100 by drag and drop.
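
If you prefer the command line to drag and drop, the same transfer can also be done with scp from the development environment, assuming SSH access to the AE2100 is enabled (the address below is only a placeholder; replace it with your AE2100's actual IP address):

# scp object_detection_demo.tar root@<AE2100 IP address>:/root/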

Web camera settings (AE2100 side)

Connect the webcam to a USB port on the AE2100. Next, log in to the host OS of the AE2100 with Tera Term and check whether the device is recognized with the lsusb command. If it is recognized, a line such as "Bus 001 Device 005: ID 046d:0825 Logitech, Inc. Webcam C270" should appear.

root@ae2100:~# lsusb
Bus 001 Device 002: ID 0403:6015 Future Technology Devices International, Ltd Bridge(I2C/SPI/UART/FIFO)
Bus 001 Device 004: ID 0403:6014 Future Technology Devices International, Ltd FT232H Single HS USB-UART/FIFO IC
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 004 Device 003: ID 03e7:f63b Intel
Bus 004 Device 002: ID 03e7:f63b Intel
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 003: ID 2c42:5114
Bus 001 Device 005: ID 046d:0825 Logitech, Inc. Webcam C270
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Create /dev/video0.

root@ae2100:~# mknod /dev/video0 c 81 0
root@ae2100:~# chmod 666 /dev/video0
root@ae2100:~# chown root.video /dev/video0
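
The major number 81 and minor number 0 are the standard Video4Linux values for the first capture device. If you want to double-check them, the kernel exposes the numbers through sysfs (assuming the camera driver has registered the device), and ls should show the node you just created as a character device owned by root.video:

root@ae2100:~# cat /sys/class/video4linux/video0/dev
root@ae2100:~# ls -l /dev/video0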

Start the container. At this time, add "--device=/dev/video0:/dev/video0" to the arguments so that the webcam can be used from inside the container.

root@ae2100:~# docker run --device /dev/dri --device=/dev/video0:/dev/video0 --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp --name ubuntu-openvino -d ubuntu:openvino_2020R3 /sbin/init
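
To make sure the device was actually passed through, you can look for /dev/video0 from inside the running container, for example:

root@ae2100:~# docker exec ubuntu-openvino ls -l /dev/video0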

File copy (AE2100 side)

Copy the file from the host to the container.

root@ae2100:~# docker cp object_detection_demo.tar ubuntu-openvino:/root/

Enter the container.

root@ae2100:~# docker exec -it ubuntu-openvino /bin/bash 

Extract the tar.

# cd
# tar xvf object_detection_demo.tar

Installation of dependent packages (AE2100 side)

See the previous article (Ubuntu edition, part 2): https://qiita.com/TWAT/items/e7cd34f8c97f895c39b2#%E4%BE%9D%E5%AD%98%E3%83%91%E3%83%83%E3%82%B1%E3%83%BC%E3%82%B8%E3%81%AE%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%BC%E3%83%ABae2100%E5%81%B4

Execution of object_detection_demo_ssd_async (AE2100 side)

Set the OpenVINO environment variables.

# source /opt/intel/openvino/bin/setupvars.sh

Specify the IP address of the Windows PC on which the window will be displayed.

# export DISPLAY=192.168.100.101:0.0
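
Optionally, you can first verify that X forwarding to VcXsrv works with a small test client. This assumes the container can reach the internet to install the x11-apps package; skip this check if it cannot.

# apt-get update && apt-get install -y x11-apps
# xeyes

If a small "xeyes" window appears on the Windows PC, the DISPLAY setting is correct; close the window and continue.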

Now that you are ready, try running "object_detection_demo_ssd_async".

  • Start XLaunch on the Windows PC in advance, as described in the first article.
# cd /root/object_detection_demo
# ./object_detection_demo_ssd_async -i cam -m ssd_mobilenet_v2_coco.xml -d HDDL
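
If no webcam is at hand, the demo's usage also allows a video file to be given instead of the camera input; the path below is only a placeholder for a video you have prepared yourself:

# ./object_detection_demo_ssd_async -i <path to video file> -m ssd_mobilenet_v2_coco.xml -d HDDL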

When a stuffed animal was shown to the camera, it was classified as "teddy_bear".

(Image: teddy-bear.png)

When I showed a vase I had on hand, it was classified as "vase". Spot on!

(Image: vase.png)

Pressing the "Tab" key in the demo window switches between asynchronous processing (Async) and synchronous processing (Sync).

With asynchronous processing (Async), the frame rate improves.

Press the "Esc" key to finish the process.

Summary

This time, I connected a webcam to the AE2100 and tried real-time object detection with the Ubuntu container version.

The AE2100 and OpenVINO come with various other sample applications, so please try them out!

- Let's run the OpenVINO sample program on the OKI AI edge computer "AE2100" Ubuntu container version (1)
- Let's run the OpenVINO sample program on the OKI AI edge computer "AE2100" Ubuntu container version (2)
