[DOCKER] Let's run the OpenVINO sample program on the OKI AI edge computer "AE2100" Ubuntu container version (2)

**Note: This article is for the AE2100 Ubuntu container (ubuntu_openvino_2020R3.tar).**

Overview

  • This article explains how to set up the OKI AI edge computer "AE2100".
  • We run a demo program that estimates the pose of a person on the AE2100.

Introduction

OpenVINO ships with the source code for Samples and Demos. The Demos are example applications that use the inference engine for a variety of use cases (e.g. human pose estimation, object detection, face detection, gaze estimation). See the list of Inference Engine Demos for the full set.

This time, we will build the Demos and run the Human Pose Estimation C++ Demo, which estimates the pose of a person, on the AE2100. See the details of the "Human Pose Estimation" demo for more information.

The main change from "Let's run the OpenVINO sample program on the OKI AI edge computer "AE2100" (2)", the second article in the original series, is how the Demos are built. (With OpenVINO 2020, you no longer need to edit build_demos.sh or use libcpu_extension.so.)

Also, human_pose_estimation_demo now supports asynchronous execution, so we will compare the behavior of synchronous and asynchronous execution.

Environment

The AE2100 container version used here is "ubuntu_openvino_2020R3.tar". We build the Demos in the development environment and run them on the AE2100.

For setting up the development environment, refer to "AE2100 Series SDK Instruction Manual - Deep Learning Edition -" (version 1.2), p. 10.

Make sure the development environment has at least 4 GB of memory; with insufficient memory, the build may fail partway through.

It is assumed that VcXsrv is already installed on the Windows PC, following the first article: Let's run the OpenVINO sample program on the OKI AI edge computer "AE2100" Ubuntu container version (1)

Build Demos (development environment side)

Build Demos that comes with OpenVINO in the development environment.

If you installed OpenVINO with the default settings, the Demos are located in the following directory: /opt/intel/openvino/inference_engine/demos

Set the OpenVINO environment variables.

# source /opt/intel/openvino/bin/setupvars.sh

Execute build_demos.sh to build.

# cd /opt/intel/openvino/inference_engine/demos
# ./build_demos.sh

After the build completes, the executables are output to the following directory: /root/omz_demos_build/intel64/Release

Download model file (development environment side)

Here, the model file is downloaded using the model_downloader that comes with OpenVINO.

  • You can also download the model file directly from the following site with a browser. https://download.01.org/opencv/2020/openvinotoolkit/2020.3/open_model_zoo/models_bin/1/human-pose-estimation-0001/FP16/

Change to the directory where model_downloader is located.

# cd /opt/intel/openvino/deployment_tools/tools/model_downloader

Activate the Python virtual environment.

  • This assumes a Python virtual environment was created as described on page 13 of "AE2100 Series SDK Instruction Manual - Deep Learning Edition -" (version 1.2).
# source /opt/intel/openvino/deployment_tools/model_optimizer/venv/bin/activate

Install the Python packages required for model_downloader to work.

(venv)# pip3 install -r requirements.in

Set the OpenVINO environment variables.

(venv)# source /opt/intel/openvino/bin/setupvars.sh

Let's output a list of model files that can be obtained with model_downloader.

(venv)# python3 downloader.py --print_all
action-recognition-0001-decoder
action-recognition-0001-encoder
age-gender-recognition-retail-0013
driver-action-recognition-adas-0002-decoder
driver-action-recognition-adas-0002-encoder
emotions-recognition-retail-0003
face-detection-adas-0001
face-detection-adas-binary-0001
face-detection-retail-0004
face-detection-retail-0005
face-reidentification-retail-0095
(Omitted below)

This time, we will download "human-pose-estimation-0001", a model for pose estimation.

(venv)# python3 downloader.py --name human-pose-estimation-0001  --precisions FP16
(venv)# deactivate

The model files are output to the following location: ./intel/human-pose-estimation-0001

File copy to AE2100

Here, we transfer the executable and model files prepared in the development environment to the AE2100.

First, gather the built executable and the model files into a single folder on the development environment side.

# cd
# mkdir human_pose
# cd human_pose
# cp /root/omz_demos_build/intel64/Release/human_pose_estimation_demo ./
# cp /opt/intel/openvino/deployment_tools/tools/model_downloader/intel/human-pose-estimation-0001/FP16/* ./

Next, download the video file for which the posture is estimated.

# wget https://github.com/intel-iot-devkit/sample-videos/raw/master/one-by-one-person-detection.mp4

Check the prepared file.

# ls
human-pose-estimation-0001.bin  human-pose-estimation-0001.xml  human_pose_estimation_demo  one-by-one-person-detection.mp4

Create a tar archive.

# cd ..
# tar cvf human_pose.tar  human_pose

Log in to the AE2100 with Tera Term and transfer the tar file created in the development environment to the AE2100 by drag and drop.

After transferring the tar file to the AE2100, copy the tar file into the container.

root@ae2100:~# docker cp human_pose.tar ubuntu-openvino:/root/

Enter the container with the following command. (If the container is not started, start it referring to "AE2100 Series SDK Instruction Manual-Deep Learning Edition-" (version: 1.2) P.20.)

root@ae2100:~# docker exec -it ubuntu-openvino /bin/bash

Extract the tar file inside the container.

# cd
# tar xvf human_pose.tar

Installation of dependent packages (AE2100 side)

  • If you have already installed them, skip this step.

The demo requires ffmpeg and GTK+ to run, so install these dependencies if you have not already. An internet connection is required.

# cd /opt/intel/openvino/install_dependencies
# apt-get clean
# ./install_openvino_dependencies.sh

Run Human Pose Estimation (AE2100 side)

Finally, run the demo program "Human Pose Estimation" on the AE2100.

Set the OpenVINO environment variables in the execution environment container.

# source /opt/intel/openvino/bin/setupvars.sh

Since the demo displays a window, start XLaunch on the Windows PC as described in the previous article, then set the IP address of the Windows PC as the display destination.

# export DISPLAY=192.168.100.101:0.0

Now that we are ready, let's run "Human Pose Estimation".

In "Let's run the OpenVINO sample program on the OKI AI edge computer "AE2100" (2)", we compared operation on the GPU and on HDDL (two Myriad X chips); this time we compare synchronous and asynchronous processing on HDDL.

In synchronous processing, frames are processed one at a time, whereas in asynchronous processing multiple frames are processed in parallel, so asynchronous execution is expected to be faster. In this demo program, asynchronous processing handles two frames in parallel.
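The effect of this pipelining can be sketched independently of OpenVINO. The snippet below is a simplified illustration, not the demo's actual code: it simulates a fixed per-frame inference latency (the 0.05 s value is made up) and compares frame-by-frame processing with a two-request pipeline, mirroring the demo's two parallel inference requests.

```python
import time
from concurrent.futures import ThreadPoolExecutor

FRAME_COUNT = 8
INFER_LATENCY = 0.05  # hypothetical seconds per frame, stands in for one inference

def infer(frame_id):
    time.sleep(INFER_LATENCY)  # simulated inference request
    return frame_id

# Synchronous: each frame waits for the previous one to finish
start = time.perf_counter()
for f in range(FRAME_COUNT):
    infer(f)
sync_elapsed = time.perf_counter() - start

# Asynchronous: two frames in flight at once, like the demo's two infer requests
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(infer, range(FRAME_COUNT)))
async_elapsed = time.perf_counter() - start

print(f"sync:  {FRAME_COUNT / sync_elapsed:.1f} fps")
print(f"async: {FRAME_COUNT / async_elapsed:.1f} fps")
```

With two frames in flight, the pipelined version finishes in roughly half the time, which matches the intuition behind the demo's asynchronous mode.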

You can switch between synchronous processing and asynchronous processing by pressing the Tab key on the execution window. You can also press the Esc key to exit.

Change to the execution directory.

# cd 
# cd human_pose

The following command runs pose estimation on the video downloaded earlier and displays the result in a window.

# ./human_pose_estimation_demo -i one-by-one-person-detection.mp4 -m human-pose-estimation-0001.xml -d HDDL

The figure below shows the result window. During synchronous processing, "SYNC" is displayed on the screen. Throughput is about 4 fps.

images_article-2_hddl.png

Next, press the Tab key in the window to switch to asynchronous processing. During asynchronous processing, "ASYNC" is displayed on the screen. Throughput is about 11 fps.

images_article-2_hddl-async.png

We confirmed that asynchronous processing is faster than synchronous processing. After checking the result, press the Esc key to close the window.
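From the throughput measured above, the speedup from asynchronous execution works out to roughly 2.75x:

```python
sync_fps = 4.0    # measured throughput with synchronous processing
async_fps = 11.0  # measured throughput with asynchronous processing
speedup = async_fps / sync_fps
print(f"asynchronous execution is about {speedup:.2f}x faster")  # about 2.75x
```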

We hope to cover how to implement asynchronous processing in a future article.

Summary

This time, we built the Demos and ran Human Pose Estimation on the AE2100. Next time, we will try object detection by connecting a webcam to the AE2100.

  • Let's run the OpenVINO sample program on the OKI AI edge computer "AE2100" Ubuntu container version (1)
  • Let's run the OpenVINO sample program on the OKI AI edge computer "AE2100" Ubuntu container version (3)
