You can see information on the various APIs provided by the NAOqi Framework by opening [Help]-[Reference API] in Choregraphe. In this article, I will explain how to use these APIs, taking the **ALPeoplePerception API** and the **ALFaceCharacteristics API** as examples.
Note that **the ALPeoplePerception API and ALFaceCharacteristics API cannot be tested with a virtual robot; an actual Pepper is required.** Please experiment with an actual Pepper at Aldebaran Atelier Akihabara. (Reservation URL: http://pepper.doorkeeper.jp/events)
ALPeoplePerception API
The **People Perception** API, which also appeared in Pepper Tutorial (6): Touch Sensor, Human Recognition, is an API that provides the ability to detect and identify people around Pepper.
When Pepper's sensors detect a person, a temporary identification ID is assigned and that information is stored in an area called `ALMemory`. An application can receive events raised by `ALPeoplePerception` via `ALMemory` and act on them, or use the information that `ALPeoplePerception` stores in `ALMemory`.
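For example, using the `ALProxy` access mechanism described later in this article, an external Python script can read one of the `ALMemory` keys maintained by `ALPeoplePerception`. This is a minimal sketch; "pepper.local" is a placeholder for your robot's address:

```python
from naoqi import ALProxy

# Connect to ALMemory on the robot ("pepper.local" is a placeholder address).
memory = ALProxy("ALMemory", "pepper.local", 9559)

# Read the list of temporary IDs that ALPeoplePerception keeps up to date.
visiblePeople = memory.getData("PeoplePerception/VisiblePeopleList")
print(visiblePeople)  # e.g. [179318]
```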
This API provides information related to human recognition, such as:
ALFaceCharacteristics API
The **Face Characteristics** API analyzes the face of each person recognized by `ALPeoplePerception` and obtains additional information such as age, gender, and degree of smiling.
`ALFaceCharacteristics` provides the following information inferred from facial features:
Each of these values is accompanied by a confidence value between 0 and 1. In addition, the following values are provided.
Before using these APIs, let's organize how to access them. There are two main ways to use the functions provided by the APIs.
Here is an overview of each method.
Many of the APIs, such as `ALPeoplePerception` and `ALFaceCharacteristics`, are provided in the form of **modules**.
Modules are implemented as Python classes, and the methods provided by an API can be accessed via the `ALProxy` class. For example, to call the `analyzeFaceCharacteristics` method provided by `ALFaceCharacteristics`, write the following Python script.
```python
faceChar = ALProxy("ALFaceCharacteristics")
faceChar.analyzeFaceCharacteristics(peopleId)
```
When executing a Python script on Pepper, such as in a Python box, pass a string giving the module name as the argument to `ALProxy`. This returns an `ALProxy` instance for accessing that module, through which the application can call the methods provided by the API.
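For reference, when running a Python script outside the robot (for example, on a development PC with the Python NAOqi SDK installed), `ALProxy` also takes the robot's address and port. A minimal sketch, with "pepper.local" as a placeholder address:

```python
from naoqi import ALProxy

# Outside a Choregraphe box, pass the robot's address and the NAOqi port (9559).
faceChar = ALProxy("ALFaceCharacteristics", "pepper.local", 9559)
```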
When I introduced memory events, I described event handling by key; the information collected by Pepper's various APIs is first aggregated in a mechanism called `ALMemory`. An application can perform the following operations on `ALMemory`.
There are the following ways to use `ALMemory`.
At the far left of the flow diagram, you can create an input that responds to memory events.
For details on this operation, refer to People Approaching.
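Outside Choregraphe, the script-level counterpart of this memory-event input is `ALMemory.subscribeToEvent`, which requires an `ALModule` subclass and a broker so that the robot can call back into the script. Below is a hedged sketch of this standard NAOqi pattern; "pepper.local" is a placeholder for your robot's address:

```python
import time
from naoqi import ALBroker, ALModule, ALProxy

class PeopleWatcher(ALModule):
    """Receives memory event callbacks from the robot."""

    def __init__(self, name):
        ALModule.__init__(self, name)
        memory = ALProxy("ALMemory")
        # Call self.onVisiblePeople whenever the event is raised.
        memory.subscribeToEvent("PeoplePerception/VisiblePeopleList",
                                name, "onVisiblePeople")

    def onVisiblePeople(self, eventName, value, *_args):
        # value is the list of temporary person IDs, e.g. [179318]
        print("%s => %s" % (eventName, value))

# The broker makes this script reachable from the robot for callbacks.
broker = ALBroker("myBroker", "0.0.0.0", 0, "pepper.local", 9559)
# The instance must be held in a variable whose name matches the module name.
watcher = PeopleWatcher("watcher")
time.sleep(30)  # watch events for a while
broker.shutdown()
```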
By using the boxes under Memory in the advanced box library, you can get and set values, monitor memory events, and so on by combining boxes.
In this article, I will show how to use these boxes, taking **Get a list of people** as an example.
Since `ALMemory` is itself implemented as a module, as described above, it can also be accessed via `ALProxy` as follows.
```python
memory = ALProxy("ALMemory")
ageData = memory.getData("PeoplePerception/Person/%d/AgeProperties" % peopleId)
```
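Besides `getData`, `ALMemory` also provides write operations such as `insertData` and `raiseEvent`. A minimal hedged sketch; the `MyApp/...` keys are arbitrary example keys, and this assumes the script runs on the robot (e.g. inside a Python box), like the snippet above:

```python
memory = ALProxy("ALMemory")

# Store and read back an application-defined key/value pair.
memory.insertData("MyApp/Counter", 42)   # "MyApp/Counter" is an arbitrary example key
print(memory.getData("MyApp/Counter"))   # => 42

# Raise a memory event that subscribers (boxes, scripts) can react to.
memory.raiseEvent("MyApp/SomethingHappened", 1)
```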
In this article, we will use **Age estimation** (described below) as an example of getting a value this way.
The sample projects introduced here are available at https://github.com/Atelier-Akihabara/pepper-face-characteristics-example. There are several ways to get the files from GitHub, but one of the easiest is to download the archive from the Download ZIP link.
The resulting archive contains two project folders, **visible-people-list** and **get-age**. Each folder contains a file with the extension .pml; double-click it and Choregraphe will start, letting you use the project.
I will explain the key points of each sample project.
First, let's check how Pepper recognizes people by using the `PeoplePerception/VisiblePeopleList` event to **get a list of the people currently visible to Pepper**.
The sample project is **visible-people-list**. Among the files obtained from GitHub, double-click visible-people-list.pml in the visible-people-list folder to open it.
It is created by the following procedure.
Click the **Add Memory Event [+] button on the left side of the flow diagram**, enter **PeoplePerception [A]** in the filter, and check **PeoplePerception/VisiblePeopleList [B]**.
Place the following boxes on the flow diagram:
Set the connections and parameters for each box as follows:
Run this project and check the messages that appear in the Log Viewer (http://qiita.com/Atelier-Akihabara/items/7a898f5e4d878b1ad889#-%E5%8F%82%E8%80%83%E3%82%A8%E3%83%A9%E3%83%BC%E3%81%AE%E8%A1%A8%E7%A4%BA%E3%81%A8%E3%83%AD%E3%82%B0%E3%83%93%E3%83%A5%E3%83%BC%E3%82%A2). As people move around Pepper, you should see logs similar to the following.
```
[INFO ] behavior.box :onInput_message:27 _Behavior__lastUploadedChoregrapheBehaviorbehavior_1790002616__root__AgeDetection_5__Log_1: Get: []
[INFO ] behavior.box :onInput_message:27 _Behavior__lastUploadedChoregrapheBehaviorbehavior_1790002616__root__AgeDetection_5__Log_2: Raised: [179318]
```
You can see that the `PeoplePerception/VisiblePeopleList` event provides a list of the identifiers of the people visible to Pepper, such as `[179318]`.
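As a hedged sketch of how such a list could be consumed inside a Python box (the box, input, and output names here are illustrative assumptions, not the sample project's actual definitions):

```python
class MyClass(GeneratedClass):
    def onInput_onPeopleList(self, peopleList):
        # peopleList is the value of PeoplePerception/VisiblePeopleList,
        # e.g. [179318] -- one temporary ID per currently visible person.
        for peopleId in peopleList:
            self.logger.info("Visible person ID: %d" % peopleId)
        self.onStopped()  # assumed output of this illustrative box
```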
Next, let's use `ALFaceCharacteristics` to get facial information for the people indicated by these identifiers.
Next, let's get the value of `PeoplePerception/Person/<ID>/AgeProperties`, which is set by the analysis process of `ALFaceCharacteristics`. Here, as an example, we will have Pepper **estimate the age of a person it finds and say "You are about ~ years old"**.
In the previous example, we got the IDs of all the people Pepper found; here we will use the Basic Awareness function instead. We'll use the Trackers > Basic Awareness box in the standard box library to track people and get the ID of the tracked person from its `HumanTracked` output.
The sample project is **get-age**. Among the files obtained from GitHub, double-click get-age.pml in the get-age folder to open it.
It is created by the following procedure.
Create a Get Age box as an empty Python box. This time, the input/output configuration is as follows. Please refer to Python Box Concepts for how to create it.
Double-click the Get Age box to open the script editor and write a Python script like the one below.
```python
class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)

    def onLoad(self):
        # Create proxies to the modules used by this box
        self.memory = ALProxy("ALMemory")
        self.faceChar = ALProxy("ALFaceCharacteristics")

    def onUnload(self):
        pass

    def onInput_onPeopleDetected(self, peopleId):
        # Ignore invalid (negative) IDs
        if peopleId < 0:
            return
        # Ask ALFaceCharacteristics to analyze this person's face
        r = self.faceChar.analyzeFaceCharacteristics(peopleId)
        if not r:
            self.onUnknown()
            return
        # Read the analysis result: [estimated age, confidence]
        ageData = self.memory.getData("PeoplePerception/Person/%d/AgeProperties" % peopleId)
        self.logger.info("Age Properties: %d => %s" % (peopleId, ageData))
        if ageData and len(ageData) == 2:
            self.onAge(ageData[0])
        else:
            self.onUnknown()
```
This code calls the `analyzeFaceCharacteristics` method of `ALFaceCharacteristics` for the ID of the person given to the `onPeopleDetected` input. If the call succeeds, it gets the value of `PeoplePerception/Person/<ID>/AgeProperties` from `ALMemory` and triggers the `onAge` output with the estimated age as its argument.
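The sample treats `AgeProperties` as a two-element list, and as noted earlier each estimate comes with a confidence between 0 and 1. Assuming the second element is that confidence, one possible refinement is to fall back to `onUnknown` when the estimate is unreliable. A hedged variant of the method above; the 0.3 threshold is an arbitrary illustrative choice:

```python
    def onInput_onPeopleDetected(self, peopleId):
        if peopleId < 0:
            return
        if not self.faceChar.analyzeFaceCharacteristics(peopleId):
            self.onUnknown()
            return
        ageData = self.memory.getData(
            "PeoplePerception/Person/%d/AgeProperties" % peopleId)
        # Only announce the age when the confidence (ageData[1]) is high enough.
        if ageData and len(ageData) == 2 and ageData[1] >= 0.3:
            self.onAge(ageData[0])
        else:
            self.onUnknown()
```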
Change the Type of the Say Text box to Number and customize the Say Text box (http://qiita.com/Atelier-Akihabara/items/8df3e81d286e2e15d9b6#%E8%A3%9C%E8%B6%B3say-text%E3%83%9C%E3%83%83%E3%82%AF%E3%82%B9%E3%81%AE%E3%82%AB%E3%82%B9%E3%82%BF%E3%83%9E%E3%82%A4%E3%82%BA) so that it changes what Pepper says:
```python
sentence += "You are about %d years old" % int(p)
```
Connect the boxes placed in steps 1 and 3 as shown below.
Change the contents of the Say box connected to the `onUnknown` output to something like "I don't know."
When you run this application, Basic Awareness is launched and Pepper turns its head in response to its surroundings. When it finds a person, that person's ID is passed to the Get Age box and Pepper says "You are about ~ years old." Even if it finds a person, if the face is difficult to recognize, the analysis will fail and Pepper will say "I don't know."
In this way, by using the information obtained from the **ALPeoplePerception API** and the **ALFaceCharacteristics API**, you can build an application that estimates the age of a person recognized by People Perception. Even when a function is not provided in the box library, many features can be realized by accessing the APIs via `ALProxy` and `ALMemory`.