Age recognition using Pepper's API

You can browse the various APIs provided by the NAOqi Framework by opening [Help]-[Reference API] in Choregraphe. In this article, I will explain how to use these APIs, taking the **ALPeoplePerception API** and the **ALFaceCharacteristics API** as examples.

Note that **the ALPeoplePerception API and the ALFaceCharacteristics API cannot be tested with a virtual robot; an actual Pepper is required.** Please try them out on a real Pepper at Aldebaran Atelier Akihabara. (Reservation URL: http://pepper.doorkeeper.jp/events)

APIs introduced in this article

ALPeoplePerception API

The **People Perception** API, as we saw in Pepper Tutorial (6): Touch Sensor, Human Recognition, provides the ability to detect and identify people around Pepper.

When a person is detected by Pepper's sensors, a temporary identification ID is assigned and the information is stored in an area called `ALMemory`. The application can receive events raised by `ALPeoplePerception` via `ALMemory` and act on them, or use the information that `ALPeoplePerception` stores in `ALMemory`.

This API provides information related to human recognition, such as the identification IDs of the people Pepper currently sees.

ALFaceCharacteristics API

The **Face Characteristics** API analyzes the face of each person recognized by `ALPeoplePerception` and obtains additional information such as age, gender, and degree of smiling.

`ALFaceCharacteristics` provides information inferred from facial features, such as estimated age, gender, and degree of smiling.

Each of these values also comes with a confidence value between 0 and 1. In addition, values related to facial expression are provided.

How to use the APIs

First, let's organize how to access these APIs. There are two main ways to use the functions they provide:

  1. Module method invocation via `ALProxy`
  2. Getting events and values via `ALMemory`

Here, we will give an overview of each method.

Module method invocation via ALProxy

Many of the APIs, such as `ALPeoplePerception` and `ALFaceCharacteristics`, are provided in the form of **modules**.

Modules are implemented as Python classes, and the methods provided by an API can be accessed via the `ALProxy` class. For example, to call the `analyzeFaceCharacteristics` method provided by `ALFaceCharacteristics`, write a Python script like the following.

    # Create a proxy to the ALFaceCharacteristics module and
    # run face analysis for the person with the given ID
    faceChar = ALProxy("ALFaceCharacteristics")
    faceChar.analyzeFaceCharacteristics(peopleId)

When executing a Python script on Pepper, such as in a Python box, you specify a string containing the module name as the argument to `ALProxy`. This gives you an `ALProxy` instance for accessing that module, and the application can then call the methods provided by the API through it.
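For reference, the same modules can also be accessed from outside Choregraphe, for example from a development PC with the Python NAOqi SDK installed, by passing Pepper's address and port to `ALProxy`. The following is only a minimal sketch; the IP address is a placeholder and must be replaced with your own Pepper's address.

    # Minimal sketch: connecting to Pepper's modules from an external machine
    from naoqi import ALProxy

    PEPPER_IP = "192.168.1.10"  # placeholder -- replace with your Pepper's IP address
    PEPPER_PORT = 9559          # default NAOqi port

    # Create proxies to the modules over the network
    faceChar = ALProxy("ALFaceCharacteristics", PEPPER_IP, PEPPER_PORT)
    memory = ALProxy("ALMemory", PEPPER_IP, PEPPER_PORT)

    # The same methods as in a Python box become available, for example:
    # faceChar.analyzeFaceCharacteristics(peopleId)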

Get events and values via ALMemory

When I introduced Memory Event, I described event handling by key; the information collected by Pepper's various APIs is first aggregated in a mechanism called `ALMemory`. The application can read and write the values held in `ALMemory` and subscribe to the events raised there.

There are the following ways to use `ALMemory`.

Event monitoring by flow diagram

At the far left of the flow diagram, you can create an input that responds to memory events.

flow-memory-event.png

For details on how to do this, refer to People approaching.

Value acquisition and event monitoring with the box library

By using the boxes under Memory in the advanced box library, you can get and set values, monitor memory events, and so on, simply by combining boxes.

memory-boxes.png

In this article, I will show how to use these boxes, taking **Get a list of people** as an example.

Value acquisition and event monitoring with Python scripts

Since `ALMemory` is also implemented as a module as described above, it can be accessed via `ALProxy` as follows.

    # Get the age estimation result stored by ALFaceCharacteristics
    # for the person with the given ID
    memory = ALProxy("ALMemory")
    ageData = memory.getData("PeoplePerception/Person/%d/AgeProperties" % peopleId)
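As a supplementary sketch (assuming it runs inside a Choregraphe Python box), values can be read with `getData` and written with `insertData`; the key "MyApp/VisibleCount" below is only an example name, not an official key.

    def onInput_onStart(self):
        # Read the list of people currently visible to Pepper
        # (PeoplePerception/VisiblePeopleList is maintained by ALPeoplePerception)
        memory = ALProxy("ALMemory")
        visible = memory.getData("PeoplePerception/VisiblePeopleList")
        self.logger.info("Visible people: %s" % visible)

        # Applications can also store their own values under an arbitrary key
        memory.insertData("MyApp/VisibleCount", len(visible) if visible else 0)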

In this article, we will take **Age Estimate** as an example of getting a value this way.

Trying it out

Sample project location

The projects introduced here are available at https://github.com/Atelier-Akihabara/pepper-face-characteristics-example. There are several ways to get the files from GitHub; one of the easiest is to download the archive via the Download ZIP link.

The extracted files contain folders for two projects, **visible-people-list** and **get-age**. Each folder contains a file with the extension .pml; double-clicking it starts Choregraphe and opens the project.

I will explain the key points of each sample project.

ALPeoplePerception: Get a list of people

First, let's check how Pepper behaves when it recognizes people by using the PeoplePerception/VisiblePeopleList event to **get a list of the people currently visible to Pepper**.

The sample project is **visible-people-list**. Among the files obtained from GitHub, you can open it by double-clicking visible-people-list.pml in the visible-people-list folder. The project was created by the following procedure.

  1. Click the **Add Memory Event [+] button on the left side of the flow diagram**, enter **PeoplePerception [A]** in the filter, and check **PeoplePerception/VisiblePeopleList [B]**.

    add-memory-event.png

  2. Place the following boxes on the flow diagram

  3. Set the connections and parameters for each box as follows:

    visible-people-list.png

Run this project and check the messages that appear in the Log Viewer (http://qiita.com/Atelier-Akihabara/items/7a898f5e4d878b1ad889#-%E5%8F%82%E8%80%83%E3%82%A8%E3%83%A9%E3%83%BC%E3%81%AE%E8%A1%A8%E7%A4%BA%E3%81%A8%E3%83%AD%E3%82%B0%E3%83%93%E3%83%A5%E3%83%BC%E3%82%A2). As a person moves around Pepper, you should see logs similar to the following.

    [INFO ] behavior.box :onInput_message:27 _Behavior__lastUploadedChoregrapheBehaviorbehavior_1790002616__root__AgeDetection_5__Log_1: Get: []
    [INFO ] behavior.box :onInput_message:27 _Behavior__lastUploadedChoregrapheBehaviorbehavior_1790002616__root__AgeDetection_5__Log_2: Raised: [179318]

You can see that the PeoplePerception/VisiblePeopleList event provides a list of identifiers, such as 179318, for the people currently visible to Pepper.
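For illustration, such a list can be handled directly in a Python box connected to this memory event. The following is a minimal sketch (the input name `onPeopleList` is only an example) that simply logs each identifier.

    def onInput_onPeopleList(self, peopleList):
        # peopleList is the value delivered by the PeoplePerception/VisiblePeopleList
        # event, e.g. [] when nobody is visible or [179318] when one person is visible
        if not peopleList:
            self.logger.info("No people are currently visible")
            return
        for peopleId in peopleList:
            self.logger.info("Visible person ID: %d" % peopleId)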

Next, let's use `ALFaceCharacteristics` to get facial information for the people indicated by these identifiers.

ALFaceCharacteristics: Age Estimate

Next, let's get the value of PeoplePerception/Person/<ID>/AgeProperties, which is set by the analysis performed by `ALFaceCharacteristics`. Here, as an example, let's **estimate the age of a person Pepper finds and have Pepper say, "You are about XX years old."**

In the previous example, we got the IDs of all the people Pepper found; here we will use the Basic Awareness function instead. We'll use the Trackers > Basic Awareness box in the standard box library to track a person and get the ID of the tracked person from its HumanTracked output.

The sample project is **get-age**. Among the files obtained from GitHub, you can open it by double-clicking get-age.pml in the get-age folder. The project was created by the following procedure.

  1. Create a Get Age box as an empty Python box. This time, the input/output configuration is as follows. Refer to Python Box Concept for how to create one.

    get-age-box.png

  2. Double-click the Get Age box to open the script editor and write a Python script like the one below.

    class MyClass(GeneratedClass):
        def __init__(self):
            GeneratedClass.__init__(self)

        def onLoad(self):
            # Create proxies to the modules used by this box
            self.memory = ALProxy("ALMemory")
            self.faceChar = ALProxy("ALFaceCharacteristics")

        def onUnload(self):
            pass

        def onInput_onPeopleDetected(self, peopleId):
            # Ignore invalid IDs
            if peopleId < 0:
                return
            # Ask ALFaceCharacteristics to analyze this person's face
            r = self.faceChar.analyzeFaceCharacteristics(peopleId)
            if not r:
                # Analysis failed (for example, the face could not be seen clearly)
                self.onUnknown()
                return
            # Read the analysis result stored in ALMemory
            ageData = self.memory.getData("PeoplePerception/Person/%d/AgeProperties" % peopleId)
            self.logger.info("Age Properties: %d => %s" % (peopleId, ageData))
            if ageData and len(ageData) == 2:
                # ageData[0] is the estimated age, ageData[1] the confidence
                self.onAge(ageData[0])
            else:
                self.onUnknown()


This code calls the `analyzeFaceCharacteristics` method of `ALFaceCharacteristics` for the person ID given to the `onPeopleDetected` input. If the call succeeds, it gets the value of PeoplePerception/Person/<ID>/AgeProperties from `ALMemory` and fires the `onAge` output with the estimated age (the first element of that value) as its argument. (A small variation that also checks the confidence value is sketched after the procedure below.)

  3. Place the following boxes
  4. Change the Type of the Say Text box to Number, and customize the Say Text box (http://qiita.com/Atelier-Akihabara/items/8df3e81d286e2e15d9b6#%E8%A3%9C%E8%B6%B3say-text%E3%83%9C%E3%83%83%E3%82%AF%E3%82%B9%E3%81%AE%E3%82%AB%E3%82%B9%E3%82%BF%E3%83%9E%E3%82%A4%E3%82%BA) to change what it says:

    sentence += "You are about %d years old" % int(p)
    
  5. Connect the boxes placed in steps 1 and 3 as shown below.

    connect-get-age-box.png

  6. Change the contents of the Say box connected to the `onUnknown` output to something like "I don't know."
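As a small variation (not part of the sample project), the confidence value contained in AgeProperties could be checked before having Pepper speak. A sketch of the relevant part of the Get Age box script, assuming an arbitrarily chosen threshold:

    # Variation of the end of onInput_onPeopleDetected (sketch):
    # only fire onAge when the confidence reported in AgeProperties is high enough.
    age, confidence = ageData[0], ageData[1]
    if confidence >= 0.2:  # threshold chosen arbitrarily for illustration
        self.onAge(age)
    else:
        self.onUnknown()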

When you run this application, Basic Awareness is launched and Pepper turns its head in response to its surroundings. When it finds a person, the person's ID is passed to the Get Age box and Pepper says something like "You are about XX years old." Even when a person is found, if the face is difficult to recognize, the analysis fails and Pepper says "I don't know."

In this way, by combining the information obtained from the **ALPeoplePerception API** and the **ALFaceCharacteristics API**, you can build an application that estimates the age of a person recognized by PeoplePerception. Even when a function is not provided in the box library, many features can be realized by accessing the APIs through `ALProxy` and `ALMemory`.
