This article has been confirmed to work in the following environments.
| | Version |
|---|---|
| NAOqi | 2.5.5.5 |
| Choregraphe | 2.5.5.5 |
Pepper has a mechanism that makes it easy to recognize QR codes. When a QR code is recognized, the `BarcodeReader/BarcodeDetected` event is fired, and the recognition result can be used.
Select the memory event and connect it to the box that should receive it.
QR code recognition is easy to implement, but users cannot see in real time how the QR code they are holding up looks to Pepper's eyes, so they worry about whether they are holding it up correctly. The app becomes much easier to use if you display the camera image on the tablet so users can check in real time how the image looks from Pepper's side. (This technique is often used in robot apps created by SoftBank Robotics.)

However, unlike QR code recognition, a camera preview is not easy to achieve. Ryutaro Murayama, Satoshi Tanizawa, and Kazuhiko Nishimura of SoftBank Robotics have extracted the relevant parts from "Pepper Programming: Basic Operations to App Planning and Direction" and created a simplified sample, so I would like to explain how it works.
Please download the sample program from GitHub. https://github.com/Atelier-Akihabara/pepper-qrpreview-example
When you run the sample, you will see something like the following.
When the QR code is recognized, the app exits.
The program has the following flow.
1. Pepper side:
  a. Use [Show App] to display HTML on the tablet.
  b. Access Pepper's camera.
  c. Get the `subscriberId` needed to receive the camera data.
  d. Pass the obtained `subscriberId` to the display side via ALMemory.
2. Display side:
  a. Create a handler to receive the `subscriberId` sent from the Pepper side.
  b. Receive the camera image data using the `subscriberId` received by the handler.
  c. Draw the received camera image data on a Canvas.
  d. Repeat b. and c. as many times as necessary.
We will follow the process from the Pepper side. On the Pepper side, the point is to pass the subscriberId required for image acquisition to the display side.
1-a. To use the display, use [Show App] to display the following HTML (index.html).
index.html
```html
<!DOCTYPE html>
<html lang="ja">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="initial-scale = 1, minimum-scale = 1, maximum-scale = 1" />
    <link rel="stylesheet" type="text/css" href="main.css">
    <script src="/libs/qimessaging/2/qimessaging.js"></script>
    <script src="scripts/jquery-2.2.2.min.js"></script>
    <script src="scripts/adjust23.js"></script>
    <script src="scripts/main.js"></script>
</head>
<body onLoad="startSession();">
    <div class="container">
        <div id="camera" class="panel inactive">
            <div class="canvas-container">
                <div class="canvas-background">
                    <canvas id="canvas" width="80" height="60"></canvas>
                </div>
            </div>
        </div>
    </div>
    <div class="abort-container inactive" id="common">
        <div class="abort-button" id="abort">Suspension</div>
    </div>
</body>
</html>
```
The thing to pay attention to here is `<canvas id="canvas" width="80" height="60"></canvas>`. The camera image is drawn in this area. `<script src="scripts/main.js"></script>` loads the subsequent drawing process, and `startSession()` is its entry point. The point is that you need a box (the canvas) in which to display the image.
1-b, c. See [root / QR Preview / Subscribe Camera].
```python
self.subscriberId = self.videoDevice.subscribeCamera("QRPreview",
    0,  # AL::kTopCamera
    7,  # AL::kQQQVGA
    0,  # AL::kYuvColorSpace
    fps)
```
`ALVideoDevice::subscribeCamera` is called to get the `subscriberId`. The arguments are, in order:
| Name | Contents |
|---|---|
| Name | Subscriber name |
| CameraIndex | Index of the camera (0 is the forehead [top] camera) |
| Resolution | Constant indicating the camera resolution (7 corresponds to AL::kQQQVGA, which is 80x60 pixels) |
| ColorSpace | Image format obtained from the camera (0 corresponds to AL::kYuvColorSpace, the luminance [Y] signal only of the YUV color space) |
| Fps | Requested frame rate |
Refer to the following for the constants set in various parameters. http://doc.aldebaran.com/2-5/family/pepper_technical/video_pep.html
The return value of `subscribeCamera` is stored in `self.subscriberId`. You can now access the camera data.
Supplement: Constants starting with `AL::k` are names on the C++ side, so they need to be given as concrete numbers on the Python side.
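For reference, the numeric values the sample passes can be collected in one place like this. This is a minimal sketch: the `preview_subscription_args` helper is illustrative and not part of the sample, but the constant values match the call shown above.

```python
# Numeric values for the NAOqi video constants used by the sample.
# The AL::k* names exist only on the C++ side, so Python code must
# pass the raw numbers.
TOP_CAMERA = 0         # AL::kTopCamera (the forehead camera)
K_QQQVGA = 7           # AL::kQQQVGA, 80x60 pixels
K_YUV_COLOR_SPACE = 0  # AL::kYuvColorSpace, luminance (Y) only

def preview_subscription_args(fps=10):
    """Arguments the sample passes to ALVideoDevice.subscribeCamera,
    in order: name, camera index, resolution, color space, fps."""
    return ("QRPreview", TOP_CAMERA, K_QQQVGA, K_YUV_COLOR_SPACE, fps)
```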
1-d. Please continue to see [root / QR Preview / Subscribe Camera].
```python
self.memory.raiseEvent("ProgrammingPepperSample/PreviewMode", self.subscriberId)
```
This code passes the `subscriberId` obtained earlier to the display side via `ALMemory`. The display side uses this `subscriberId` to fetch data from the camera for drawing.
Now for the display side. Since drawing cannot be performed from the Pepper side, the pixel processing has to be done in JavaScript on the display side.
2-a. See main.js.
main.js
```javascript
ALMemory.subscriber("ProgrammingPepperSample/PreviewMode").then(function(subscriber) {
    subscriber.signal.connect(function(subscriberId) {
        if (subscriberId.length > 0) {
            console.log("Subscribing...: " + subscriberId)
            previewRunning = true
            activatePanel("camera")
            activatePanel("common")
            getImage(ALVideoDevice, subscriberId)
        } else {
            previewRunning = false
            inactivatePanel("camera")
            inactivatePanel("common")
        }
    })
});
```
The point here is the call to `getImage(ALVideoDevice, subscriberId)`. When the `"ProgrammingPepperSample/PreviewMode"` event arrives, `getImage(ALVideoDevice, subscriberId)` is called.
The other functions called here simply switch the visibility of elements in the HTML.
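The event flow itself (raiseEvent on the Pepper side, a connected handler on the display side) can be illustrated with a small Python sketch. The `ToyMemory` class is a stand-in for ALMemory/qimessaging, not a real API; only the start/stop decision mirrors the sample.

```python
# A toy stand-in for the ALMemory event flow: the Pepper side "raises"
# an event carrying the subscriberId, and the display side's connected
# handler decides whether to start or stop the preview.
class ToyMemory:
    def __init__(self):
        self.handlers = {}

    def connect(self, key, handler):
        self.handlers.setdefault(key, []).append(handler)

    def raise_event(self, key, value):
        for handler in self.handlers.get(key, []):
            handler(value)

state = {"previewRunning": False}

def on_preview_mode(subscriber_id):
    # Mirrors main.js: a non-empty id starts the preview, empty stops it.
    state["previewRunning"] = len(subscriber_id) > 0

memory = ToyMemory()
memory.connect("ProgrammingPepperSample/PreviewMode", on_preview_mode)
memory.raise_event("ProgrammingPepperSample/PreviewMode", "QRPreview_0")
```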
2-b, c. Finally, the drawing process. It is handled by the function `getImage()`.
main.js
```javascript
function getImage(ALVideoDevice, subscriberId) {
    ALVideoDevice.getImageRemote(subscriberId).then(function (image) {
        if (image) {
            var imageWidth = image[0];
            var imageHeight = image[1];
            var imageBuf = image[6];
            console.log("Get image: " + imageWidth + ", " + imageHeight);
            if (!context) {
                context = document.getElementById("canvas").getContext("2d");
            }
            if (!imgData || imageWidth != imgData.width || imageHeight != imgData.height) {
                imgData = context.createImageData(imageWidth, imageHeight);
            }
            var data = imgData.data;
            for (var i = 0, len = imageHeight * imageWidth; i < len; i++) {
                var v = imageBuf[i];
                data[i * 4 + 0] = v;
                data[i * 4 + 1] = v;
                data[i * 4 + 2] = v;
                data[i * 4 + 3] = 255;
            }
            context.putImageData(imgData, 0, 0);
        }
        if (previewRunning) {
            setTimeout(function() { getImage(ALVideoDevice, subscriberId) }, 100)
        }
    })
}
```
The image data is received from Pepper with `ALVideoDevice.getImageRemote`. The image data (`image`) is passed in the following structure.
| | Contents |
|---|---|
| [0] | Horizontal size (width) |
| [1] | Vertical size (height) |
| [2] | Number of layers |
| [3] | Color space |
| [4] | Timestamp (seconds) |
| [5] | Timestamp (microseconds) |
| [6] | Binary array of image data |
| [7] | Camera ID |
| [8] | Camera angle, left edge (radians) |
| [9] | Camera angle, top edge (radians) |
| [10] | Camera angle, right edge (radians) |
| [11] | Camera angle, bottom edge (radians) |
The required information is the information in [6], that is, the image data.
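In Python terms, picking the needed fields out of this structure looks like the following sketch. The `unpack_image` helper and the `fake_image` data are illustrative (here mimicking an 80x60 Y-only frame), not part of the sample.

```python
# Unpack the list returned by getImageRemote(), using the indices
# from the table above.
def unpack_image(image):
    width, height = image[0], image[1]
    buf = image[6]  # the raw pixel data that actually gets drawn
    return width, height, buf

# A stand-in for one 80x60 luminance-only frame.
fake_image = [80, 60, 1, 0, 0, 0, bytes(80 * 60),
              0, 0.0, 0.0, 0.0, 0.0]
width, height, buf = unpack_image(fake_image)
```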
Here, the initialization (creating a blank image) for drawing on the canvas is performed.
```javascript
if (!context) {
    context = document.getElementById("canvas").getContext("2d");
}
if (!imgData || imageWidth != imgData.width || imageHeight != imgData.height) {
    imgData = context.createImageData(imageWidth, imageHeight);
}
```
Then draw the data through putImageData.
```javascript
context.putImageData(imgData, x, y);
```
`x` and `y` are the coordinates at which drawing starts. Since the start point here is the top-left corner, specify 0 for both. In `imgData`, the data to be drawn is stored in RGBA order: R is red, G is green, B is blue, and A is the alpha channel. The camera data obtained from Pepper is luminance information, so writing the same value into each of R, G, and B produces a monochrome image. A is the alpha channel, so set it to 255 to make the pixels fully opaque.
```javascript
for (var i = 0, len = imageHeight * imageWidth; i < len; i++) {
    var v = imageBuf[i];
    data[i * 4 + 0] = v;
    data[i * 4 + 1] = v;
    data[i * 4 + 2] = v;
    data[i * 4 + 3] = 255;
}
```
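The same luminance-to-RGBA expansion can be sketched in Python. The `luminance_to_rgba` helper is illustrative, not part of the sample; the per-pixel logic matches the JavaScript loop.

```python
def luminance_to_rgba(buf):
    """Expand a Y-only (luminance) buffer into RGBA bytes, pixel by
    pixel, like the JavaScript loop: R = G = B = Y, A = 255."""
    data = bytearray(len(buf) * 4)
    for i, v in enumerate(buf):
        data[i * 4 + 0] = v    # R
        data[i * 4 + 1] = v    # G
        data[i * 4 + 2] = v    # B
        data[i * 4 + 3] = 255  # A: fully opaque
    return data
```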
With the above, a still image can now be drawn.

2-d. The image needs to be updated continuously.
```javascript
setTimeout(function() { getImage(ALVideoDevice, subscriberId) }, 100)
```
`getImage()` schedules itself with `setTimeout`, so frames are fetched and drawn repeatedly, at most roughly every 100 milliseconds.
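The overall fetch-draw-wait cycle can be sketched in Python as follows. The `preview_loop` function and its parameters are illustrative, not part of the sample; `max_frames` exists only so the sketch can terminate.

```python
import time

def preview_loop(get_frame, draw, is_running, interval=0.1,
                 max_frames=None):
    """Fetch-draw-wait loop, analogous to the setTimeout chain in
    main.js: each iteration requests one frame, draws it, then waits
    'interval' seconds before the next request."""
    drawn = 0
    while is_running():
        draw(get_frame())
        drawn += 1
        if max_frames is not None and drawn >= max_frames:
            break
        time.sleep(interval)
    return drawn
```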
Have you grasped the flow of displaying the camera preview on Pepper's display? The key is to obtain the `subscriberId` on the Pepper side and perform the drawing process on the display side.
The sample uses the very low resolution QQQVGA (80x60), but you can also get images at higher resolutions. And although the sample is monochrome, you can get a color image by changing the format and the drawing routine.

However, the drawing is done by JavaScript in the HTML page. At high resolutions, or with color processing added, it may not be able to keep up. Try to find settings that run comfortably in your app.
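As a rough guide, the arithmetic behind that advice looks like this. The `frame_bytes` helper is illustrative; it only counts raw pixel bytes per frame, ignoring any transport overhead.

```python
# Rough per-frame payload sizes, to show why resolution and color
# format matter for the JavaScript drawing loop.
def frame_bytes(width, height, channels):
    return width * height * channels

qqqvga_mono = frame_bytes(80, 60, 1)   # the sample's setting
qvga_rgb = frame_bytes(320, 240, 3)    # QVGA in color
```

Even QVGA in color is 48 times more data per frame than the sample's setting, and every byte passes through the per-pixel JavaScript loop.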