I wanted a platform for studying computer vision, so I built a robot like this. The image links to a YouTube video; click it to see the robot in action.
Specifications
Left and right drive motors let the robot move forward/backward and turn left/right
Robot arm driven by 5 servos
A small camera, 8 mm in diameter, mounted on the tip of the robot arm
Primesense PS1080 sensor (RGB-D camera) that pans on 2 servos
Hardware is controlled by an Arduino MEGA 2560
A small on-board Intel Atom PC (Windows 10) runs an HTTP server
The robot is controlled remotely over HTTP
The robot is controlled from Python over Wi-Fi: the idea is to acquire camera images, RGB-D images, and so on, analyze them, and move the robot accordingly.
As the base of the robot, I disassembled and modified a Roomba-like robot vacuum cleaner: a low-priced model sold at Nitori. At that price, I felt no guilt about taking it apart and remodeling it. On top of the base sit a robot arm, an Arduino, an RGB-D camera, and a small PC with an Intel Atom processor that controls the hardware. For the Arduino I use a MEGA 2560 because I want to control many servos; the UNO you usually see has too few pins, which would be painful with future expansion in mind. The small PC is the DG-M01IW: paperback-book sized, with a built-in battery that lasts 13 hours. It is a perfect match for this robot, so I bought two of them.
The software on the robot is roughly divided into firmware that runs on the Arduino and an HTTP server that runs on the PC (Windows 10).
Arduino
The Arduino also needs power; it runs off a 7.2 V battery of the kind sold for radio-controlled models.
The servos driving the robot arm and the RGB-D camera mount are PWM-controlled by the Arduino. For the drive motors, I attached an external I2C motor controller and built a circuit that drives the vacuum cleaner's original motors over I2C. This lets the robot move forward and backward, turn left and right, and rotate in place.
Serial communication with the PC is done over UART. On the Arduino side, I implemented the following protocol for control and for reading back information.
By issuing these commands from the PC over the serial port, the servos can be controlled freely and status information retrieved. For example, sending `a0+90` moves servo #0 to an angle of +90°, and `m+255+255` spins both drive motors forward. Details such as servo speed control are, of course, handled on the Arduino side.
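As a concrete illustration, the two example commands above can be generated by a small helper like this. This is only a sketch: the command grammar beyond `a0+90` and `m+255+255`, as well as the port name and baud rate in the comment, are my assumptions, not the actual firmware specification.

```python
# Thin client-side helpers for the Arduino serial protocol sketched above.
# Only the two commands shown in the text are assumed:
#   "a<servo><angle>"  e.g. a0+90   (move a servo)
#   "m<left><right>"   e.g. m+255+255 (drive the two motors)

def servo_cmd(servo_id: int, angle: int) -> str:
    """Build a servo command string, e.g. servo_cmd(0, 90) -> 'a0+90'."""
    return f"a{servo_id}{angle:+d}"

def motor_cmd(left: int, right: int) -> str:
    """Build a drive-motor command string, e.g. motor_cmd(255, 255) -> 'm+255+255'."""
    return f"m{left:+d}{right:+d}"

if __name__ == "__main__":
    # Actually sending would look roughly like this (needs pyserial; the
    # port name "COM3" and baud rate are guesses, so it is left commented out):
    # import serial
    # with serial.Serial("COM3", 115200, timeout=1) as port:
    #     port.write((servo_cmd(0, 90) + "\n").encode("ascii"))
    print(servo_cmd(0, 90))     # a0+90
    print(motor_cmd(255, 255))  # m+255+255
```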
PC (DG-M01IW)
The PC has a built-in battery, so it runs without external power. The OS is the pre-installed Windows 10 Home, used as-is. The Arduino, the RGB-D camera, and the camera at the tip of the robot arm are connected over USB. Bluetooth and Wi-Fi are of course built in, so this PC alone makes network control over Wi-Fi possible.
Communication with the Arduino uses UART serial. On the PC side, the main tasks are the following.
Depending on the URL received, the HTTP server returns a camera image or controls the servos via the Arduino. For example, a GET request to `http://.../sensor/color` returns an `image/jpeg`, and a GET request to `http://.../arm0?cmd=...` moves the robot arm according to the `cmd` parameter.
In short, the PC's job is to interpret each HTTP request, pass the corresponding command straight through to the Arduino, and return image/depth data. Since the PC has to be small enough to mount on the robot and runs on battery, it avoids CPU-heavy computation as far as possible. A Raspberry Pi could probably handle this job too, but USB bandwidth unexpectedly becomes a bottleneck; RGB-D cameras in particular are heavy going on a Raspberry Pi.
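To make that dispatch concrete, here is a minimal sketch of the URL-to-action mapping, written in Python purely for illustration (the actual server is in C++). The route names follow the text; mapping `/arm0` to a command forwarded over serial is my assumption.

```python
from urllib.parse import urlparse, parse_qs

def route_to_action(url: str):
    """Map a request URL to (content type, command to forward to the Arduino)."""
    parts = urlparse(url)
    if parts.path == "/sensor/color":
        return ("image/jpeg", None)      # respond with the latest camera frame
    if parts.path == "/arm0":
        # Forward the cmd query parameter to the Arduino over serial (assumption).
        cmd = parse_qs(parts.query).get("cmd", [""])[0]
        return ("text/plain", cmd)
    return ("404", None)                 # unknown route
```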
After a lot of trial and error, I implemented the HTTP server in C++ (these days you no longer have to write `socket()` or `accept()` calls by hand). In fact, if the OS were Linux-based, I think a C/C++ implementation would have been even easier and quicker, especially around HTTP. However, installing Linux on a mini PC with an Intel Atom processor turned out to be troublesome (*), so I gave up and developed on Windows 10. (Thanks to that, though, I could use Visual Studio for debugging and such, which may have made things easier in the end.)
*: Simply because I couldn't find a Linux distribution that supports 32-bit EFI booting. I didn't want to waste time fighting that, so I let it go. I really did want to install Ubuntu.
At this point the robot can be controlled completely wirelessly over the Wi-Fi network, so the side that analyzes images and drives the robot is not tied to any particular OS or language.
My main machine is a MacBook Air (Mid 2012), so I definitely want to work from the Mac. The plan is to build on Python with numpy, OpenCV 2.x (+ TensorFlow), and so on; for HTTP, Requests is available.
Currently I am building logic that searches for a red object and chases it, following the method in the Qiita article "Let's recognize red objects with python". Eventually I plan to take this to the point where the robot arm can grasp objects autonomously.
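A rough sketch of that red-chasing idea follows. This is not the author's actual code: the referenced Qiita article uses HSV thresholding via OpenCV, whereas this example uses a crude RGB threshold and made-up motor command values so that it only depends on numpy.

```python
import numpy as np

def find_red_centroid(img: np.ndarray):
    """img: HxWx3 uint8 RGB image. Returns (x, y) centroid of red pixels, or None."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    # Crude "red" threshold (assumption; the article thresholds in HSV instead).
    mask = (r > 150) & (g < 100) & (b < 100)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def steer_command(centroid, width: int) -> str:
    """Map the centroid's horizontal offset to a hypothetical motor command."""
    if centroid is None:
        return "m+0+0"        # nothing red in view: stop
    offset = centroid[0] - width / 2
    if offset < -width * 0.1:
        return "m-128+128"    # red is to the left: rotate left
    if offset > width * 0.1:
        return "m+128-128"    # red is to the right: rotate right
    return "m+255+255"        # roughly centered: drive forward
```

In the real loop, the frame would come from a GET to the robot's `/sensor/color` endpoint and the resulting command would be POSTed back over HTTP.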
I would also like to make full use of clustering images with the depth data from the RGB-D camera, and of point cloud processing with the Point Cloud Library. I have implemented this kind of logic in C++/Objective-C before, but every day I am struck by the destructive power of python + numpy. I wish I had switched to python + numpy sooner.
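For reference, the depth-to-point-cloud step that such clustering builds on can be sketched with numpy alone, using a standard pinhole camera model. The focal length and principal point below are placeholder values, not the PS1080's calibrated intrinsics.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx=525.0, fy=525.0, cx=None, cy=None):
    """Back-project a depth image (HxW, meters, 0 = no reading) to an Nx3 point cloud."""
    h, w = depth.shape
    cx = w / 2.0 if cx is None else cx  # principal point defaults to image center
    cy = h / 2.0 if cy is None else cy
    v, u = np.indices((h, w))           # per-pixel row (v) and column (u) indices
    z = depth.astype(float)
    valid = z > 0                       # drop pixels with no depth reading
    x = (u - cx) * z / fx               # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)
```

The resulting Nx3 array is the form that clustering (or handing off to the Point Cloud Library) would start from.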
Going forward, I plan to port the logic I have implemented so far to Python and use it to deepen the robot's control. I would like to write up those results in future articles.