This is my article for Day 23 of the 3D Sensor Advent Calendar 2019.
My history with 3D sensors is only about three months, but since I have an Azure Kinect at hand, I decided to give this Advent Calendar my best shot; this is the fourth article. Writing one Qiita article a week, I've somehow made it this far. This time, my goal is to use the Azure Kinect Sensor SDK from Python, at a beginner level, and get it connected and working.
As for the environment, the following assumes Windows 10 with Azure Kinect Sensor SDK 1.2.0 installed. (Note: 1.2.0, not 1.3.0!)
First, install Python. There are many ways to do this, but for a beginner installing on Windows, there are a few distributions to choose from.
All of them are easy to install, but for now I'll go with the latest version of Miniconda (Miniconda3 4.7.12 at the time of writing).
Adding it to the PATH environment variable is not recommended, so check the installer setting below that registers it in the registry instead, and install.
Next, to use Azure Kinect from Python, the easiest route seemed to be a framework called Open3D.
Open3D is an open-source library that supports the development of software handling 3D data. It is written in C++ and provides a Python front end with carefully selected data structures and algorithms, so it can be used from either language.
...or so the description goes. In fact, the latest version of this library, 0.8.0, supports Azure Kinect. With this, we can get to work.
Open3D 0.8.0 officially supports Python 2.7, 3.5, and 3.6, but it also worked for me with the latest 3.7.4. (Open3D runs on Mac too, but unfortunately the Azure Kinect modules seem to be Windows and Ubuntu only, sorry!) In any case, the following command installs it in one shot.
```
(base) D:\>conda install -c open3d-admin open3d
```
--Run an Open3D sample
First, you can check whether Open3D is recognized correctly with the following command:

```
python -c "import open3d"
```

This prints nothing, but if there is no error, the installation succeeded.
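You can also print the version to confirm that it is 0.8.0, the release that added Azure Kinect support:

```
python -c "import open3d; print(open3d.__version__)"
```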
Next, let's run a sample module. If this works, the Open3D installation is complete. Download the samples from the following GitHub repository: https://github.com/intel-isl/Open3D
In what follows, I assume the download has been extracted directly under the D: drive. Running the sample also requires a library called matplotlib, which can be installed right away with the conda install command.
```
(base) D:\>cd D:\Open3D-master\examples\Python\Basic
(base) D:\Open3D-master\examples\Python\Basic>conda install matplotlib
(base) D:\Open3D-master\examples\Python\Basic>python rgbd_redwood.py
```
Hopefully you will see a screen like the one below.
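Incidentally, the heart of what rgbd_redwood.py does can be sketched in just a few lines of Open3D. The file paths below are placeholders rather than the sample's actual test data, and this is a simplified sketch, not the sample itself:

```python
import open3d as o3d
import matplotlib.pyplot as plt

# Read a color/depth pair (placeholder paths; the sample ships its own test images)
color = o3d.io.read_image("color.jpg")
depth = o3d.io.read_image("depth.png")

# Combine them into a single RGBD image
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(color, depth)

# Show both channels with matplotlib
plt.subplot(1, 2, 1)
plt.imshow(rgbd.color)
plt.subplot(1, 2, 2)
plt.imshow(rgbd.depth)
plt.show()

# Back-project to a point cloud using a default pinhole camera intrinsic
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
    rgbd,
    o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault))

# Flip the cloud, otherwise it appears upside down in the viewer
pcd.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
o3d.visualization.draw_geometries([pcd])
```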
Now for the main event: let's run the Azure Kinect module! Here we run the azure_kinect_viewer.py sample.
```
(base) D:\Open3D-master>python examples/Python/ReconstructionSystem/sensors/azure_kinect_viewer.py --align_depth_to_color
```
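For reference, inside this sample the sensor is opened through Open3D's Azure Kinect bindings. Paraphrased (not a verbatim excerpt), the setup looks roughly like this:

```python
import open3d as o3d

# Default sensor configuration; the sample can also load one from a JSON file
config = o3d.io.AzureKinectSensorConfig()

sensor = o3d.io.AzureKinectSensor(config)
if not sensor.connect(0):  # 0 = index of the first connected device
    raise RuntimeError('Failed to connect to sensor')

# Capture one frame; True aligns the depth image to the color camera.
# Returns None when no frame is ready yet.
rgbd = sensor.capture_frame(True)
```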
However, there are some caveats here.

- The program seems to reference the SDK 1.2.0 folder name directly, so it does not work with SDK 1.3.0. (Renaming the folder might get it working, but then environment variables and so on would also need to change.)
- In my environment, I got the following error:
File "examples/Python/ReconstructionSystem/sensors/azure_kinect_viewer.py", line 72, in <module>
v.run()
File "examples/Python/ReconstructionSystem/sensors/azure_kinect_viewer.py", line 36, in run
vis.update_geometry()
TypeError: update_geometry(): incompatible function arguments. The following argument types are supported:
1. (self: open3d.open3d.visualization.Visualizer, arg0: open3d.open3d.geometry.Geometry) -> bool
Invoked with: VisualizerWithKeyCallback with name viewer
Checking the code, this doesn't look like a part that should raise an error, so for now I can only guess that update_geometry() is being passed the wrong arguments on each pass through the update loop. As a stopgap, deleting vis.update_geometry() on line 36 means the screen no longer updates, but at least I could confirm that it runs.
azure_kinect_viewer.py excerpt
```python
vis_geometry_added = False
while not self.flag_exit:
    rgbd = self.sensor.capture_frame(self.align_depth_to_color)
    if rgbd is None:
        continue

    if not vis_geometry_added:
        vis.add_geometry(rgbd)
        vis_geometry_added = True

    vis.update_geometry()  # <== delete this line
    vis.poll_events()
    vis.update_renderer()
```
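Incidentally, judging from the supported signature shown in the error message, another possible fix (which I haven't verified) would be to pass the geometry explicitly instead of deleting the line:

```python
vis.update_geometry(rgbd)  # pass the RGBD geometry, matching the signature in the error
```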
With that, although it's only a still image, I was able to confirm Azure Kinect in operation.
Actually there is a color image on the left as well, but since my room was a mess, I'm showing only the depth... Incidentally, the picture above is the depth image with camera correction already applied. As I introduced in Part 3, Azure Kinect has a function that applies correction using its internal camera parameters, so you receive the image in an already-corrected state. Before correction, you get a picture like this.
Since images cannot be aligned correctly in 3D unless they are in this corrected state, it is a great help that Azure Kinect makes it so easy.
...Incidentally, this Open3D really is easy. In particular, the drawing setup is only 3 lines (4 including comments), so the script as a whole is very compact. The part below really is this simple.
```python
vis = o3d.visualization.VisualizerWithKeyCallback()
vis.register_key_callback(glfw_key_escape, self.escape_callback)
vis.create_window('viewer', 1920, 540)
print("Sensor initialized. Press [ESC] to exit.")
```
And with that, I managed to get it displaying from Python...
- This time I expected it to work not only from Python but also on Mac, but that didn't pan out.
- Open3D looks like a promising library. However, probably because it is growing so quickly, there are still a few simple defects (the documentation even describes features that aren't implemented yet). I have high hopes for it, and I expect it will gradually improve as more people use it.
- Python does seem to make things easy to handle, but Azure Kinect's drawing is heavy and asynchronous handling on the sensor side looks difficult, so I'm vaguely worried whether it can all be controlled from Python. Still, connecting from Python leads naturally to machine learning and the like, so it looks promising.
And so, this time too, it ended at a rudimentary starting point; across all four articles I never escaped being a beginner among beginners. Still, I intend to keep up activities that deepen my learning about 3D sensors. I would like to thank Yurufuwa UNA-san for providing this venue, as well as everyone who read along and encouraged me. Thank you very much.
Tomorrow, the 24th, Christmas Eve, brings pisa-kun's "Let's summarize the RealSense L515 pre-order commemorative LiDAR". The RealSense L515 looks good, doesn't it? Once it goes on sale in Japan, I suspect I'll end up owning one. Next year I'll keep doing my best with xR and 3D sensor related technologies.