This article is the second of two parts. If you haven't built the ArUco library yet, see Use OpenCV_Contrib (ArUco) in Java! (Part 1: Build).
Most of the source code used here is collected in OpenCV_ArucoTest [GitHub], so please refer to it as needed.
This article describes the program when using the ArUco library in Java.
I meant to write this sooner, but it slipped. (I started writing the night before New Year's Eve...) For what it's worth, this is registered in the OpenCV Advent Calendar 2018. (And yet it ended up being posted in early 2019...)
Follow the usual procedure for importing an existing OpenCV installation, using the jar file that was built with Contrib included.
To make sure the ArUco library is properly built and set up, let's start by generating a marker image.
The function to use is:
・Aruco.drawMarker(dictionary, markerID, sidePixels, markerImage)
dictionary: Determines the type of marker. Dictionaries differ in grid size, resolution, and the number of marker IDs they contain.
markerID: The ID of the marker within the dictionary; each ID has a different pattern.
sidePixels: The side length of the output image in pixels, i.e. the resolution.
markerImage: The output Mat that receives the generated marker image.
Source code below (Details: createMarker [GitHub])
public static void createMarker() {
Dictionary dictionary = Aruco.getPredefinedDictionary(Aruco.DICT_4X4_50);
final int markerID = 0;
final int sidePixels = 200;
Mat markerImage = new Mat();
Aruco.drawMarker(dictionary, markerID, sidePixels, markerImage);
Imgcodecs.imwrite("F:\\users\\smk7758\\Desktop\\marker_2018-12-01.png", markerImage);
}
You should get something like this.
Now for the main part.
The following two functions are used
・ Aruco.detectMarkers (inputImage, dictionary, corners, markerIds, parameters);
・ Aruco.drawDetectedMarkers (inputImage, corners, markerIds);
As the names suggest, detectMarkers detects the (possibly multiple) markers in the image, and drawDetectedMarkers then uses corners to trace the four corners of each detected marker.
corners: Receives the on-screen coordinates of each marker's corners. It is a List<Mat> because one Mat (a 1x4, two-channel set of corner points) is produced per detected marker.
(Other arguments omitted)
Source code below (Details: detectMarker [GitHub])
public static void detectMarker() {
Dictionary dictionary = Aruco.getPredefinedDictionary(Aruco.DICT_4X4_50);
Mat inputImage = Imgcodecs.imread("F:\\users\\smk7758\\Desktop\\marker_2018-12-01_test.png");
List<Mat> corners = new ArrayList<>();
Mat markerIds = new Mat();
DetectorParameters parameters = DetectorParameters.create();
Aruco.detectMarkers(inputImage, dictionary, corners, markerIds, parameters);
Aruco.drawDetectedMarkers(inputImage, corners, markerIds);
Imgcodecs.imwrite("F:\\users\\smk7758\\Desktop\\marker_2018-12-01_detected.png", inputImage);
}
Running it on the image above (which I had transformed in GIMP beforehand), the result should look like this.
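A side note on reading the results: markerIds comes back from detectMarkers as an Nx1 single-channel 32-bit integer Mat, so you can copy it into an int[] with markerIds.get(0, 0, buf) and then handle it in plain Java. The helper below shows that last step without needing the OpenCV native library loaded (MarkerIdList is just a name made up for this sketch):

```java
import java.util.ArrayList;
import java.util.List;

final class MarkerIdList {
    // Turn the int[] buffer copied out of the markerIds Mat into a List.
    static List<Integer> toList(int[] ids) {
        List<Integer> out = new ArrayList<>(ids.length);
        for (int id : ids) {
            out.add(id);
        }
        return out;
    }
}
```

With this you can, for example, check `list.contains(0)` to see whether marker ID 0 is currently in view.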
To be honest, everything up to here was mostly self-satisfaction. Next, I combined JavaFX and VideoCapture (OpenCV) to perform real-time recognition.
The difference is that it runs on JavaFX: using ScheduledService, the value returned from the task is of type Image. ScheduledService is a class for running background work on its own thread in JavaFX, and it can automatically restart its Task when it finishes; see the official Javadoc for details. The reason for returning an Image is that the node displaying the image, defined on the Controller class side, takes an Image as its argument. For converting a Mat to an Image, I used a method I found after a quick search.
Source code below (Details: [detectMarkerByCamera - MarkerDetectorService.java [GitHub]](https://github.com/smk7758/OpenCV_ArucoTest/blob/master/test-2018-12-01_OpenCV_Contrib_Aruco_3_detectMarkerByCamera/src/com/github/smke0/MarkerDetectorService.java))
@Override
protected Task<Image> createTask() {
return new Task<Image>() {
@Override
protected Image call() throws Exception {
if (!vc.isOpened()) {
System.err.println("VC is not opened.");
this.cancel();
return null;
}
Mat inputImage = new Mat();
if (!vc.read(inputImage) || inputImage == null) {
System.err.println("Cannot load camera image.");
this.cancel();
return null;
}
List<Mat> corners = new ArrayList<>();
Mat markerIds = new Mat();
// DetectorParameters parameters = DetectorParameters.create();
Aruco.detectMarkers(inputImage, dictionary, corners, markerIds);
Aruco.drawDetectedMarkers(inputImage, corners, markerIds);
return convertMatToImage(inputImage);
}
};
}
private Image convertMatToImage(Mat inputImage) {
MatOfByte byte_mat = new MatOfByte();
Imgcodecs.imencode(".bmp", inputImage, byte_mat);
return new Image(new ByteArrayInputStream(byte_mat.toArray()));
}
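Encoding to BMP and re-decoding works, but it round-trips every frame through an image codec. Another common approach is to copy the pixels directly into a JavaFX WritableImage via PixelWriter. The core of that is just repacking OpenCV's B, G, R byte order into ARGB ints; here is that repacking step alone, as a pure-Java sketch (the class and method names are made up for illustration):

```java
final class BgrToArgb {
    // Pack a BGR byte buffer (as copied out of a Mat with Mat.get) into
    // ARGB ints suitable for PixelFormat.getIntArgbInstance().
    static int[] convert(byte[] bgr, int width, int height) {
        int[] argb = new int[width * height];
        for (int i = 0; i < argb.length; i++) {
            int b = bgr[i * 3] & 0xFF;     // OpenCV stores channels as B, G, R
            int g = bgr[i * 3 + 1] & 0xFF;
            int r = bgr[i * 3 + 2] & 0xFF;
            argb[i] = 0xFF000000 | (r << 16) | (g << 8) | b; // opaque alpha
        }
        return argb;
    }
}
```

The resulting int[] can then be written with `pixelWriter.setPixels(0, 0, width, height, PixelFormat.getIntArgbInstance(), argb, 0, width)`, skipping the codec entirely.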
It should look like this.
Also, in [detectMarker(Center)CoordinatesByCamera [GitHub]](https://github.com/smk7758/OpenCV_ArucoTest/blob/master/test-2018-12-01_OpenCV_Contrib_Aruco_3_detectMarkerCoordinatesByCamera/src/com/github/smk7758/), functions on the OpenCV side are used to find the coordinates of the center (centroid) from the marker's four corner points and to draw that point.
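The center calculation itself is simple: each entry of corners is a 1x4 two-channel float Mat, which `get(0, 0, buf)` flattens into {x0, y0, x1, y1, x2, y2, x3, y3}, and the center is just the mean of the four points. A pure-Java sketch of that step (MarkerCenter is a name invented here):

```java
final class MarkerCenter {
    // xy is the flattened corner buffer {x0, y0, ..., x3, y3} copied out of
    // one marker's corners Mat. Returns {centerX, centerY}.
    static double[] centroid(float[] xy) {
        double sx = 0, sy = 0;
        for (int i = 0; i < 4; i++) {
            sx += xy[i * 2];
            sy += xy[i * 2 + 1];
        }
        return new double[]{sx / 4.0, sy / 4.0};
    }
}
```

Drawing the point is then one call to Imgproc.circle at the returned coordinates.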
Pose estimation is what I most wanted to do.
I'd like to get right to it, but there is actually something to do before estimating pose: camera calibration. The functions below cannot run without the Mat obtained there, so please refer to my article "I did OpenCV camera calibration with Java" first. (Actually, it took a couple of weeks to get from the previous chapter to this point, which is the cause of the delay.)
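For intuition about what calibration gives you: cameraMatrix encodes the pinhole projection u = fx·X/Z + cx, v = fy·Y/Z + cy, and distortionCoefficients correct lens distortion on top of that. The sketch below shows the undistorted projection in plain Java; the Pinhole class is invented for this example, and any fx/fy/cx/cy numbers you see with it are placeholders, not real calibration results:

```java
final class Pinhole {
    // Project a 3D point {X, Y, Z} in camera coordinates to pixel
    // coordinates {u, v} using the ideal (undistorted) pinhole model.
    static double[] project(double[] xyz,
                            double fx, double fy, double cx, double cy) {
        double u = fx * xyz[0] / xyz[2] + cx;
        double v = fy * xyz[1] / xyz[2] + cy;
        return new double[]{u, v};
    }
}
```

A point straight ahead of the camera (X = Y = 0) always lands on the principal point (cx, cy), which is a quick sanity check for your calibration values.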
Let's start with the functions again.
・Aruco.estimatePoseSingleMarkers(corners, 0.05f, cameraMatrix, distortionCoefficients, rotationVectors, translationVectors);
・Aruco.drawAxis(inputImage, cameraMatrix, distortionCoefficients, rotationVectors, translationVectors, 0.1f);
As the name suggests, the first function estimates each marker's pose and returns rotation vectors and translation vectors, one row per detected marker (the rotation is in Rodrigues vector form, not an actual 3x3 matrix). The second function draws the coordinate axes at the marker. The 0.05f is the marker's side length, and 0.1f is the drawn axis length, both in your chosen world unit (e.g. meters).
Source code below (Details: detectMarkerPoseByCamera [GitHub])
List<Mat> corners = new ArrayList<>();
Mat markerIds = new Mat();
// DetectorParameters parameters = DetectorParameters.create();
Aruco.detectMarkers(inputImage, dictionary, corners, markerIds);
Aruco.drawDetectedMarkers(inputImage, corners, markerIds);
Mat rotationVectors = new Mat(), translationVectors = new Mat(); // output: one row per marker
Aruco.estimatePoseSingleMarkers(corners, 0.05f, cameraMatrix, distortionCoefficients, rotationVectors, translationVectors);
for (int i = 0; i < markerIds.size().height; i++) {
// Draw the axes of the i-th marker using its own rotation/translation row.
Aruco.drawAxis(inputImage, cameraMatrix, distortionCoefficients, rotationVectors.row(i), translationVectors.row(i), 0.1f);
}
It should look like this.
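One thing you get almost for free from the pose estimate: each row of translationVectors is the marker's {x, y, z} position in the camera coordinate system, in the same unit as the marker side length passed to estimatePoseSingleMarkers (0.05f above, i.e. meters if your marker is 5 cm on a side). So the camera-to-marker distance is just the Euclidean norm of that row. A pure-Java sketch (MarkerDistance is a made-up helper name):

```java
final class MarkerDistance {
    // tvec is one row of translationVectors copied into a double[3]:
    // the marker's position {x, y, z} relative to the camera.
    static double distance(double[] tvec) {
        return Math.sqrt(tvec[0] * tvec[0]
                + tvec[1] * tvec[1]
                + tvec[2] * tvec[2]);
    }
}
```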
It took some time, but it is working for now. Camera calibration was especially hard to understand... I would like to keep working on the Java version of OpenCV.
(I enjoyed it at the end of 2018, although the article itself was written at the beginning of 2019.)
Also, since this article was written by a beginner, if you find any mistakes or problems, please let me know in the comments or on Twitter.
The official documentation was the easiest to understand.
・ArUco marker detection (aruco module) - OpenCV official
・Detection of ArUco Markers - OpenCV official
・Pose estimation with OpenCV Aruco Part 1 - Machine learning memorandum
・Pose estimation with OpenCV Aruco Part 2 - Machine learning memorandum
・OpenCV aruco marker - Personal notes related to the program
・[Try using the aruco module - atinfinity/lab (GitHub)](https://github.com/atinfinity/lab/wiki/aruco%E3%83%A2%E3%82%B8%E3%83%A5%E3%83%BC%E3%83%AB%E3%82%92%E4%BD%BF%E3%81%A3%E3%81%A6%E3%81%BF%E3%82%8B)
・Camera position and orientation estimation using markers (OpenCV + ArUco) - How to make a flying robot
・Investigating ArUco, a lightweight AR library
・For myself: OpenCV 3.4.3 camera calibration