AR markers are a somewhat old technology, but you still see them used for self-localization in robot systems. I wanted to try AR markers, so I used a Raspberry Pi I had on hand to recognize ArUco markers.
The execution result looks like this: (https://www.youtube.com/watch?v=aepFM_JsxbU). The marker outline, its ID, and the xyz axes are drawn on the image. Since you can also get pitch, roll, and yaw, this can be used for many purposes.
Environment
・RasPi4 (it should also work with a RasPi3)
・USB camera (Logitech) → a Raspberry Pi camera can also be used.
First, set up the environment following the article below by "Karaage". OpenCV is required for the recognition. It went very smoothly. I am always grateful.
・[How to build an image recognition environment with deep learning of Raspberry Pi 4 from zero to 1 hour](https://karaage.hatenadiary.jp/entry/rpi4-dl-setup)
Install additional packages
As mentioned in the article below, I installed everything at once.
・[Setup for reinforcement learning of Raspberry Pi 4](https://note.com/npaka/n/n034c8ee6e5cc)
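To sanity-check the setup before moving on, a quick import test like this is enough (a minimal sketch; it only confirms that the `aruco` module bundled with `opencv-contrib-python` is importable):

```python
# Quick sanity check: OpenCV and its aruco module must be importable.
import cv2
from cv2 import aruco

print(cv2.__version__)    # aruco ships with opencv-contrib-python
print(aruco.DICT_4X4_50)  # prints the dictionary enum value if aruco works
```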
Now you are ready.
Marker generation → see the article below for generation (a short generation sketch also follows this list).
・(Camera calibration) → if you just want to get it running for now, you can put this off.
・Marker recognition
・Pose estimation of AR markers with Python
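For the marker-generation step, a sketch like the following should work (assumptions on my part: the pre-4.7 `cv2.aruco` API, which is also what the recognition script below uses; `marker_id = 0` and the 200 px size are arbitrary choices):

```python
# Minimal marker-generation sketch (pre-4.7 cv2.aruco API assumed).
import cv2
from cv2 import aruco

dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
marker_id = 0  # any ID in 0..49 is valid for DICT_4X4_50
marker_image = aruco.drawMarker(dictionary, marker_id, 200)  # 200 x 200 px
cv2.imwrite("marker_0.png", marker_image)
```

Print the saved image at a known physical size, since the recognition script needs the real side length (`marker_length`).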
・You should be able to use the code below just by copying it.
ARdetect.py
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import cv2
from cv2 import aruco


def main():
    cap = cv2.VideoCapture(1)  # Change the index depending on the camera used

    # Marker side length
    marker_length = 0.056  # [m]

    # Select the marker dictionary
    dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)

    # camera_matrix = np.load("mtx.npy")
    # distortion_coeff = np.load("dist.npy")
    # If you have calibrated your camera, load the saved values as above.
    camera_matrix = np.array([[1.42068235e+03, 0.00000000e+00, 9.49208512e+02],
                              [0.00000000e+00, 1.37416685e+03, 5.39622051e+02],
                              [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
    distortion_coeff = np.array([1.69926613e-01, -7.40003491e-01, -7.45655262e-03,
                                 -1.79442353e-03, 2.46650225e+00])

    while True:
        ret, img = cap.read()
        if not ret:
            break

        corners, ids, rejectedImgPoints = aruco.detectMarkers(img, dictionary)
        aruco.drawDetectedMarkers(img, corners, ids, (0, 255, 255))

        if len(corners) > 0:
            # Process each detected marker
            for i, corner in enumerate(corners):
                rvec, tvec, _ = aruco.estimatePoseSingleMarkers(
                    corner, marker_length, camera_matrix, distortion_coeff)

                # Remove the redundant axes
                tvec = np.squeeze(tvec)
                rvec = np.squeeze(rvec)

                # Convert the rotation vector to a rotation matrix (Rodrigues)
                rvec_matrix = cv2.Rodrigues(rvec)[0]
                # Translation vector as a column vector
                transpose_tvec = tvec[np.newaxis, :].T
                # Combine into a projection matrix
                proj_matrix = np.hstack((rvec_matrix, transpose_tvec))
                # Decompose into Euler angles; this is where pitch/roll/yaw come from
                euler_angle = cv2.decomposeProjectionMatrix(proj_matrix)[6]  # [deg]

                print("ID : " + str(ids[i]))

                # Visualization: draw the xyz axes on the marker
                # (renamed to cv2.drawFrameAxes in newer OpenCV versions)
                draw_pole_length = marker_length / 2  # real length [m]
                aruco.drawAxis(img, camera_matrix, distortion_coeff,
                               rvec, tvec, draw_pole_length)

        cv2.imshow('drawDetectedMarkers', img)
        if cv2.waitKey(10) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()


if __name__ == '__main__':
    main()
```
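About the commented-out `mtx.npy` / `dist.npy` lines: those files come from a camera calibration. A minimal calibration sketch, assuming a chessboard with 7x6 inner corners photographed several times and the shots saved as `calib_*.jpg` (both the pattern size and the filenames are my assumptions; adjust them to your own setup):

```python
# Minimal camera-calibration sketch: produces mtx.npy / dist.npy for the script above.
# Assumptions: chessboard with 7x6 inner corners, photos saved as calib_*.jpg.
import glob
import numpy as np
import cv2

pattern = (7, 6)  # inner corners per row and column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern, None)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
np.save("mtx.npy", mtx)
np.save("dist.npy", dist)
```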
Please use these when generating the markers yourself is a hassle.