Feature matching means matching features extracted from different images. It is a technique that appears in applications such as object recognition and image stitching. OpenCV provides libraries for this, which are described below.
This time, using OpenCV 3 + Python 3, we will try matching features between an image and a rotated, zoomed version of it, as shown below.
**Draw matching result**
OpenCV (Open Source Computer Vision Library) is a collection of BSD-licensed video / image processing libraries. There are many algorithms for image filtering, template matching, object recognition, video analysis, machine learning, and more.
■ Example of motion tracking using OpenCV (OpenCV Google Summer of Code 2015) https://www.youtube.com/watch?v=OUbUFn71S4s
■ Click here for installation and easy usage
- Install OpenCV 3 (core + contrib) in a Python 3 environment & differences between OpenCV 2 and OpenCV 3 & a simple operation check

■ Click here for still image processing
- Try edge detection with OpenCV
- Perform various filters with OpenCV (gradient, high-pass, Laplacian, Gaussian)
- Extract feature points with OpenCV (AgastFeature, FAST, GFTT, MSER, AKAZE, BRISK, KAZE, ORB, SimpleBlob)
- Face recognition using OpenCV (Haar-like feature classifier)
- Estimate whose face it is using OpenCV (Eigenface, Fisherface, LBPH)
- Recognize the contour and orientation of a shaped object with OpenCV (principal component analysis: PCA, eigenvectors)

■ Click here for video processing
- Try converting videos in real time with OpenCV
- Try converting webcam / camcorder video in real time with OpenCV
- Draw optical flow in real time with OpenCV (Shi-Tomasi method, Lucas-Kanade method)
- Object tracking using OpenCV (tracking feature points specified by mouse with the Lucas-Kanade method)
- Motion template analysis using OpenCV (recognizes objects and their moving directions in real time)
A-KAZE

A-KAZE (Accelerated-KAZE) is a feature detector and binary descriptor that is robust to rotation and scale changes, and it is included in OpenCV's core modules. This article uses it to extract the feature points and feature vectors to be matched.
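As a quick standalone sketch of running the detector by itself (the printed descriptor shape and the output file name are just illustrative):

```python
import cv2

img = cv2.imread("img1.jpg")
akaze = cv2.AKAZE_create()                   # create the A-KAZE detector
kp, des = akaze.detectAndCompute(img, None)  # key points and binary descriptors
print(len(kp), des.shape)                    # number of points, descriptor shape
out = cv2.drawKeypoints(img, kp, None)       # visualize the detected points
cv2.imwrite("keypoints.jpg", out)
```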
KNN, Brute-Force, FLANN

KNN (the k-nearest-neighbor algorithm) selects the K nearest neighbors of a query from the search space and assigns a class label by majority vote. "Training" simply stores the training data as it is, so the learning cost is zero and the method runs fast; it is a representative lazy-learning algorithm. As methods for searching the K nearest neighbors, OpenCV supports brute-force search (Brute-Force) and the Fast Library for Approximate Nearest Neighbors (FLANN).
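To illustrate the k-NN idea itself, here is a minimal sketch in plain NumPy (not OpenCV's API; the function name and toy data are made up for this example):

```python
from collections import Counter

import numpy as np

def knn_classify(query, train_data, train_labels, k=3):
    # "Training" just keeps the samples; all work happens at query time
    dists = np.linalg.norm(train_data - query, axis=1)          # distance to every sample
    nearest = [train_labels[i] for i in np.argsort(dists)[:k]]  # k closest labels
    return Counter(nearest).most_common(1)[0][0]                # majority vote

# Toy example with two 2-D classes
data = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
labels = ["a", "a", "b", "b"]
print(knn_classify(np.array([0.95, 1.0]), data, labels))  # -> "b"
```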
Earlier I wrote that some tutorials do not work on OpenCV 3 (link), and FLANN feature-point matching is one of them: it did not work in my environment (OpenCV 3.1.0 + Python 3.5.2 + Windows 10).
```python
matches = flann.knnMatch(des1, des2, k=2)

# The following error occurs:
# error: C:\dev\opencv-3.1.0\modules\python\src2\cv2.cpp:163: error: (-215)
# The data should normally be NULL! in function NumpyAllocator::allocate
```
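For reference, the `flann` object in the failing call above would typically be built as in the official tutorial (a sketch; the LSH index parameters shown are the usual choices for binary descriptors such as A-KAZE's, and this construction still leads to the error above in my environment):

```python
import cv2

# LSH-based FLANN index, the recommended setting for binary descriptors
FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH,
                    table_number=6,
                    key_size=12,
                    multi_probe_level=1)
search_params = dict(checks=50)  # how many candidates to check during search
flann = cv2.FlannBasedMatcher(index_params, search_params)
```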
FLANN itself works fine from C++ and with OpenCV 2, so if you want to use FLANN, run it in a C++ or OpenCV 2 environment. For the combination of OpenCV 3 and Python 3 used here, we fall back on the brute-force method.
knn.py

```python
# -*- coding: utf-8 -*-
import cv2

# Image 1
img1 = cv2.imread("img1.jpg")
# Image 2
img2 = cv2.imread("img2.jpg")

# Generate the A-KAZE detector
akaze = cv2.AKAZE_create()

# Detect feature points and compute feature vectors (descriptors)
kp1, des1 = akaze.detectAndCompute(img1, None)
kp2, des2 = akaze.detectAndCompute(img2, None)

# Generate the Brute-Force matcher
# (A-KAZE descriptors are binary, so cv2.NORM_HAMMING is also commonly passed here)
bf = cv2.BFMatcher()

# Match the feature vectors with Brute-Force & KNN (k=2 neighbors per feature)
matches = bf.knnMatch(des1, des2, k=2)

# Thin out the matches
ratio = 0.5
good = []
for m, n in matches:
    if m.distance < ratio * n.distance:
        good.append([m])

# Draw the corresponding feature points
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good, None, flags=2)

# Display the image
cv2.imshow('img', img3)

# Finish on a key press
cv2.waitKey(0)
cv2.destroyAllWindows()
```
The matching results are thinned out with the ratio test proposed by D. Lowe before being displayed.
lowe.py

```python
# Thin out the matches (Lowe's ratio test)
ratio = 0.5
lowe = []
for m, n in matches:
    if m.distance < ratio * n.distance:
        lowe.append([m])

# Draw the corresponding feature points
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, lowe, None, flags=2)
```
This approach gives a visual sense of the matching status across the entire image; there appear to be no false positives in this test.
Now let's change part of the code so that only the pairs with the best matching scores are drawn.
knn_good.py

```python
# Sort the match pairs by the distance of the best match (smaller is better)
good = sorted(matches, key=lambda x: x[0].distance)

# Draw the 30 best corresponding feature points
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good[:30], None, flags=2)
```
Different feature points than before are drawn this time. Because the features in this image are dispersed, the good matches are spread across the whole image; for an image whose feature points are concentrated in one region, the best matches would cluster there.
The detected feature points are stored in KeyPoint with the following attributes.

attribute | Contents |
---|---|
pt | Point coordinates (x, y) |
size | Diameter of the feature point |
angle | Orientation in the range [0, 360), measured clockwise with the y-axis pointing down; -1 if it cannot be computed |
response | Strength of the feature point |
octave | Pyramid layer in which the feature point was detected |
class_id | ID of the class the feature point belongs to |
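A minimal sketch for inspecting these attributes, assuming the kp1 list obtained from detectAndCompute in knn.py above:

```python
# Look at the first feature point detected in image 1
kp = kp1[0]
print(kp.pt)        # (x, y) coordinates
print(kp.size)      # diameter of the feature point
print(kp.angle)     # orientation in [0, 360); -1 if not computed
print(kp.response)  # strength of the feature point
print(kp.octave)    # pyramid layer where it was detected
print(kp.class_id)  # class ID of the feature point
```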
The following items are stored in DMatch as the result of matching between features.

attribute | Contents |
---|---|
distance | Distance between the feature descriptors; the smaller the distance, the better the match |
trainIdx | Index of the feature on the training side (kp2 / des2) |
queryIdx | Index of the feature on the query side (kp1 / des1) |
imgIdx | Index of the training image |
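These attributes can likewise be inspected on the matches list from knn.py above (here, the best match of the first query feature point):

```python
# Each element of matches holds the k=2 nearest neighbors of one query feature
m = matches[0][0]  # best match for the first feature point of image 1
print(m.distance)  # descriptor distance; smaller is better
print(m.queryIdx)  # index into kp1 / des1
print(m.trainIdx)  # index into kp2 / des2
print(m.imgIdx)    # index of the training image (0, since there is only one)
```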