Table of contents
- Written at the beginning
- What is corner detection? What’s the use?
- Corner detection in OpenCV
- Harris corner detection
- Shi-Tomasi corner detection
- Feature detection, starting with ORB
- Written at the end
- Previous articles
- Download
Written at the beginning
Without quite noticing it, this rambling series on computer vision has reached its fourth article. We started from the most basic graphics technique, noise removal, and then learned how to find image contours, detect image edges, and so on, but we still have not touched a core topic: features. In data science and artificial intelligence we hear this word constantly. The things we did before, such as finding contours, computing a contour's area and perimeter, and detecting image edges, can all more or less be used in many image-related tasks to find or build features, but we have not yet dealt directly with the concept of a feature in computer vision. Corner detection is our first real contact with it.
What is corner detection? What’s the use?
Corner detection, as its name suggests, detects corners in images. A corner is essentially a local feature that can be located easily in an image, and corners appear everywhere in man-made objects such as tables, chairs, cars, and houses (nature, by contrast, is full of curves, as you can see on any walk outside). Just as useful is the fact that corners can be detected accurately, even with sub-pixel accuracy.
As for what corners are for: essentially, corners serve as image features. Think of a jigsaw puzzle. Corner information is obviously of great value for image stitching and alignment, and it is just as valuable for tracking changes in an image or in the objects it contains, for example in image detection and motion tracking.
Corner detection in OpenCV
Our main tool is still OpenCV, so we will keep talking about corner detection in OpenCV. Corner detection is relatively simple; OpenCV mainly offers Harris corner detection and Shi-Tomasi corner detection. Beyond these general corner detectors, OpenCV also contains more complete feature detection algorithms built on corner detection, such as ORB, ASIFT and so on. Those full feature point detectors are more suitable for direct use, but since we are starting from the beginning, we still need to begin with the most basic corner detectors.
Harris corner detection
Harris corner detection comes from a basic assumption about how the human eye recognizes corners: we choose a small window over the image as our local observation. If shifting that window up, down, left, or right all produce a large change in the surrounding intensity, the point should be a corner; if the change is large in one direction but small in the other, the point is probably on a line (an edge). So by fixing a window and comparing how its content changes as it shifts, we can find the corner points in a region.
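To make the intuition above a bit more concrete, here is the standard formulation behind the Harris detector (a sketch of the usual derivation, not something spelled out in the original article). The change caused by shifting a window by $(u, v)$ is approximated by a quadratic form in a $2 \times 2$ matrix $M$ built from the image gradients $I_x$ and $I_y$; this $M$ is the matrix whose eigenvalues the score formulas further below are based on:

$$E(u, v) = \sum_{x, y} w(x, y)\,\big[I(x + u, y + v) - I(x, y)\big]^2 \approx \begin{pmatrix} u & v \end{pmatrix} M \begin{pmatrix} u \\ v \end{pmatrix}$$

$$M = \sum_{x, y} w(x, y) \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}$$

Here $w(x, y)$ is the window (weighting) function. Two large eigenvalues of $M$ mean a large change in every direction, i.e. a corner; one large and one small eigenvalue means an edge; two small eigenvalues mean a flat region.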
Accordingly, we need to set three parameters: the size of the window, the size of the neighborhood (that is, the corner range, or how large a corner might be), and the threshold. For the Harris algorithm, we can call the cornerHarris function in OpenCV directly, and its three key parameters are:
- blockSize – the size of the neighborhood considered for corner detection.
- ksize – the aperture size of the Sobel derivative used internally.
- k – the free parameter of the Harris detector equation; typical values lie in [0.04, 0.06].
Here is the code. The image comes from OpenCV's GitHub project; you can download the original image there, or download all the code and images from Fongtian's GitHub project (Rambling on Computer Vision):
import cv2
import numpy as np

img = cv2.imread('/home/fonttian/Data/image/OpenCV/blox.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# cornerHarris requires a float32 input image
gray = np.float32(gray)
blockSize = 2  # neighborhood size
ksize = 3      # aperture size of the Sobel derivative
k = 0.04       # Harris free parameter
dst = cv2.cornerHarris(src=gray, blockSize=blockSize, ksize=ksize, k=k)

# Mark pixels whose response exceeds 1% of the maximum response as corners (True), others as False
a = dst > 0.01 * dst.max()
img[a] = [0, 0, 255]

# show
cv2.imshow('corners', img)
cv2.waitKey(0)
In the code above, we do not use the corner response data directly. Instead, we filter it first and only then draw the corner points, which avoids the problem of getting far too many corners. In addition, we can simplify the code by using a more convenient corner detector: the goodFeaturesToTrack function. See the code in the next section for details.
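As a brief aside, we mentioned earlier that corners can be located with sub-pixel accuracy. The sketch below is modelled on the standard OpenCV Harris-with-sub-pixel example and is an illustration rather than part of the original article; it refines the Harris corners found above with cv2.cornerSubPix, reusing img, gray, and dst from the previous snippet:

# Threshold the Harris response and take connected-component centroids as initial corner estimates
ret, mask = cv2.threshold(dst, 0.01 * dst.max(), 255, 0)
mask = np.uint8(mask)
ret, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

# Refine the corner locations to sub-pixel accuracy
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.001)
refined = cv2.cornerSubPix(gray, np.float32(centroids), (5, 5), (-1, -1), criteria)
print(refined[:5])  # the first few sub-pixel corner coordinates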
Shi-Tomasi corner detection
Shi-Tomasi corner detection is an improvement on Harris corner detection. The stability of the Harris algorithm depends on the empirical value k, whose optimal value cannot be obtained directly. Shi and Tomasi found, however, that corner stability is actually related to the smaller eigenvalue of the matrix M, so they used the smaller eigenvalue directly as the score, creating a new algorithm that needs no tuning of k.
The first formula below is Harris's original score; the second is the improved score used in Good Features to Track:

$$R = \lambda_1 \lambda_2 - k(\lambda_1 + \lambda_2)^2$$

$$R = \min(\lambda_1, \lambda_2)$$
For Shi-Tomasi corner detection, we can use the goodFeaturesToTrack function in OpenCV directly to obtain the best corners, selecting the Shi-Tomasi algorithm as the detection method. The parameters of this function are as follows:
- maxCorners: the maximum number of corners to return. A value of 0 means no limit is set and all detected corners are returned.
- qualityLevel: quality coefficient (a positive number less than 1.0, generally between 0.01 and 0.1) indicating the minimum acceptable corner quality. This coefficient is multiplied by the best corner score (that is, the smaller eigenvalue above) to give the minimum acceptable score; for example, if the best corner score is 1500 and the quality coefficient is 0.01, then every corner with a score below 15 is ignored.
- minDistance: minimum Euclidean distance between corners. Corners closer than this distance to a stronger corner are ignored.
- mask: optional region of interest, specifying the region in which to detect corners.
- blockSize: the neighborhood (window) size for corner detection; default 3.
- useHarrisDetector: which corner detection method to use. If true, Harris corner detection is used; if false, the Shi-Tomasi algorithm is used. The default is false.
- k: default 0.04; used by the Harris detector only, not by Shi-Tomasi.
What is special about this function is that it performs for us the filtering we did by hand with dst > 0.01 * dst.max(). The function first finds all candidate corners, discards those below the quality level, and sorts the rest by quality. It then keeps the highest-quality corner, deletes every corner within the minimum distance of it, and repeats this step on the remaining candidates until it has the final N best corners. The code and effect are as follows:
import cv2

img = cv2.imread('/home/fonttian/Data/image/OpenCV/blox.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

maxCorners = 0       # example value: 0 means no limit on the number of corners
qualityLevel = 0.01  # example value: minimum accepted quality relative to the best corner
minDistance = 10     # example value: minimum Euclidean distance between corners
corners = cv2.goodFeaturesToTrack(gray, maxCorners, qualityLevel, minDistance)
# print(len(corners))
# print(corners)
for i in corners:
    x, y = i.ravel()
    cv2.circle(img, (int(x), int(y)), 2, (0, 0, 255), -1)
cv2.imshow('Shi-Tomasi', img)
cv2.waitKey(0)
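For intuition about the selection procedure just described (reject candidates below the quality level, sort by quality, then greedily keep corners while suppressing anything within the minimum distance), here is a toy NumPy sketch. It only illustrates the idea and is not how OpenCV implements goodFeaturesToTrack internally; the candidate points and scores at the bottom are made up for the example:

import numpy as np

def select_corners(points, scores, quality_level, min_distance, max_corners=0):
    # Reject candidates whose score is below quality_level * best score
    keep = scores >= quality_level * scores.max()
    points, scores = points[keep], scores[keep]
    # Sort the remaining candidates by score, strongest first
    order = np.argsort(-scores)
    selected = []
    for idx in order:
        p = points[idx]
        # Keep the point only if it is far enough from every corner already selected
        if all(np.linalg.norm(p - q) >= min_distance for q in selected):
            selected.append(p)
            if max_corners > 0 and len(selected) == max_corners:
                break
    return np.array(selected)

pts = np.array([[10, 10], [12, 11], [50, 40], [80, 90]], dtype=float)
scr = np.array([0.9, 0.8, 0.5, 0.05])
print(select_corners(pts, scr, quality_level=0.1, min_distance=5))  # (12, 11) is suppressed by distance, (80, 90) rejected by quality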
Feature detection, starting with ORB
What we have discussed so far is direct corner detection, and a corner is essentially one kind of image feature point. Besides these algorithms that obtain corners directly, we also need many other feature point detection algorithms that can be used off the shelf, and those algorithms likewise build on various basic operators. Take the ORB algorithm as an example.
The ORB feature combines the FAST feature point detector with the BRIEF feature descriptor, improving and optimizing both on top of their original designs.
First, it uses the FAST feature point detector to find feature points, and then it uses the Harris corner measure to pick out the N feature points with the largest Harris corner response from among them.
Beyond this description, the ORB algorithm itself involves a great deal more, such as rotation invariance (which the FAST feature alone does not have), multi-scale invariance, non-maximum suppression, and so on. Explaining a feature extraction algorithm like ORB in detail takes a lot of ink, so we will leave that to the next article. Using it through OpenCV, however, is quite simple:
import cv2

img = cv2.imread('/home/fonttian/Data/image/OpenCV/blox.jpg')
orb = cv2.ORB_create()
kp = orb.detect(img, None)      # detect keypoints
kp, des = orb.compute(img, kp)  # compute descriptors for the detected keypoints
img = cv2.drawKeypoints(img, kp, img, color=(0, 255, 0), flags=0)
cv2.imshow('p', img)
cv2.waitKey(0)
In the code above, the first line reads the image and the second creates the ORB detector; we then use detect for feature detection, compute for the feature (descriptor) calculation, and finally draw the result. des is not actually used here; we will talk about how to use it later. Just from the code, though, it is clear that creation, detection, and computation each need only one line, and even the drawing step only calls a function that OpenCV has already wrapped for us, so it could hardly be simpler. The effect is shown below, and we can also change the drawing style through the function's parameters:
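For example, one way to change the drawing style (a small sketch, assuming the kp list from the code above) is to pass the DRAW_RICH_KEYPOINTS flag so each keypoint is drawn with its size and orientation instead of a plain dot:

# Draw keypoints with their size and orientation circles
img_rich = cv2.drawKeypoints(img, kp, None, color=(0, 255, 0),
                             flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow('ORB rich keypoints', img_rich)
cv2.waitKey(0)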
Similar to ORB, there are many other useful feature detectors in OpenCV that are just as easy to use. We will cover those in the next article.
Written at the end
Previous articles
- Starting with noise removal, what kind of PHOTOSHOP do you need with OpenCV?
- The cornerstone of computer vision – filtering
- Discover your body shape – OpenCV image contours
- Image edge detection, detection or simplification
Download
Fongtian's GitHub project: Rambling on Computer Vision