SURF in Python with OpenCV (Speeded-Up Robust Features)
The last post introduced SIFT for keypoint detection and description, but SIFT is relatively slow. This post introduces an optimized version of it: SURF (Speeded-Up Robust Features), an accelerated variant of SIFT.
- In SIFT, Lowe approximates the Laplacian of Gaussian with a Difference of Gaussians to build the scale space.
- SURF instead approximates it with box filters, whose convolution is cheap to evaluate with the help of an integral image, and different scales can be computed in parallel. SURF also relies on the determinant of the Hessian matrix for both scale and location.
- SURF adds many optimizations to speed up each step. It is about three times faster than SIFT, with comparable performance.
- SURF handles blurred and rotated images well, but copes poorly with viewpoint and illumination changes.
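To illustrate the integral-image trick that makes SURF's box filters cheap, here is a minimal NumPy sketch (the function names are made up for illustration, not OpenCV's API): once the summed-area table is built, the sum over any rectangle takes only four lookups, regardless of the filter size.

```python
import numpy as np

def integral_image(img):
    # Summed-area table with a zero row/column prepended,
    # so lookups at the rectangle's top-left edge need no special cases.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    # Sum of img[y0:y1, x0:x1] in O(1) using four table lookups.
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 3, 3))  # 5 + 6 + 9 + 10 = 30
```

Because every box sum costs the same four lookups, a large box filter is no more expensive than a small one, which is what lets SURF evaluate its scale space so quickly.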
1. Results
With SURF's default parameters:
Far too many keypoints are detected, covering the original image. After raising the Hessian threshold, roughly 50 keypoints remain; the result is shown below. U-SURF does not compute orientation, so all keypoints are drawn with the same direction; its result is shown next. By default the 64-D descriptor is used; switching to 128-D gives essentially the same picture.
2. Principle
OpenCV provides surf.detect(), surf.compute(), and related methods to find keypoints and descriptors.
2.1 Why is SURF faster than SIFT?
- At any scale, the wavelet responses can be computed cheaply using integral images.
- Many applications do not require rotation invariance, so skipping the orientation computation speeds up the process.
- SURF offers such a feature, called Upright SURF or U-SURF. It is faster and remains robust to rotations of up to about ±15°.
- OpenCV controls this with the upright flag: if it is 0, the orientation is computed; if it is 1, it is not, which is faster.
- The lower the descriptor dimension, the faster the computation and matching, but the less distinctive the feature. OpenCV supports both via the extended flag: 0 for 64-D and 1 for 128-D (64-D is the default).
- Another important improvement is using the sign of the Laplacian (the trace of the Hessian matrix) at each potential interest point. It adds no computational cost, since it is already available from detection. The sign of the Laplacian distinguishes bright blobs on a dark background from the reverse.
- In the matching phase, only features with the same type of contrast are compared. This minimal extra information allows faster matching without degrading descriptor performance.
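The Laplacian-sign trick can be sketched as follows. This is a toy example with hypothetical two-element descriptors and a precomputed `sign` field, not OpenCV's actual matcher: a candidate pair is considered only when both keypoints have the same Laplacian sign, so the distance computation is skipped entirely for incompatible pairs.

```python
import numpy as np

# Hypothetical keypoint records: +1 means a bright blob on a dark
# background, -1 the reverse. Real SURF stores this sign for free
# because the Hessian is already computed during detection.
query = [{'desc': np.array([1.0, 0.0]), 'sign': +1},
         {'desc': np.array([0.0, 1.0]), 'sign': -1}]
train = [{'desc': np.array([0.9, 0.1]), 'sign': +1},
         {'desc': np.array([0.8, 0.2]), 'sign': -1},
         {'desc': np.array([0.1, 0.9]), 'sign': -1}]

def match(query, train):
    matches = []
    for qi, q in enumerate(query):
        # Only compare candidates with the same contrast type: a
        # bright-on-dark blob can never match a dark-on-bright one.
        cands = [(ti, np.linalg.norm(q['desc'] - t['desc']))
                 for ti, t in enumerate(train) if t['sign'] == q['sign']]
        if cands:
            matches.append((qi, min(cands, key=lambda c: c[1])[0]))
    return matches

print(match(query, train))  # [(0, 0), (1, 2)]
```

On average about half of the candidate comparisons are skipped, which is where the matching speedup comes from.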
2.2 How to Obtain SIFT and SURF?
- Stackoverflow.com/questions/6…
- www.pianshen.com/article/828…
SIFT and SURF are patented algorithms that were removed from the main OpenCV package after 3.0 and moved to opencv-contrib-python. So there are a couple of ways to obtain them: build OpenCV from source with the contrib modules enabled, or install an older release of opencv-contrib-python that still includes them.
To sum up, the second method (installing an older opencv-contrib-python) is the more convenient and feasible one, and only requires two steps:
- Uninstall opencv-python
- Install the specified version: opencv-contrib-python==3.4.2.17
pip uninstall opencv-python
pip install opencv-contrib-python==3.4.2.17 -i https://pypi.tuna.tsinghua.edu.cn/simple
You can then verify that the install worked.
3. Source code
# Find SURF keypoints and descriptors and draw them
import cv2
from matplotlib import pyplot as plt

img = cv2.imread('images/butterfly.jpg', 0)

# Create a SURF object. Parameters can be passed at creation time or set later.
# Set the Hessian threshold to 400
sift = cv2.xfeatures2d.SIFT_create()
print('sift: ', sift)
surf = cv2.xfeatures2d.SURF_create(400)
print('surf: ', surf,
      '\ndefault parameters\thessianThreshold: ', surf.getHessianThreshold(),
      ' upright: ', surf.getUpright(),
      ' extended: ', surf.getExtended(),
      ' descriptors: ', surf.descriptorSize())

# Find SURF keypoints and descriptors
# kp: list of keypoints, des: numpy array of descriptors
kp, des = surf.detectAndCompute(img, None)

# Draw the keypoints on the image
img2 = cv2.drawKeypoints(img, kp, None, (255, 0, 0), 4)
plt.imshow(img2), plt.xticks([]), plt.yticks([]), plt.title('more keypoints'), plt.show()
print('keypoints: ', len(kp))

# Check the current Hessian threshold
# print(surf.getHessianThreshold())
# Set the Hessian threshold to 50000 (300~500 is usually preferable)
surf.setHessianThreshold(50000)
print(' parameters\thessianThreshold: ', surf.getHessianThreshold(),
      ' upright: ', surf.getUpright(),
      ' extended: ', surf.getExtended(),
      ' descriptors: ', surf.descriptorSize())

# Compute keypoints and descriptors again
kp, des = surf.detectAndCompute(img, None)
print('keypoints: ', len(kp))
# Draw the keypoints on the image
img2 = cv2.drawKeypoints(img, kp, None, (255, 0, 0), 4)
plt.imshow(img2), plt.xticks([]), plt.yticks([]), plt.title('less than 50 keypoints'), plt.show()

# U-SURF does not compute orientation
# print(surf.getUpright())
surf.setUpright(True)
print(' parameters\thessianThreshold: ', surf.getHessianThreshold(),
      ' upright: ', surf.getUpright(),
      ' extended: ', surf.getExtended(),
      ' descriptors: ', surf.descriptorSize())
# Detect keypoints again and draw them
kp = surf.detect(img, None)
print('keypoints: ', len(kp))
img2 = cv2.drawKeypoints(img, kp, None, (255, 0, 0), 4)
plt.imshow(img2), plt.xticks([]), plt.yticks([]), plt.title('U-SURF'), plt.show()
# All keypoints are drawn with the same orientation, and detection is much
# faster than before. U-SURF is a good choice when orientation does not
# matter (e.g. panorama stitching).

# Check the size of the descriptor
# print(surf.descriptorSize())
# extended is False, so the default is 64-D
# print(surf.getExtended())
# Switch the descriptor to 128-D
surf.setExtended(True)
print(' parameters\thessianThreshold: ', surf.getHessianThreshold(),
      ' upright: ', surf.getUpright(),
      ' extended: ', surf.getExtended(),
      ' descriptors: ', surf.descriptorSize())
kp, des = surf.detectAndCompute(img, None)
print('keypoints: ', len(kp))
img2 = cv2.drawKeypoints(img, kp, None, (255, 0, 0), 4)
plt.imshow(img2), plt.xticks([]), plt.yticks([]), plt.title('128D res'), plt.show()