
The accuracy of the Harris and Shi-Tomasi corner detectors introduced earlier is pixel-level. However, applications such as tracking, 3D reconstruction, and camera calibration need more accurate corner coordinates, that is, sub-pixel accuracy.

The principle

OpenCV's sub-pixel corner detection is based on the observation that the vector from the sub-pixel corner point to any surrounding pixel should be perpendicular to the image gray gradient at that pixel. The sub-pixel coordinates are then obtained by iteratively minimizing the resulting error function.

In the figure:

  • $q$: the sub-pixel corner point to be found;
  • $p_i$: a point in the neighborhood of $q$;
  • $(p_i - q)$: the first vector;
  • $G_i$: the gray gradient at $p_i$, the second vector.

As shown above:

  • $p_0$ lies in a flat white area, where $G_0 = 0$, so $G_0 \cdot (p_0 - q) = 0$;
  • $p_1$ lies on an edge, where $G_1 \neq 0$, but the gradient direction is perpendicular to $(p_1 - q)$, so $G_1 \cdot (p_1 - q) = 0$.

So every point $p_i$ around the true corner satisfies the condition: $G_i \cdot (p_i - q) = 0$.

Then solve with the least-squares method. Rearranging $G_i \cdot (p_i - q) = 0$ gives:

$$G_i q = G_i p_i$$

Multiplying both sides by $G_i^T$:

$$G_i^T G_i q = G_i^T G_i p_i$$

Summing over all $N$ points in the window and solving for $q$:

$$q = \left(\sum_i^N G_i^T G_i\right)^{-1} \sum_i^N G_i^T G_i\, p_i$$

Computing $G_i$ and $p_i$

The corner coordinates detected by the Harris or Shi-Tomasi algorithm are integers; denote such a corner $q_0$. Centered on $q_0$, take a window; the points inside it form the set $p_i$. For each $p_i$, the gradient, its transpose, and their product are:


$$G_i = \begin{bmatrix} dx & dy \end{bmatrix}$$

$$G_i^T = \begin{bmatrix} dx \\ dy \end{bmatrix}$$

$$G_i^T G_i = \begin{bmatrix} dx^2 & dx\,dy \\ dx\,dy & dy^2 \end{bmatrix}$$
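
Summing these per-point products makes the closed-form solution above concrete:

$$\left(\sum_i \begin{bmatrix} dx_i^2 & dx_i\,dy_i \\ dx_i\,dy_i & dy_i^2 \end{bmatrix}\right) q = \sum_i \begin{bmatrix} dx_i^2 & dx_i\,dy_i \\ dx_i\,dy_i & dy_i^2 \end{bmatrix} p_i$$

This is just a 2×2 linear system in the two unknown coordinates of $q$.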

According to the formula above, we can solve for a sub-pixel point $q$ each round; taking that point as the new window center, we solve again and keep iterating. When does the computation stop? There are two criteria (see the sketch after this list):

  • Specify the number of iterations
  • Specify result precision
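
To make the iteration concrete, here is a minimal Kotlin sketch of the procedure under the formulas above. It is a simplified illustration, not OpenCV's actual cornerSubPix implementation (which also applies a weighting mask and the zero zone); the function refineCorner and its parameters are hypothetical names, and the input is assumed to be a single-channel CV_8U image.

import org.opencv.core.CvType
import org.opencv.core.Mat
import org.opencv.core.Point
import org.opencv.imgproc.Imgproc

fun refineCorner(gray: Mat, start: Point, halfWin: Int = 5,
                 maxIter: Int = 40, eps: Double = 0.01): Point {
    // Precompute the image gradients once with Sobel.
    val dx = Mat(); val dy = Mat()
    Imgproc.Sobel(gray, dx, CvType.CV_32F, 1, 0)
    Imgproc.Sobel(gray, dy, CvType.CV_32F, 0, 1)

    var q = start.clone()
    repeat(maxIter) {
        // Accumulate sum(Gi^T Gi) as [[a, b], [b, c]] and sum(Gi^T Gi pi) as (bx, by).
        var a = 0.0; var b = 0.0; var c = 0.0
        var bx = 0.0; var by = 0.0
        val cx = q.x.toInt(); val cy = q.y.toInt()
        for (py in cy - halfWin..cy + halfWin) {
            for (px in cx - halfWin..cx + halfWin) {
                if (px < 0 || py < 0 || px >= gray.cols() || py >= gray.rows()) continue
                val gx = dx.get(py, px)[0]
                val gy = dy.get(py, px)[0]
                a += gx * gx; b += gx * gy; c += gy * gy
                bx += gx * gx * px + gx * gy * py
                by += gx * gy * px + gy * gy * py
            }
        }
        // Solve the 2x2 system by Cramer's rule; give up if it is singular.
        val det = a * c - b * b
        if (det == 0.0) return q
        val nx = (c * bx - b * by) / det
        val ny = (a * by - b * bx) / det
        val shift = Math.hypot(nx - q.x, ny - q.y)
        q = Point(nx, ny)
        if (shift < eps) return q   // criterion 2: precision reached
    }
    return q                        // criterion 1: iteration budget exhausted
}

In practice you would not hand-roll this; OpenCV's cornerSubPix, described next, does the same job more robustly, with the two stopping criteria expressed through TermCriteria.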

API

public static void cornerSubPix(Mat image, Mat corners, Size winSize, Size zeroZone, TermCriteria criteria) 
  • Parameter 1: image, the input image; it must be a single-channel image of type CV_8U or CV_32F.
  • Parameter 2: corners, the corner coordinates, used for both input and output: pixel-level corners are passed in and sub-pixel corners are returned.
  • Parameter 3: winSize, half the side length of the search window. For example, if winSize = Size(5, 5), the actual search window is Size(5*2+1, 5*2+1) = Size(11, 11).
  • Parameter 4: zeroZone, half the size of the dead zone in the middle of the search area; pixels there are skipped during the calculation to avoid possible singularities of the autocorrelation matrix. A value of (-1, -1) means no dead zone is used.
  • Parameter 5: criteria, the termination criteria for the corner refinement iteration.
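
A typical call might look like this (a minimal sketch; gray and corners are assumed to already hold the grayscale image and the pixel-level corners in a CV_32F Mat):

val winSize = Size(5.0, 5.0)            // actual search window: 11 x 11
val zeroZone = Size(-1.0, -1.0)         // no dead zone
val criteria = TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 40, 0.01)
Imgproc.cornerSubPix(gray, corners, winSize, zeroZone, criteria)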

The TermCriteria class

TermCriteria describes the termination criteria of an iterative algorithm. The class has three fields: the first is the termination criterion type, the second is the maximum number of iterations, and the last is the desired accuracy threshold.

public int type;            // Termination condition type
public int maxCount;        // Maximum number of iterations
public double epsilon;      // The desired accuracy threshold

For the first field, the termination criterion type, there are only three choices:

public static final int COUNT = 1;           // Stop when the maximum number of iterations is reached

public static final int MAX_ITER = COUNT;    // An alias of COUNT

public static final int EPS = 2;             // Stop when the desired accuracy is reached
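The types can be combined by addition. For example (the numeric values here are only illustrative):

// Stop after exactly 30 iterations.
val byCount = TermCriteria(TermCriteria.MAX_ITER, 30, 0.0)
// Stop once the corner moves less than 0.01 pixel between iterations.
val byEps = TermCriteria(TermCriteria.EPS, 0, 0.01)
// Stop on whichever condition is met first (the usual choice).
val both = TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 40, 0.01)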

Operation

// bgr, rgb, mBinding, and toGray() are members/helpers of the sample project.
private fun doCornerSubPix() {
    val gray = bgr.toGray()
    val corners = MatOfPoint()
    val maxCorners = 100
    val qualityLevel = 0.01
    val minDistance = 10.0

    Imgproc.goodFeaturesToTrack(
        gray,
        corners,
        maxCorners,
        qualityLevel,
        minDistance,
        Mat(),
        3, false, 0.04
    )

    Log.v(App.TAG, "Number of corners detected: ${corners.rows()}")

    // goodFeaturesToTrack returns integer (pixel-level) coordinates.
    val cornersData = IntArray((corners.total() * corners.channels()).toInt())
    corners.get(0, 0, cornersData)

    for (i in 0 until corners.rows()) {
        Log.v(App.TAG,
            "Corner [" + i + "] = (" + cornersData[i * 2] + ", " + cornersData[i * 2 + 1] + ")")
    }

    // cornerSubPix expects floating-point coordinates, so copy the corners
    // into a CV_32F Mat.
    val matCorners = Mat(corners.rows(), 2, CV_32F)
    val matCornersData = FloatArray((matCorners.total() * matCorners.channels()).toInt())
    for (i in 0 until corners.rows()) {
        // Draw the pixel-level corners for comparison.
        Imgproc.circle(
            rgb, Point(
                cornersData[i * 2].toDouble(),
                cornersData[i * 2 + 1].toDouble()
            ), 4,
            Scalar(0.0, 255.0, 0.0), Imgproc.FILLED
        )
        matCornersData[i * 2] = cornersData[i * 2].toFloat()
        matCornersData[i * 2 + 1] = cornersData[i * 2 + 1].toFloat()
    }

    GlobalScope.launch(Dispatchers.Main) {
        mBinding.ivResult.showMat(rgb)
    }
    matCorners.put(0, 0, matCornersData)

    val winSize = Size(5.0, 5.0)
    val zeroSize = Size(-1.0, -1.0)
    val criteria = TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 40, 0.01)
    Imgproc.cornerSubPix(gray, matCorners, winSize, zeroSize, criteria)
    matCorners.get(0, 0, matCornersData)

    for (i in 0 until corners.rows()) {
        Log.v(App.TAG,
            "Corner SubPix [" + i + "] = (" + matCornersData[i * 2] + ", " + matCornersData[i * 2 + 1] + ")")
    }
}

Effect

The source code

Github.com/onlyloveyd/…