Recently, while working on an image matching problem, I came across the OpenCV library. It is very powerful: cross-platform, with interfaces for multiple languages, and it provides a large set of general-purpose algorithms for computer vision and image processing, with applications including object recognition, face recognition, image segmentation, machine vision and motion analysis. Since the problem involves image-processing concepts and algorithms that programmers who spend their days on business systems rarely touch, I want to share how I worked through it.

Installing OpenCV

OpenCV is cross-platform and supports all mainstream operating systems. Here I use OS X and IntelliJ IDEA as an example to walk through the installation and configuration of the library.

The latest version of OpenCV is 3.2.0, so let's look at the installation instructions for this version. The page covers Linux, Windows, Android, iOS and other platforms. Since I use Java, the installation process breaks down into the following steps:

  1. Select a directory and use git to download the OpenCV source code
git clone git://github.com/opencv/opencv.git
  2. Go to the opencv directory and switch to the 2.4 branch

cd opencv
git checkout 2.4
  3. Create a build directory in the opencv directory to store the compiled files
mkdir build
cd build
  4. Generate the build files with cmake (the BUILD_SHARED_LIBS=OFF option links OpenCV statically, which makes the resulting Java native library self-contained)
cmake -DBUILD_SHARED_LIBS=OFF ..

When the "To be built" line in cmake's summary output includes "java", the configuration has succeeded and the Java bindings will be part of the build.
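For reference, the relevant lines of the cmake summary look roughly like this (trimmed; the exact module list depends on what cmake detects on your machine, including your Java and Ant installation):

--   OpenCV modules:
--     To be built:    core flann imgproc highgui features2d calib3d ... java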

  5. Compile the whole project

make -j8

After executing the make command, check the bin directory inside the build directory. If it contains a file named opencv-2413.jar, OpenCV has been compiled and installed successfully.

After the above steps, you can already access OpenCV through its Java interface. To use OpenCV from IntelliJ IDEA, two more configuration steps are required:

  1. Add the opencv-2413.jar from the bin directory of the build above to the project Libraries (File → Project Structure → Libraries)

    Then select the location of opencv-2413.jar.

  2. Write a class that tests the OpenCV environment

package org.study.image.openCV;

import org.opencv.core.Core;

public class OpenCVTest {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }
}

This class does just one thing: it loads the native OpenCV library.
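As a slightly stronger smoke test, you can also print the version of the binding after loading the native library. This is a minimal sketch (the class name OpenCVVersionTest is my own; Core.VERSION is a constant provided by the Java binding):

package org.study.image.openCV;

import org.opencv.core.Core;

public class OpenCVVersionTest {
    public static void main(String[] args) {
        // Load the native OpenCV library; this throws UnsatisfiedLinkError if the JVM cannot find it
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // Print the version of the loaded binding to confirm that it actually works
        System.out.println("OpenCV version: " + Core.VERSION);
    }
}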

  3. Configure the run parameters of the Java application

    If you run the test class directly at this point, the JVM cannot locate the native library and fails with:

UnsatisfiedLinkError [no opencv_java2413 in java.library.path]
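The error means the JVM does not know where to find the native library that System.loadLibrary asks for. The fix is to point java.library.path at the lib directory produced by the build: in the IntelliJ IDEA Run Configuration, add a VM option such as (the path is an example; substitute your own build location):

-Djava.library.path=/path/to/opencv/build/lib

With that option in place, the test class runs without errors.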

Problem description

The problem to solve is to determine whether one image appears inside another, such as with the following two images:

To the naked eye, the small image clearly appears in the lower-left region of the large one. But how can a computer, rather than a human, make that judgment?

Pixel-based template matching

The first approach I found is template matching in OpenCV. It takes a template image (the small image in the example above) and searches another image for regions similar to the template: the algorithm slides a window over the image to be matched and, by comparing the similarity between the template and each subimage, finds the subimage with the highest similarity.

The sliding window moves the comparison region one pixel at a time, from left to right and from top to bottom. At each position, a metric is computed that measures how similar the template's pixel values are to those of the corresponding region of the source image; when the similarity is high enough, the target is considered found. Obviously, what "similarity" means here depends on the specific formula used, and different algorithms give different results.
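To make the sliding-window idea concrete, here is a minimal, illustrative Java sketch of the comparison loop using the squared-difference metric on single-channel images. It is deliberately naive; in practice you would call Imgproc.matchTemplate, which implements the same idea far more efficiently:

// Naive sliding-window matching with the sum-of-squared-differences metric.
// Assumes: import org.opencv.core.Mat; import org.opencv.core.Point;
// source and template are single-channel (grayscale) Mats.
static Point slidingWindowMatch(Mat source, Mat template) {
    double bestScore = Double.MAX_VALUE;
    Point bestLoc = new Point(0, 0);
    // Visit every position where the template fits entirely inside the source
    for (int y = 0; y <= source.rows() - template.rows(); y++) {
        for (int x = 0; x <= source.cols() - template.cols(); x++) {
            double score = 0;
            for (int ty = 0; ty < template.rows(); ty++) {
                for (int tx = 0; tx < template.cols(); tx++) {
                    double diff = source.get(y + ty, x + tx)[0] - template.get(ty, tx)[0];
                    score += diff * diff; // accumulate the squared difference
                }
            }
            if (score < bestScore) { // a smaller difference means a more similar region
                bestScore = score;
                bestLoc = new Point(x, y);
            }
        }
    }
    return bestLoc;
}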

There are currently six such algorithms available in OpenCV: TM_SQDIFF (squared difference), TM_SQDIFF_NORMED (normalized squared difference), TM_CCORR (cross-correlation), TM_CCORR_NORMED (normalized cross-correlation), TM_CCOEFF (correlation coefficient) and TM_CCOEFF_NORMED (normalized correlation coefficient).
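One detail worth knowing: for the two squared-difference methods the best match is the minimum of the result matrix, while for the four correlation-based methods it is the maximum. A small helper (a sketch, assuming the result Mat produced by Imgproc.matchTemplate and imports from org.opencv.core and org.opencv.imgproc) makes the choice explicit:

// Pick the best-match location from a matchTemplate result matrix,
// honoring the direction of the chosen metric.
static Point bestMatchLocation(Mat result, int matchMethod) {
    Core.MinMaxLocResult mlr = Core.minMaxLoc(result);
    // For the squared-difference methods, smaller values mean better matches
    if (matchMethod == Imgproc.TM_SQDIFF || matchMethod == Imgproc.TM_SQDIFF_NORMED) {
        return mlr.minLoc;
    }
    // For the correlation methods, larger values mean better matches
    return mlr.maxLoc;
}

This is why the example below, which uses TM_SQDIFF_NORMED, reads the match location from minLoc.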

Here is the code for template matching using OpenCV in Java:

package org.study.image.openCV;

import org.opencv.core.*;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;

public class OpenCVTest {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat source, template;
        // Read the files into OpenCV's Mat format
        source = Highgui.imread("/Users/niwei/Downloads/artwork.jpeg");
        template = Highgui.imread("/Users/niwei/Downloads/templates.jpeg");
        // Create a result matrix with one cell for every possible template position
        Mat result = Mat.zeros(source.rows() - template.rows() + 1,
                source.cols() - template.cols() + 1, CvType.CV_32FC1);
        // Slide the template over the source image using the normalized squared-difference metric
        Imgproc.matchTemplate(source, template, result, Imgproc.TM_SQDIFF_NORMED);
        Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1);
        // Find the most likely match; MinMaxLocResult holds the min/max values and their x, y locations
        Core.MinMaxLocResult mlr = Core.minMaxLoc(result);
        // For TM_SQDIFF_NORMED the minimum location is the best match
        Point matchLoc = mlr.minLoc;
        // Draw a rectangle around the matched region on the source image
        Core.rectangle(source, matchLoc,
                new Point(matchLoc.x + template.width(), matchLoc.y + template.height()),
                new Scalar(0, 255, 0));
        // Write the annotated result to disk
        Highgui.imwrite("/Users/niwei/Downloads/matching-result.jpeg", source);
    }
}

The drawback: if the template image has been rotated or scaled, sliding-window template matching no longer works. What then?

SURF matching based on feature points

To match an original image against a template that has been rotated or scaled, we turn to feature-based matching algorithms from computer vision. The idea is to find "stable points" in the image that do not disappear under changes in viewpoint, lighting or noise, such as corners, edges, bright spots in dark regions and dark spots in bright regions. If two images contain the same scene, these stable points appear at the same scene locations in both images, and matching them up matches the images.

OpenCV provides many feature-point algorithms, including FAST, SIFT, SURF and ORB. I won't go into their differences here; let's take SURF as an example and see how to use it with OpenCV.

package com.zhiqu.image.recognition;

import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.features2d.*;
import org.opencv.highgui.Highgui;

import java.util.LinkedList;
import java.util.List;

/**
 * Created by niwei on 2017/4/28.
 */
public class ImageRecognition {

    private float nndrRatio = 0.7f;
    private int matchesPointCount = 0;

    public float getNndrRatio() {
        return nndrRatio;
    }

    public void setNndrRatio(float nndrRatio) {
        this.nndrRatio = nndrRatio;
    }

    public int getMatchesPointCount() {
        return matchesPointCount;
    }

    public void setMatchesPointCount(int matchesPointCount) {
        this.matchesPointCount = matchesPointCount;
    }

    public void matchImage(Mat templateImage, Mat originalImage) {
        MatOfKeyPoint templateKeyPoints = new MatOfKeyPoint();
        // Create a SURF feature detector and detect the key points of the template image
        FeatureDetector featureDetector = FeatureDetector.create(FeatureDetector.SURF);
        featureDetector.detect(templateImage, templateKeyPoints);
        MatOfKeyPoint templateDescriptors = new MatOfKeyPoint();
        DescriptorExtractor descriptorExtractor = DescriptorExtractor.create(DescriptorExtractor.SURF);
        System.out.println("Extracting the feature points of the template image");
        descriptorExtractor.compute(templateImage, templateKeyPoints, templateDescriptors);

        // Draw the extracted key points onto an output image
        Mat outputImage = new Mat(templateImage.rows(), templateImage.cols(), Highgui.CV_LOAD_IMAGE_COLOR);
        System.out.println("Drawing the extracted feature points on the image");
        Features2d.drawKeypoints(templateImage, templateKeyPoints, outputImage, new Scalar(255, 0, 0), 0);

        // Detect and describe the key points of the original image
        MatOfKeyPoint originalKeyPoints = new MatOfKeyPoint();
        MatOfKeyPoint originalDescriptors = new MatOfKeyPoint();
        featureDetector.detect(originalImage, originalKeyPoints);
        System.out.println("Extracting the feature points of the original image");
        descriptorExtractor.compute(originalImage, originalKeyPoints, originalDescriptors);

        List<MatOfDMatch> matches = new LinkedList();
        DescriptorMatcher descriptorMatcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED);
        System.out.println("Looking for the best match"); /** * the function of knnMatch method is to find the best matchin the given feature description set. * Use knn-matching algorithm, set K=2, then get two nearest descriptors for each match, and then calculate the ratio between the nearest distance and the sub-closest distance. When the ratio is greater than the set value, As the final match. */ descriptorMatcher.knnMatch(templateDescriptors, originalDescriptors, matches, 2); System.out.println("Compute a match"); LinkedList<DMatch> goodMatchesList = new LinkedList(); Matches. ForEach (match -> {DMatch[] dmatcharray = mate.toarray (); DMatch m1 = dmatcharray[0]; DMatch m2 = dmatcharray[1];if(m1.distance <= m2.distance * nndrRatio) { goodMatchesList.addLast(m1); }}); matchesPointCount = goodMatchesList.size(); // If the matched feature points are greater than or equal to 4, the template image is considered to be in the original image, and the value can be adjusted by itselfif (matchesPointCount >= 4) {
            System.out.println("Template image matches original image successfully!"); List<KeyPoint> templateKeyPointList = templateKeyPoints.toList(); List<KeyPoint> originalKeyPointList = originalKeyPoints.toList(); LinkedList<Point> objectPoints = new LinkedList(); LinkedList<Point> scenePoints = new LinkedList(); goodMatchesList.forEach(goodMatch -> { objectPoints.addLast(templateKeyPointList.get(goodMatch.queryIdx).pt); scenePoints.addLast(originalKeyPointList.get(goodMatch.trainIdx).pt); }); MatOfPoint2f objMatOfPoint2f = new MatOfPoint2f(); objMatOfPoint2f.fromList(objectPoints); MatOfPoint2f scnMatOfPoint2f = new MatOfPoint2f(); scnMatOfPoint2f.fromList(scenePoints); Mat Homography = Calib3d. FindHomography (objMatOfPoint2f, scnMatOfPoint2f, Calib3d.RANSAC, 3); /** * Perspective Transformation is a process of projecting an image to a new Viewing Plane, also known as Projective Mapping. */ Mat templateCorners = new Mat(4, 1, CvType.CV_32FC2); Mat templateTransformResult = new Mat(4, 1, CvType.CV_32FC2); templateCorners.put(0, 0, new double[]{0, 0}); templateCorners.put(1, 0, new double[]{templateImage.cols(), 0}); templateCorners.put(2, 0, new double[]{templateImage.cols(), templateImage.rows()}); templateCorners.put(3, 0, new double[]{0, templateImage.rows()}); / / use perspectiveTransform will figure perspective to correct image by using the standard template picture Core. The perspectiveTransform (templateCorners templateTransformResult, homography); / / rectangular four vertices double [] pointA = templateTransformResult. Get (0, 0); double[] pointB = templateTransformResult.get(1, 0); double[] pointC = templateTransformResult.get(2, 0); double[] pointD = templateTransformResult.get(3, 0); Int rowStart = (int) pointA[1]; int rowEnd = (int) pointC[1]; int colStart = (int) pointD[0]; int colEnd = (int) pointB[0]; Mat subMat = originalImage.submat(rowStart, rowEnd, colStart, colEnd); Highgui.imwrite("/ Users/niwei/Desktop/match figure of opencv/artwork. JPG", subMat); Core.line(originalImage, new Point(pointA), new Point(pointB), new Scalar(0, 255, 0), 4); // A->B core-line (originalImage, new Point(pointB), new Point(pointC), new Scalar(0, 255, 0), 4); // Right B->C core-line (originalImage, new Point(pointC), New Point(pointD), new Scalar(0, 255, 0), 4); // C->D core-line (originalImage, new Point(pointD), new Point(pointA), new Scalar(0, 255, 0), 4); Matches = new MatOfDMatch(); goodMatches.fromList(goodMatchesList); Mat matchOutput = new Mat(originalImage.rows() * 2, originalImage.cols() * 2, Highgui.CV_LOAD_IMAGE_COLOR); Features2d.drawMatches(templateImage, templateKeyPoints, originalImage, originalKeyPoints, goodMatches, matchOutput, new Scalar(0, 255, 0), new Scalar(255, 0, 0), new MatOfByte(), 2); Highgui.imwrite("/ Users/niwei/Desktop/opencv/feature point matching process. JPG", matchOutput);
            Highgui.imwrite("/ Users/niwei/Desktop/opencv/template in figure in the original position. JPG", originalImage);
        } else {
            System.out.println("Template drawing is not in the original drawing!");
        }

        Highgui.imwrite("/ Users/niwei/Desktop/opencv/template feature points. JPG", outputImage);
    }

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        String templateFilePath = "/Users/niwei/Desktop/opencv/templates.jpeg";
        String originalFilePath = "/Users/niwei/Desktop/opencv/artwork.jpeg";
        // Read the template and original images in color
        Mat templateImage = Highgui.imread(templateFilePath, Highgui.CV_LOAD_IMAGE_COLOR);
        Mat originalImage = Highgui.imread(originalFilePath, Highgui.CV_LOAD_IMAGE_COLOR);

        ImageRecognition imageRecognition = new ImageRecognition();
        imageRecognition.matchImage(templateImage, originalImage);
        System.out.println("Number of matched feature points: " + imageRecognition.getMatchesPointCount());
    }
}

The code is explained in the comments, and the execution result is as follows: