
Welcome to my GitHub

All of Xinchen's original articles (with companion source code) are categorized and summarized here: github.com/zq2599/blog…

About Face Recognition

  • This article is the first of the "JavaCV face recognition trilogy". In "JavaCV camera in action 8: face detection" we detected faces and marked them with boxes, but merely boxing a face is of limited use; ideally we would also identify whose face it is, which enables things like check-in and alarms. That is our next challenge: face recognition
  • Face recognition involves two steps, training and recognition, briefly explained below
  • First, what is training? As shown below, six photos of two people are used for training, two categories in total; when training finishes, the model file facerecognizer.xml is produced:

  • After training succeeds, we give a new photo to the model for recognition; the result is one of the categories from training, so recognition determines the identity in the new photo:

  • Training and recognition are described in more detail in the flow charts below:
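To make the two-step idea concrete, here is a toy sketch in plain Java: "training" stores labeled feature vectors, and "recognition" assigns a new vector the label of its nearest stored neighbor. This is only a conceptual illustration; OpenCV's real recognizers (such as LBPH or Eigenfaces) work differently, and all names here are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

/** toy nearest-neighbor "recognizer": illustrates train-then-predict only */
public class ToyRecognizer {
    private final List<double[]> samples = new ArrayList<>();
    private final List<Integer> labels = new ArrayList<>();

    /** "training": remember each labeled feature vector */
    public void train(double[] features, int label) {
        samples.add(features);
        labels.add(label);
    }

    /** "recognition": return the label of the closest training sample */
    public int predict(double[] features) {
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < samples.size(); i++) {
            double dist = 0;
            double[] s = samples.get(i);
            for (int j = 0; j < s.length; j++) {
                double d = s[j] - features[j];
                dist += d * d;
            }
            if (dist < bestDist) {
                bestDist = dist;
                best = labels.get(i);
            }
        }
        return best;
    }

    public static void main(String[] args) {
        ToyRecognizer r = new ToyRecognizer();
        // category 1 and category 2, two "photos" each (tiny fake features)
        r.train(new double[]{0.1, 0.2}, 1);
        r.train(new double[]{0.2, 0.1}, 1);
        r.train(new double[]{0.9, 0.8}, 2);
        r.train(new double[]{0.8, 0.9}, 2);
        // a new "photo" closest to category 2
        System.out.println(r.predict(new double[]{0.85, 0.85})); // prints 2
    }
}
```

The point is only the shape of the workflow: many labeled samples go in once, then each new sample comes out with a predicted category.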

JavaCV Face Recognition Trilogy

The "JavaCV face recognition trilogy" consists of three articles:

  1. This article: use JavaCV to detect every face in the camera feed and save each one as an image, a quick way to collect a large number of face photos for training
  2. "Training": how to train a model with the classified photos
  3. "Recognition and preview": use the trained model to recognize the face in each video frame and label the result on the image for preview
  • The whole trilogy belongs to the "JavaCV camera in action" series, as parts 9, 10, and 11

Overview of this article

  • This post prepares the training photos
  • You may wonder: why not just find some photos myself, from the internet, from an album, or by taking pictures? That works too; any photos of the faces you want to recognize will do. But here is another approach: detect faces with the camera, save each face-sized crop to the hard disk, and train with those images. A model trained this way tends to do better when recognizing new photos
  • The specific approach is as follows:
  1. Write a program that runs face detection on the camera feed; every detected face is saved as an image. Note that what is saved is not the full video frame but the rectangular region of each detected face, and it is saved as a grayscale image, not color (training and recognition only need grayscale)
  2. Then find a private place to run the program, with one person alone in front of the camera: ready, start... all kinds of lighting, angles, and expressions, each saved as an image
  • Models trained with these images perform better in the end because the images cover a wide range of lighting, angles, and expressions
  • Now let's write this program
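On the grayscale point in step 1: the cvtColor(…, CV_BGR2GRAY) call used later in this article combines the three color channels with OpenCV's standard luma weights. A minimal pure-Java version of that formula (an illustration of the math, not the OpenCV code itself):

```java
/** grayscale conversion using OpenCV's BGR2GRAY luma weights */
public class GrayDemo {
    /** Y = 0.299 R + 0.587 G + 0.114 B, rounded into the 0..255 range */
    public static int toGray(int r, int g, int b) {
        return (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    }

    public static void main(String[] args) {
        System.out.println(toGray(255, 255, 255)); // pure white -> 255
        System.out.println(toGray(255, 0, 0));     // pure red   -> 76
    }
}
```

One byte per pixel instead of three is why grayscale is enough for detection and training: the classifier looks at brightness patterns, not color.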

Download the source code

  • The full source code for JavaCV Camera In Action is available on GitHub at github.com/zq2599/blog…
| Name | Link | Note |
| --- | --- | --- |
| Project home page | github.com/zq2599/blog… | the project's home page on GitHub |
| Git repository address (HTTPS) | github.com/zq2599/blog… | address of the project's source repository, HTTPS protocol |
| Git repository address (SSH) | git@github.com:zq2599/blog_demos.git | address of the project's source repository, SSH protocol |
  • The Git project contains multiple folders; the source code for this article is in the javacv-tutorials folder, in the red box below:

  • javacv-tutorials contains many subprojects; the code for the "JavaCV camera in action" series is in the simple-grab-push project:

Code: Detection service

  • First define the interface DetectService.java with three methods: init, convert, and releaseOutputResource. init initializes the detection service, convert processes a single frame, and releaseOutputResource is called at the end to release resources. There is also a static method, buildGrayImage, which simply creates the Mat object for the corresponding grayscale image:
import org.bytedeco.javacv.Frame;
import org.bytedeco.opencv.opencv_core.Mat;

import static org.bytedeco.opencv.global.opencv_core.CV_8UC1;

public interface DetectService {

    /**
     * builds a Mat of the same size as the one passed in, to hold the
     * grayscale image used by later detections
     * @param src Mat object of the original image
     * @return Mat object for the grayscale image of the same size
     */
    static Mat buildGrayImage(Mat src) {
        return new Mat(src.rows(), src.cols(), CV_8UC1);
    }

    /**
     * initialization, such as downloading the model file
     * @throws Exception
     */
    void init() throws Exception;

    /**
     * takes the original frame, runs detection, and draws the bounding box
     * @param frame
     * @return
     */
    Frame convert(Frame frame);

    /**
     * releases resources
     */
    void releaseOutputResource();
}
  • Next is DetectAndSaveService.java, the implementation class of DetectService. The complete code follows; several caveats are discussed afterwards:
@Slf4j
public class DetectAndSaveService implements DetectService {

    /** Mat object of the original image for each frame */
    private Mat grabbedImage = null;

    /** grayscale Mat corresponding to the original image */
    private Mat grayImage = null;

    /** classifier */
    private CascadeClassifier classifier;

    /** converter */
    private OpenCVFrameConverter.ToMat converter = new OpenCVFrameConverter.ToMat();

    /** download address of the model file */
    private String modelFileUrl;

    /** directory and filename prefix for the saved images */
    private String basePath;

    /** total number of saved images */
    private final AtomicInteger num = new AtomicInteger();

    /** size of the training images */
    Size size = new Size(Constants.RESIZE_WIDTH, Constants.RESIZE_HEIGHT);

    public DetectAndSaveService(String modelFileUrl, String basePath) {
        this.modelFileUrl = modelFileUrl;

        // images are saved on the hard disk; the fixed prefix of each file
        // name is the current date and time, down to the second
        this.basePath = basePath
                      + new SimpleDateFormat("yyyyMMddHHmmss").format(new Date())
                      + "-";
    }

    /**
     * initialization: download the model file and instantiate the classifier
     * @throws Exception
     */
    @Override
    public void init() throws Exception {
        // download the model file
        URL url = new URL(modelFileUrl);

        File file = Loader.cacheResource(url);

        // full local path of the downloaded model file
        String classifierName = file.getAbsolutePath();

        // instantiate the classifier from the model file
        classifier = new CascadeClassifier(classifierName);

        if (classifier == null) {
            log.error("Error loading classifier file [{}]", classifierName);
            System.exit(1);
        }
    }

    @Override
    public Frame convert(Frame frame) {
        // convert the frame to a Mat
        grabbedImage = converter.convert(frame);

        // grayscale Mat, used for detection
        if (null == grayImage) {
            grayImage = DetectService.buildGrayImage(grabbedImage);
        }

        String filePath = basePath + num.incrementAndGet();

        // detect faces, process the result, and return the frame
        // to be shown in the preview window
        return detectAndSave(classifier, converter, frame, grabbedImage, grayImage, filePath, size);
    }

    /**
     * release face detection resources before the program ends
     */
    @Override
    public void releaseOutputResource() {
        if (null != grabbedImage) {
            grabbedImage.release();
        }
        if (null != grayImage) {
            grayImage.release();
        }
        if (null != classifier) {
            classifier.close();
        }
    }

    static Frame detectAndSave(CascadeClassifier classifier,
                               OpenCVFrameConverter.ToMat converter,
                               Frame rawFrame,
                               Mat grabbedImage,
                               Mat grayImage,
                               String basePath,
                               Size size) {

        // convert the current image to grayscale
        cvtColor(grabbedImage, grayImage, CV_BGR2GRAY);

        // container for the detection results
        RectVector objects = new RectVector();

        // run the detection
        classifier.detectMultiScale(grayImage, objects);

        // total number of detection results
        long total = objects.size();

        // return early if nothing was detected
        if (total < 1) {
            return rawFrame;
        }

        // if more than one face was detected, something is clearly wrong
        if (total > 1) {
            return rawFrame;
        }

        Mat faceMat;

        // there is exactly one result: build a rectangle from it and draw
        // it on the original image (the checks above guarantee one face)
        Rect r = objects.get(0);

        // take the small rectangular Mat out of the full grayscale image
        faceMat = new Mat(grayImage, r);

        // the training images have a fixed size, so a resize is required
        resize(faceMat, faceMat, size);

        // path for saving the image
        String imagePath = basePath + "." + Constants.IMG_TYPE;

        // save the image to the hard disk
        imwrite(imagePath, faceMat);

        // position of the face
        int x = r.x(), y = r.y(), w = r.width(), h = r.height();

        // draw a rectangle around the face
        rectangle(grabbedImage, new Point(x, y), new Point(x + w, y + h), Scalar.RED, 1, CV_AA, 0);

        // release the detection result resources
        objects.close();

        // convert the annotated image back to a frame and return it
        return converter.convert(grabbedImage);
    }
}
  • There are a few things to note about this code:
  1. In detectAndSave, if more than one face is detected in the current image, the method returns early without processing it. The assumption is that only one person is in front of the camera while the program runs, so multiple detected faces mean the detection of the current frame is probably wrong (in practice false detections are common, e.g. a rectangular object detected as a face). Adjust this early-return logic to your own environment
  2. The imwrite method saves a Mat to the hard disk as an image
  3. Before saving to disk, resize is called to scale the image to 164x164, because that size is used for subsequent training and recognition
  • The core code is now written; next we write a little more code to put DetectAndSaveService to use
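The save-path logic described above (a date-time prefix built in the constructor, then an incrementing counter per frame, then the extension from Constants.IMG_TYPE) can be sketched in plain Java; using "jpg" as the extension is an assumption for this example:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.atomic.AtomicInteger;

/** sketch of how DetectAndSaveService builds each image path */
public class SavePathDemo {
    private final String basePath;
    private final AtomicInteger num = new AtomicInteger();

    public SavePathDemo(String dir, Date startTime) {
        // same idea as the constructor above: directory + timestamp + "-"
        this.basePath = dir
                      + new SimpleDateFormat("yyyyMMddHHmmss").format(startTime)
                      + "-";
    }

    /** one path per frame: prefix + counter + "." + extension */
    public String nextPath(String imgType) {
        return basePath + num.incrementAndGet() + "." + imgType;
    }

    public static void main(String[] args) {
        SavePathDemo demo = new SavePathDemo("/tmp/man/", new Date());
        // e.g. /tmp/man/20211218093000-1.jpg, /tmp/man/20211218093000-2.jpg
        System.out.println(demo.nextPath("jpg"));
        System.out.println(demo.nextPath("jpg"));
    }
}
```

Because the timestamp is fixed at construction time and only the counter changes, one run of the program produces one easily sorted batch of files.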

Code: Run the framework

  • "JavaCV camera in action 1: the basics" created the simple-grab-push project and prepared the parent class AbstractCameraApplication, so we keep using that project and implement the abstract methods in a subclass
  • Before coding, review the basic structure of the parent class, shown below: the bold items are methods defined by the parent class, and the red blocks are the abstract methods a subclass must implement. Our goal is a local preview window, so we implement the three methods in red:

  • Create the new file PreviewCameraWithDetectAndSave.java, a subclass of AbstractCameraApplication; its code is very simple and is described below in order
  • First define a member variable of type CanvasFrame named previewCanvas; this is the local window that displays the video frames:
protected CanvasFrame previewCanvas;
  • Next, a member variable of type DetectService, so that different detection implementations can be used:
    /** * detection tool interface */
    private DetectService detectService;
  • The constructor of PreviewCameraWithDetectAndSave accepts a DetectService instance:
    /**
     * different detection tools can be passed in through the constructor
     * @param detectService
     */
    public PreviewCameraWithDetectAndSave(DetectService detectService) {
        this.detectService = detectService;
    }
  • The initialization operation instantiates previewCanvas and sets its parameters:
    @Override
    protected void initOutput() throws Exception {
        previewCanvas = new CanvasFrame("Camera preview, face detection and save to hard drive.", CanvasFrame.getDefaultGamma() / grabber.getGamma());
        previewCanvas.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        previewCanvas.setAlwaysOnTop(true);

        // initialize the detection service
        detectService.init();
    }
  • Next comes the output method, which defines what to do with each frame of video: detectService.convert is called to detect faces and save the images, and the result is displayed in the local window:
    @Override
    protected void output(Frame frame) {
        // The original frame is first handed to the detection service, which involves object detection, and then the detection result is marked on the original image.
        // Then convert to frame return
        Frame detectedFrame = detectService.convert(frame);
        // The frame displayed in the preview window is the frame marked with the detection result
        previewCanvas.showImage(detectedFrame);
    }
  • Finally, after the video-processing loop ends and before the program exits, close the local window and then release the detection service's resources:
    @Override
    protected void releaseOutputResource() {
        if (null != previewCanvas) {
            previewCanvas.dispose();
        }

        // the detection service also frees its resources
        detectService.releaseOutputResource();
    }
  • Because detection takes some time, the interval between two frames is made smaller than in a normal preview:
    @Override
    protected int getInterval() {
        return super.getInterval() / 8;
    }
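Assuming the parent class derives its polling interval from the frame rate (for example 1000 ms / 25 fps = 40 ms; this is an assumption, since AbstractCameraApplication's code is not shown here), the override above makes the loop poll roughly eight times as often, compensating for the time each frame spends in detection:

```java
/** illustrates the getInterval override: poll more often when frames take long to process */
public class IntervalDemo {
    /** assumed base interval derived from the frame rate, e.g. 1000 ms / 25 fps = 40 ms */
    public static int baseInterval(int fps) {
        return 1000 / fps;
    }

    /** the override divides the base interval by 8 */
    public static int detectionInterval(int fps) {
        return baseInterval(fps) / 8;
    }

    public static void main(String[] args) {
        System.out.println(baseInterval(25));      // prints 40
        System.out.println(detectionInterval(25)); // prints 5
    }
}
```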
  • The functionality is now complete; all that's left is the main method. When instantiating DetectAndSaveService, note the two parameters: the first is the download address of the face detection model, the second is the local directory where the face photos are saved. The argument 1000 passed to the action method means the preview runs for 1000 seconds:
    public static void main(String[] args) {
        String modelFileUrl = "https://raw.github.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_alt.xml";
        new PreviewCameraWithDetectAndSave(
                new DetectAndSaveService(
                        modelFileUrl, 
                        "E:\\temp\\202112\\18\\001\\man"))
                .action(1000);
    }
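One practical detail before running main: imwrite can only save a photo if the target directory already exists (when it cannot write, it typically returns false rather than throwing). A small pure-Java helper to prepare the directory first might look like this; the helper name is made up for the example:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/** hypothetical helper: make sure the save directory exists before grabbing */
public class PrepareDir {
    public static Path ensureDir(String dir) throws IOException {
        Path path = Paths.get(dir);
        // creates the whole chain of directories; a no-op if it already exists
        return Files.createDirectories(path);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(ensureDir(System.getProperty("java.io.tmpdir") + "/face-demo/man"));
    }
}
```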

Grab photos of the first person

  • Run the main method, then invite extra A on stage and have him stand alone in front of the camera: ready, start... all kinds of lighting, angles, and expressions
  • Once the preview window opens you can see the camera's view; the regions inside the red rectangles are what will be saved as images, so adjust the angle and expression accordingly

  • Stop once a sufficient number of images has been captured; I saved 259 images here, as shown below:

  • I suggest checking every saved photo with the naked eye and deleting any that are not faces. I found more than ten non-face images; for example, the one below treated only part of the face as a whole face, which is clearly wrong.

  • All of the above photos are saved under E:\temp\202112\18\001\man
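To get a count like the 259 above (or to re-count after the manual cleanup), the saved images can be tallied with a few lines of standard Java; "jpg" as the extension is an assumption for this example:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

/** counts the face images saved by DetectAndSaveService in one directory */
public class CountImages {
    public static long count(String dir, String imgType) throws IOException {
        try (Stream<Path> files = Files.list(java.nio.file.Paths.get(dir))) {
            return files.filter(p -> p.toString().endsWith("." + imgType)).count();
        }
    }

    public static void main(String[] args) throws IOException {
        // demo on a temporary directory; point it at your own save directory instead
        Path dir = Files.createTempDirectory("faces");
        Files.createFile(dir.resolve("20211218-1.jpg"));
        Files.createFile(dir.resolve("20211218-2.jpg"));
        System.out.println(count(dir.toString(), "jpg")); // prints 2
    }
}
```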

Grab photos of the second person

  • Change the save path in the main method to E:\temp\202112\18\001\woman, run the program again, and invite extra B to appear; like extra A before her, she stands alone in front of the camera: ready, start... all kinds of lighting, angles, and expressions
  • We now have a large number of face images of the second extra as well; remember to check each image with the naked eye and delete the inaccurate ones
  • With the help of the program above we have easily collected a large number of face photos of both extras: A's photos are saved in E:\temp\202112\18\001\man and B's in E:\temp\202112\18\001\woman:

  • The task of this part is complete; in the next article we will use these photos for training, in preparation for the final recognition.

Welcome to my Juejin page: programmer Xinchen

On the road of learning you are not alone; Xinchen's originals will keep you company all the way…