In the past few years, face recognition has received wide attention and is regarded as one of the most promising applications in the field of image analysis. Face detection accounts for a large part of the face recognition pipeline: it focuses computing resources on the part of the image that actually contains a face. Face detection in images is difficult because of variability in pose, expression, position and orientation, skin color, the presence of glasses or facial hair, differences in camera gain, lighting conditions and image resolution.

Object detection is a computer technology related to image processing and computer vision that deals with detecting instances of objects such as human faces, buildings, trees and cars in images. The main purpose of a face detection algorithm is to determine whether there are any faces in an image.

In recent years, a great deal of research has been devoted to making face detection and face recognition more advanced and accurate, but the field was revolutionized when Viola and Jones introduced their real-time face detector, which could detect faces in real time with high accuracy.

Face detection is the first and essential step of face recognition, which is used to detect faces in images. It is part of object detection and can be used in many fields, such as security, biometrics, law enforcement, entertainment, personal safety, etc.

It is used to detect faces in real time for monitoring and tracking people or objects. It is widely used in cameras to identify multiple faces in the frame, for example in mobile phone cameras and DSLRs. Facebook also uses face detection algorithms to detect and recognize faces in uploaded images.

This article is intended for beginners in the field of computer vision or AI who want to understand what face detection is, its types and how it works.

Face detection methods

Yang, Kriegman and Ahuja proposed a classification of face detection methods. These methods fall into four categories, and a face detection algorithm can belong to two or more of them. The categories are as follows:

1. Knowledge-based

Knowledge-based approaches rely on sets of rules that capture human knowledge of what makes a face: a face must have a nose, eyes and a mouth at certain distances and positions relative to each other. The big problem with these approaches is the difficulty of building a proper rule set; if the rules are too general or too detailed, there can be many false positives. This method alone is not enough to find faces reliably across many images.
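
As a purely illustrative sketch of what such rules might look like in code, the function below checks a few geometric constraints between hypothetical bounding boxes for two eyes and a mouth; the box format and thresholds are assumptions made for illustration, not taken from any published detector.

# Toy knowledge-based rule check (illustrative only). Each box is (x, y, w, h).
def plausible_face(left_eye, right_eye, mouth):
    lx, ly, lw, lh = left_eye
    rx, ry, rw, rh = right_eye
    mx, my, mw, mh = mouth
    # Rule 1: the two eyes should be at roughly the same height.
    if abs(ly - ry) > max(lh, rh):
        return False
    # Rule 2: the mouth should lie below both eyes.
    if my <= max(ly + lh, ry + rh):
        return False
    # Rule 3: the mouth center should lie horizontally between the eye centers.
    left_cx, right_cx = lx + lw / 2, rx + rw / 2
    mouth_cx = mx + mw / 2
    if not (min(left_cx, right_cx) < mouth_cx < max(left_cx, right_cx)):
        return False
    return True

# A layout that satisfies the rules:
print(plausible_face((30, 40, 20, 10), (80, 42, 20, 10), (55, 90, 30, 15)))  # True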

2. Feature-based

The feature-based approach locates a face by extracting its structural features. A classifier is first trained on these features and then used to distinguish between facial and non-facial regions. The idea is to overcome the limits of our instinctive knowledge of faces. The method is broken down into several steps, and even on photos containing many faces its authors reported a 94% success rate.

3. Template matching

The template matching method uses predefined or parameterized face templates to locate or detect faces through the correlation between the templates and the input image. For example, a human face can be divided into eyes, facial outline, nose and mouth. A face model can also be built from edges alone by using edge detection methods. This approach is simple to implement, but it is insufficient on its own for face detection; deformable templates have been proposed to deal with these problems.
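
For a rough idea of how correlation-based template matching can be done in practice, the sketch below slides a single fixed face template over an image using OpenCV's matchTemplate; the file names and the 0.7 threshold are placeholders, and a real system would use multiple scales or deformable templates as noted above.

# Minimal template-matching sketch with OpenCV (file names are placeholders).
import cv2

image = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)             # input image
template = cv2.imread('face_template.jpg', cv2.IMREAD_GRAYSCALE)  # face template
h, w = template.shape

# Slide the template over the image and compute a normalized correlation score.
scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)

# Treat the best-scoring location as a face if the correlation is high enough.
if max_val > 0.7:  # threshold chosen arbitrarily for illustration
    top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)
    cv2.rectangle(image, top_left, bottom_right, 255, 2)
    cv2.imwrite('match.jpg', image)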

4. Appearance-based

The appearance-based approach relies on a set of representative training face images to learn a face model. Appearance-based approaches generally perform better than the other approaches. In general, they rely on statistical analysis and machine learning techniques to find the relevant characteristics of face images, and the learned models are also used for feature extraction in face recognition.

The appearance-based approach is further divided into the following sub-methods for face detection:

  • Eigenface-based

The eigenface-based algorithm for face recognition represents faces efficiently using principal component analysis (PCA); each face is described by its projection onto a small set of eigenfaces. (A minimal PCA sketch is shown after this list.)

  • Distribution-based

Algorithms such as PCA and Fisher's linear discriminant can be used to define a subspace that represents facial patterns. A classifier is then trained to correctly identify instances of the target pattern class among background image patterns.

  • Neural networks

Neural networks have successfully solved many detection problems, such as object detection, face detection, emotion detection and face recognition.

  • Support vector machine

Support vector machines are linear classifiers that maximize the margin between the decision hyperplane and the examples in the training set. Such classifiers have also been applied to face detection. (A small classifier sketch is shown after this list.)

  • Sparse Network of Winnows (SNoW)

This method defines a sparse network of two linear units, or target nodes: one represents face patterns and the other represents non-face patterns. It requires less training time and is more efficient.

  • Naive Bayes Classifiers

The probability of a face appearing in an image is estimated from the frequency of occurrence of a series of patterns over the training images. The classifier captures the joint statistics of local appearance and position of faces.

  • Hidden Markov Model (HMM)

The states of the model correspond to facial features, which are often described as strips of pixels. HMMs are usually used together with other methods to build detection algorithms.

  • Information Theoretical Approach

Markov random fields (MRF) can be used to model facial patterns and their correlated features. The Markov process maximizes the discrimination between classes using the Kullback-Leibler divergence, so this method can also be applied to face detection.

  • Inductive Learning

This method has also been used to detect faces. Algorithms such as Quinlan's C4.5 or Mitchell's FIND-S have been used for this purpose.
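
Since the eigenface-based and distribution-based sub-methods above build on principal component analysis, here is a minimal eigenface sketch using NumPy. The randomly generated faces array is only a placeholder for a real set of aligned, flattened grayscale face images.

# Minimal eigenface (PCA) sketch with NumPy; the face data is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((100, 64 * 64))          # placeholder: 100 flattened 64x64 faces

mean_face = faces.mean(axis=0)              # the "average" face
centered = faces - mean_face

# Principal components (eigenfaces) via SVD of the centered data.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:20]                        # keep the top 20 components

# Project a new image into the eigenface subspace and reconstruct it;
# a small reconstruction error suggests the image looks face-like.
new_image = faces[0]
weights = eigenfaces @ (new_image - mean_face)
reconstruction = mean_face + weights @ eigenfaces
print(f"reconstruction error: {np.linalg.norm(new_image - reconstruction):.3f}")

And as a sketch of the maximum-margin (SVM) idea mentioned above, the snippet below trains scikit-learn's LinearSVC to separate face patches from non-face patches; the patch data is again a random placeholder, and a real detector would slide a window over the image and classify every patch.

# Minimal face/non-face SVM sketch with scikit-learn; data is placeholder only.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
face_patches = rng.random((200, 19 * 19)) + 0.2   # placeholder "face" patches
nonface_patches = rng.random((200, 19 * 19))      # placeholder "non-face" patches

X = np.vstack([face_patches, nonface_patches])
y = np.array([1] * 200 + [0] * 200)               # 1 = face, 0 = non-face

clf = LinearSVC(C=1.0, max_iter=10000)            # maximum-margin linear classifier
clf.fit(X, y)

# Classify a new patch (in practice, one patch per sliding-window position).
new_patch = rng.random((1, 19 * 19)) + 0.2
print("face" if clf.predict(new_patch)[0] == 1 else "non-face")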

How face detection works

There are many techniques for detecting human faces, and with them faces can be detected quite accurately. Most of them rely on much the same tooling, such as OpenCV, neural network libraries, Matlab, etc. Face detection works by locating one or more faces in an image. Here we use OpenCV for face detection; the steps of the face detection operation are shown below:

1) First, import the image by providing its location. The image is then converted from RGB to grayscale, because faces are easier to detect in grayscale. (A combined sketch of steps 1-3 is shown after this list.)

2) After that, image processing is applied: the image is resized, cropped, blurred or sharpened as needed.

3) The next step is image segmentation, which is used for contour detection or for segmenting multiple objects in a single image so that the classifier can quickly detect objects and faces in the picture.

4) The next step uses the Haar-like feature algorithm proposed by Viola and Jones for face detection. The algorithm is used to find the position of human faces in a frame or image. All human faces share some common properties, such as the eye regions being darker than the neighbouring pixels and the nose region being brighter than the eye regions.

Haar-like features are also used for feature selection and feature extraction of objects in the image; the eyes, nose and mouth are located through edge detection, line detection and center detection. The algorithm selects these basic features in an image and extracts them for face detection.

5) The final step is to obtain the X, Y, W and H coordinates, which define a rectangular box in the picture showing the position of the face, i.e. the region of interest in the image. A rectangle can then be drawn in the region of interest where a face was detected. Many other detection techniques can be used alongside this, such as smile detection, eye detection, blink detection, etc.
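
Steps 1-3 can be sketched with a few OpenCV calls as shown below (assuming OpenCV 4; the file name and parameter values are placeholders). Steps 4 and 5 are covered by the complete webcam example in the next section.

# Sketch of steps 1-3 with OpenCV (the file name is a placeholder).
import cv2

img = cv2.imread('people.jpg')                # step 1: load the image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # step 1: convert to grayscale

gray = cv2.resize(gray, (640, 480))           # step 2: resize as needed
gray = cv2.GaussianBlur(gray, (5, 5), 0)      # step 2: blur to reduce noise

# Step 3: simple segmentation via Otsu thresholding and contour detection.
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"found {len(contours)} candidate regions")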

How to run a face detector in real time (using a webcam)

Requirements for running the code: Python, OpenCV, a webcam, NumPy.

# Import libraries
import cv2
import numpy as np

# Import classifiers for face and eye detection
face_classifier = cv2.CascadeClassifier('Haarcascades/haarcascade_frontalface_default.xml')
eye_classifier = cv2.CascadeClassifier('Haarcascades/haarcascade_eye.xml')

def face_detector(img, size=0.5):
    # Convert image to grayscale
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return img
    # Use the detected coordinates to mark the face and eye locations in the ROI
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]
        eyes = eye_classifier.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 0, 255), 2)
    roi_color = cv2.flip(roi_color, 1)
    return roi_color

# Webcam setup for face detection
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    cv2.imshow('Our Face Extractor', face_detector(frame))
    if cv2.waitKey(1) == 13:  # 13 is the Enter key
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()

Face detection can be used for biometric recognition, usually as part of a face recognition system. Face detection can also be used for video surveillance, human-computer interaction and image database management. Some of the latest digital cameras use face detection to autofocus.

Face recognition technology is a key technology in the field of artificial intelligence and has wide application prospects in intelligent video surveillance scenarios.

Face recognition is also significant in security monitoring, such as security control in public places (subway stations, railway stations, streets, hotels, etc.) and the pursuit of suspects by the Ministry of Public Security. Using surveillance cameras in public places, it can capture faces and upload the results to the Ministry of Public Security's network for comparison with the faces of suspects, assisting the law enforcement work of public security personnel. TSINGSEE Video will also continue to develop intelligent business systems and platforms for multiple scenarios, focusing on AI detection and recognition technology. For example, EasyCVR is capable of AI face detection, face recognition, license plate recognition, voice intercom, PTZ control, audible and visual alarms, surveillance video analysis and data summarization. Many projects have already been implemented, such as pedestrian detection and passenger flow analysis in scenic spots.