This article is for beginners who want to start their journey in computer vision or AI by understanding what face detection is, its types, and how it works.
In the past few years, face recognition has received wide attention and is regarded as one of the most promising applications in the field of image analysis. Face detection accounts for a large part of the face recognition pipeline: it lets computing resources focus on the part of the image that actually contains a face. Face detection in images is difficult because of variability in pose, expression, position and orientation, skin color, the presence of glasses or facial hair, and differences in camera gain, lighting conditions, and image resolution.
Object detection is a computer technology, related to image processing and computer vision, that deals with detecting instances of objects such as human faces, buildings, trees, and cars. The main purpose of a face detection algorithm is to determine whether there are any faces in an image.
In recent years, a great deal of research has gone into making face recognition and face detection more advanced and accurate, but it was the real-time face detector introduced by Viola and Jones that revolutionized the field: it can detect faces in real time and with high accuracy.
Face detection is the first and essential step of face recognition: it locates faces in images. It is a branch of object detection and is used in many fields, such as security, biometrics, law enforcement, entertainment, and personal safety.
It is used to detect faces in real time for monitoring and tracking people or objects, and it is widely built into cameras, from mobile phone cameras to DSLRs, to identify faces in the frame. Facebook also uses face detection algorithms to detect and recognize faces in uploaded images.
Face detection methods:
Yan, Kriegman, and Ahuja proposed a classification of face detection methods into four categories; a given face detection algorithm can belong to two or more of them. The categories are as follows:
1. Knowledge-based:
Knowledge-based approaches rely on sets of rules derived from human knowledge of what a face looks like, for example: a face must have a nose, eyes, and a mouth at certain distances and positions from each other. The big problem with these approaches is the difficulty of building a proper rule set: if the rules are too general or too specific, the detector produces many false positives or misses. This method alone is not enough to find faces reliably across many images.
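As a toy illustration, a knowledge-based detector can be sketched as a handful of geometric rules over candidate landmark positions. The function name `check_face_rules`, the coordinates, and the pixel threshold below are all hypothetical, not from any library:

```python
def check_face_rules(eyes, nose, mouth):
    """Toy knowledge-based check: given (x, y) landmark coordinates,
    test a few hand-written geometric rules for a plausible face."""
    left_eye, right_eye = eyes
    # Rule 1: the two eyes lie roughly on a horizontal line.
    if abs(left_eye[1] - right_eye[1]) > 10:
        return False
    # Rule 2: the nose sits below the eyes, and the mouth below the nose.
    eye_line = (left_eye[1] + right_eye[1]) / 2
    if not (eye_line < nose[1] < mouth[1]):
        return False
    # Rule 3: the nose lies horizontally between the two eyes.
    if not (min(left_eye[0], right_eye[0]) < nose[0] < max(left_eye[0], right_eye[0])):
        return False
    return True

# A plausible layout passes; a scrambled one violates rule 1.
print(check_face_rules(eyes=[(30, 40), (70, 42)], nose=(50, 60), mouth=(50, 80)))  # True
print(check_face_rules(eyes=[(30, 40), (70, 90)], nose=(50, 60), mouth=(50, 80)))  # False
```

Even this tiny example shows the weakness described above: the threshold of 10 pixels is arbitrary, and real faces at different scales or poses would need many more, carefully tuned rules.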
2. Feature-based:
The feature-based approach locates faces by extracting their structural features. A classifier is first trained and then used to distinguish between facial and non-facial regions. The idea is to overcome the limits of our instinctive knowledge of faces. Broken down into several steps and evaluated on photos containing many faces, this method has a reported success rate of 94%.
3. Template matching:
The template matching method uses predefined or parameterized face templates to locate or detect faces through the correlation between the templates and the input image. For example, a human face can be divided into eyes, facial outline, nose, and mouth. A face model can also be built from edges alone using an edge detection method. This approach is simple to implement, but insufficient for face detection on its own; deformable templates have been proposed to deal with these problems.
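The correlation step can be sketched in plain NumPy as a normalized cross-correlation search, a minimal stand-in for OpenCV's `cv2.matchTemplate`. The function name and the toy pattern below are illustrative only:

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image` and return the (row, col) where the
    normalized cross-correlation score is highest."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            patch = patch - patch.mean()
            denom = np.sqrt((patch ** 2).sum() * (t ** 2).sum())
            score = (patch * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Hide a small patterned "face" patch in a blank image and find it again.
pattern = np.array([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0]], dtype=float)
img = np.zeros((20, 20))
img[5:9, 12:16] = pattern
print(match_template(img, pattern))  # (5, 12)
```

The brute-force double loop illustrates why plain template matching is expensive and scale-sensitive, which is exactly the limitation the deformable-template work tries to address.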
4. Appearance-based:
The appearance-based approach relies on a set of representative training face images to learn a face model, and it generally outperforms the other approaches. Appearance-based methods rely on statistical analysis and machine learning techniques to find the relevant characteristics of face images; the learned features are also used for feature extraction in face recognition.
The appearance-based approach is further divided into the following sub-methods for face detection.
4.1. Eigenface-based:
The eigenface algorithm represents faces efficiently using principal component analysis (PCA): face images are projected onto a low-dimensional "face space" spanned by the principal components of the training set.
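A minimal eigenface sketch in NumPy, using random vectors in place of real face images; the data, the 8x8 image size, and the choice of k = 5 components are illustrative assumptions:

```python
import numpy as np

# Each row of X is one flattened "face image" (here: random stand-in data).
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 64))          # 10 hypothetical 8x8 images

# Center the data on the mean face.
mean_face = X.mean(axis=0)
centered = X - mean_face

# Principal components via SVD; the rows of Vt are the "eigenfaces".
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 5
eigenfaces = Vt[:k]

# Project one face into the k-dimensional subspace and reconstruct it.
weights = centered[0] @ eigenfaces.T
reconstruction = mean_face + weights @ eigenfaces
print(weights.shape)  # (5,)
```

The key point is the compression: each face is now described by k = 5 numbers instead of 64 pixels, and detection or recognition operates in that small subspace.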
4.2. Distribution-based:
Algorithms such as PCA and Fisher's linear discriminant define a subspace that represents facial patterns; a classifier is then trained to distinguish instances of the target pattern class from background image patterns.
4.3. Neural network:
Neural networks have successfully solved many detection problems, such as object detection, face detection, emotion detection, and face recognition.
4.4. Support vector machines:
Support vector machines are linear classifiers that maximize the margin between the decision hyperplane and the examples in the training set. Osuna et al. were the first to apply this classifier to face detection.
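As a rough sketch of the margin-maximization idea, here is a tiny linear SVM trained by sub-gradient descent on the hinge loss. The toy 2-D points stand in for face/non-face feature vectors; the function name and hyperparameters are illustrative, not from Osuna et al.'s work:

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Tiny linear SVM trained by sub-gradient descent on the hinge loss.
    Labels y must be in {-1, +1}; returns weights w and bias b."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # Point inside the margin: hinge-loss step plus regularization.
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                # Point correctly classified: only shrink w (maximize margin).
                w -= lr * lam * w
    return w, b

# Linearly separable toy data standing in for face vs. non-face features.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
print(all(np.sign(X @ w + b) == y))  # True
```

The regularization term `lam * w` is what pushes the solution toward the maximum-margin hyperplane rather than just any separating line.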
4.5. Sparse Network of Winnows:
This method defines a sparse network of two linear units, or target nodes: one represents face patterns and the other represents non-face patterns. It takes less time and is more efficient than comparable approaches.
4.6. Naive Bayes Classifier:
The Naive Bayes classifier computes the probability of a face appearing in an image by counting the frequency of a series of local patterns over the training images. It captures the joint statistics of local appearance and the positions of those patterns on the face.
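A minimal sketch of this counting idea, assuming binary local patterns and two classes; all function names and the toy data below are illustrative:

```python
import numpy as np

def train_nb(patterns, labels):
    """Estimate per-position frequencies of each binary pattern value
    for the face (1) and non-face (0) classes, with Laplace smoothing."""
    patterns, labels = np.asarray(patterns), np.asarray(labels)
    probs, priors = {}, {}
    for c in (0, 1):
        subset = patterns[labels == c]
        probs[c] = (subset.sum(axis=0) + 1) / (len(subset) + 2)
        priors[c] = (labels == c).mean()
    return probs, priors

def predict(x, probs, priors):
    """Pick the class with the higher log-posterior under independence."""
    x = np.asarray(x)
    scores = {}
    for c in (0, 1):
        p = probs[c]
        scores[c] = np.log(priors[c]) + np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    return max(scores, key=scores.get)

# Faces (label 1) tend to show pattern [1,1,0]; non-faces show [0,0,1].
X = [[1, 1, 0], [1, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 1], [0, 0, 1]]
y = [1, 1, 1, 0, 0, 0]
probs, priors = train_nb(X, y)
print(predict([1, 1, 0], probs, priors))  # 1
```

The "naive" part is the independence assumption inside `predict`: each pattern position contributes its log-probability separately, which is what makes the frequency counting tractable.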
4.7. Hidden Markov Model:
The states of the model are facial features, often described as strips of pixels. HMMs are usually used in combination with other methods to build a detection algorithm.
4.8. Information-theoretic methods:
A Markov random field (MRF) can model facial patterns and their correlated features. The method maximizes the Kullback-Leibler divergence between the face and non-face classes, which makes it usable for face detection.
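The Kullback-Leibler divergence itself is easy to sketch; the histograms below are made-up stand-ins for face and background feature distributions:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    probability distributions (terms with p = 0 contribute nothing)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

face_hist = [0.7, 0.2, 0.1]   # hypothetical feature histogram for faces
bg_hist = [0.3, 0.3, 0.4]     # hypothetical histogram for background
print(kl_divergence(face_hist, bg_hist) > 0)   # True: the classes differ
print(kl_divergence(face_hist, face_hist))     # 0.0
```

A larger divergence means the face and non-face feature distributions are easier to tell apart, which is exactly what the training procedure maximizes.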
4.9. Inductive Learning:
Inductive learning algorithms such as Quinlan's C4.5 or Mitchell's Find-S have also been used to detect faces.
Working principle of face detection:
There are many techniques for detecting human faces, and they can recognize faces quite accurately. Most of them follow roughly the same pipeline regardless of the toolkit used (OpenCV, neural network frameworks, MATLAB, etc.). Face detection works by locating one or more faces in an image. Here we use OpenCV, and the steps of the face detection operation are as follows.
First, you import an image by providing its location. The image is then converted from RGB to grayscale, because faces are easier to detect in grayscale.
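The grayscale conversion can be sketched in NumPy as a weighted sum of the three color channels; the weights below are the standard ITU-R BT.601 luminance coefficients, the same weighting OpenCV's `cv2.cvtColor` uses:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to a single-channel grayscale image
    using the BT.601 luminance weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return np.rint(rgb @ weights).astype(np.uint8)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [255, 255, 255]   # one white pixel
gray = to_grayscale(img)
print(gray.shape)   # (2, 2)
print(gray[0, 0])   # 255
```

Working on a single channel cuts the data to a third and removes color variation, which is why detectors run on the grayscale image.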
After that, image processing is used, in which the image is resized, cropped, blurred, and sharpened as needed. The next step is image segmentation, which is used for contour detection or segmentation of multiple objects in a single image so that the classifier can quickly detect objects and faces in the image.
The next step is to use the Haar-like feature algorithm proposed by Viola and Jones for face detection. The algorithm finds the position of faces in a frame or image by exploiting features that all human faces share, such as the eye region being darker than its neighboring regions and the nose region being brighter than the eye region.
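That dark-above-bright contrast is exactly what a two-rectangle Haar-like feature measures, and the integral image lets it be computed in four array lookups regardless of window size. A minimal sketch (the function names are illustrative):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of pixels in the h x w rectangle at (r, c), in four lookups."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def two_rect_haar(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: bottom half minus top half of an
    h x w window - large when the upper region (e.g. the eyes) is darker."""
    top = rect_sum(ii, r, c, h // 2, w)
    bottom = rect_sum(ii, r + h // 2, c, h // 2, w)
    return bottom - top

# Dark band on top, bright band below: the feature responds strongly.
img = np.vstack([np.zeros((2, 4)), np.ones((2, 4))])
ii = integral_image(img)
print(two_rect_haar(ii, 0, 0, 4, 4))  # 8.0
```

The constant-time rectangle sums are what make it feasible for the Viola-Jones cascade to evaluate thousands of such features per window in real time.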
The Haar-like algorithm is also used for feature selection and feature extraction of objects in the image: the eyes, nose, and mouth are located through edge detection, line detection, and center detection. It selects the essential features in an image and extracts them for face detection.
The next step produces the coordinates x, y, w, and h, which define a rectangular box in the image showing the position of the face, i.e. the region of interest. A rectangle can then be drawn over each region of interest where a face was detected. Many other detection techniques are often used alongside, such as smile detection, eye detection, and blink detection.
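The (x, y, w, h) convention can be sketched in plain NumPy: slice out the region of interest and paint the border of the rectangle. `draw_box` is a made-up stand-in for `cv2.rectangle` on a single-channel image:

```python
import numpy as np

def draw_box(img, x, y, w, h, value=255):
    """Mark the (x, y, w, h) face rectangle by painting its border pixels."""
    img[y, x:x + w] = value          # top edge
    img[y + h - 1, x:x + w] = value  # bottom edge
    img[y:y + h, x] = value          # left edge
    img[y:y + h, x + w - 1] = value  # right edge
    return img

frame = np.zeros((10, 10), dtype=np.uint8)
draw_box(frame, x=2, y=3, w=5, h=4)
roi = frame[3:3 + 4, 2:2 + 5]        # the region of interest itself
print(roi[0].tolist())  # [255, 255, 255, 255, 255]
```

Note the slicing order: NumPy indexes rows first, so the ROI is `img[y:y+h, x:x+w]`, the same expression the OpenCV script below uses to crop the detected face.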
How to run a real-time face detector (webcam):
Requirements for running the code: Python, OpenCV, a webcam, and NumPy.
```python
# Import libraries
import cv2
import numpy as np

# Load the Haar cascade classifiers for face and eye detection
face_classifier = cv2.CascadeClassifier('Haarcascades/haarcascade_frontalface_default.xml')
eye_classifier = cv2.CascadeClassifier('Haarcascades/haarcascade_eye.xml')

def face_detector(img, size=0.5):
    # Convert the image to grayscale
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return img

    # Use the coordinates to mark the face and eye locations in the ROI
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]
        eyes = eye_classifier.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 0, 255), 2)

    roi_color = cv2.flip(roi_color, 1)
    return roi_color

# Webcam setup for face detection
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    cv2.imshow('Our Face Extractor', face_detector(frame))
    if cv2.waitKey(1) == 13:  # 13 is the Enter key
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
```
In recent months, we have been updating a lot of research and development work related to pedestrian analytics. For the TSINGSEE Qingxi video platforms, we will gradually integrate these smart analytics functions into a new smart analytics platform in the near future. Please follow us for updates.