Author | Andrew_qian
www.cnblogs.com/qsyll0916/p/8893790.html
I. Introduction
I wanted to build facial expression analysis on top of face recognition. Many open-source libraries are available online, which makes development much more convenient. I chose the widely used dlib library for face detection and feature-point calibration, and using Python also shortens the development cycle.
Dlib contains a wide range of machine learning algorithms. All of its designs are highly modular, fast to execute, and very simple to use through a clean, modern C++ API. It is used in a variety of applications, including robotics, embedded devices, mobile phones, and large high-performance computing environments.
Although those applications sound rather lofty, building a small desktop tool that analyzes your own mood is still quite interesting.
I designed the recognition scheme according to my own ideas. The currently popular Keras-based approaches seem to use changes in mouth shape as the indicator of mood.
My idea is to use the proportion of mouth opening, the degree of eye opening, and the angle of the eyebrows as the three indicators for emotion analysis. However, facial features vary from person to person, and my calculation method is relatively simple, so the recognition accuracy is not very high.
Identification rules:
1. Mouth opening. The larger the ratio of the mouth-opening distance to the width of the face detection box, the stronger the emotion, whether very happy or extremely angry.
2. Eyebrow raising. The smaller the ratio between the height of feature points 17-21 (or 22-26) below the top of the face detection box and the height of the box, the more the eyebrows are raised, which can indicate surprise or happiness. The slant angle of the eyebrows also matters: eyebrows are generally raised when happy, and pressed down more sharply into a frown when angry.
3. Squinting. People automatically narrow their eyes when laughing and widen them when angry or surprised.
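As a minimal sketch, these three indicators could be computed from dlib's 68-point landmarks as follows. Here `shape` is a dlib landmark result and `d` the detected face rectangle; the specific indices (62/66 for the inner lips, 37/41 for the left eyelids, 19 for the mid-brow) are illustrative choices of mine, not necessarily the exact points used later in the post:

def emotion_indicators(shape, d):
    face_width = d.right() - d.left()
    face_height = d.bottom() - d.top()
    # mouth opening: inner-lip gap relative to the face box width
    mouth_open = (shape.part(66).y - shape.part(62).y) / face_width
    # eye opening: left upper/lower eyelid gap relative to the face box width
    eye_open = (shape.part(41).y - shape.part(37).y) / face_width
    # brow height: how far below the box top the mid-brow sits
    # (smaller means the brows are raised)
    brow_height = (shape.part(19).y - d.top()) / face_height
    return mouth_open, eye_open, brow_height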
Disadvantages of the system: it cannot capture subtle changes of expression; it can only roughly judge emotions such as happiness, anger, surprise, and neutrality.
Advantages of the system: simple structure and easy to use.
Applications: smile capture (seizing the beauty of the moment), helping to ease children's autism, interactive game development.
Due to the complexity of human emotions, these expressions cannot fully represent a person's inner emotional fluctuations. To improve the accuracy of the judgment, a comprehensive evaluation that also includes heart-rate detection and speech processing would be needed.
II. Development environment construction
1. Install Visual Studio 2015, since the latest release of dlib (19.10) requires this version of Visual Studio to build.
2. Install OpenCV (from a .whl file):
Download the required version of the .whl file from Pythonlibs, e.g. opencv_python-3.3.0+contrib-cp36-cp36m-win_amd64.whl.
Then install it locally with pip install, pointing at the downloaded file's location (e.g. pip install C:\download\xxx.whl).
3. Install dlib (from a .whl file):
Download the dlib .whl file matching your Python version from the link below, then open cmd in the download directory and install it directly with pip install. To learn how to use the various Python examples that ship with dlib, you also need to download the dlib source archive. dlib's website: http://dlib.net/ml.html
dlib .whl files for all versions: https://pypi.python.org/simple/dlib/
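After both wheels are installed, a quick import check confirms that the packages load (version numbers will vary with the wheel files you chose):

import cv2
import dlib

print(cv2.__version__)   # e.g. 3.3.0
print(dlib.__version__)  # e.g. 19.10.0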
4. To calibrate feature points on the face model, you also need a face shape predictor. This can be trained from your own photos, or you can use the trained predictor provided by the author of dlib:
http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
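The predictor downloads as a .bz2 archive and must be decompressed to a plain .dat file before dlib can load it; a minimal sketch using only Python's standard library:

import bz2

# decompress the downloaded archive next to it
with bz2.open("shape_predictor_68_face_landmarks.dat.bz2", "rb") as src, \
        open("shape_predictor_68_face_landmarks.dat", "wb") as dst:
    dst.write(src.read())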
III. Implementation ideas
IV. Specific steps
The first step is to use dlib for face detection:
import cv2
import dlib
from skimage import io

# Use dlib's frontal face detector
detector = dlib.get_frontal_face_detector()
# dlib's 68-point model, using the feature predictor trained by the author
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
# image path
img = io.imread("2.jpg")

# Generate a dlib image window
win = dlib.image_window()
win.clear_overlay()
win.set_image(img)

# Run the detector; the second argument upsamples the image once
dets = detector(img, 1)
print("face count:", len(dets))
for k, d in enumerate(dets):
    print("face", k + 1, ":",
          "left:", d.left(),
          "right:", d.right(),
          "top:", d.top(),
          "bottom:", d.bottom())
    # size of the face detection box
    width = d.right() - d.left()
    height = d.bottom() - d.top()
    print("width:", width, "height:", height)
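Following dlib's own face_detector.py example, the detected rectangles can also be drawn in the dlib window opened above, keeping it on screen until the user closes it:

# draw the face rectangles in the dlib window and wait
win.add_overlay(dets)
win.wait_until_closed()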
Next, instantiate a shape_predictor object, using the facial feature detector trained by the author of dlib to calibrate the 68 facial feature points.
During calibration, OpenCV's circle method is used to watermark each feature point's coordinates with its serial number.
# Use the predictor to locate the 68 feature points
shape = predictor(img, d)
# Mark the position of the 68 points
for i in range(68):
    cv2.circle(img, (shape.part(i).x, shape.part(i).y), 4, (0, 255, 0), -1, 8)
    cv2.putText(img, str(i), (shape.part(i).x, shape.part(i).y),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255))
# Show the processed image, then destroy the window
cv2.imshow('face', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
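One caveat: skimage's io.imread returns images in RGB channel order, while OpenCV assumes BGR, so the colors in the cv2 window above will appear swapped. Converting before display fixes this:

# convert skimage's RGB image to OpenCV's BGR order before showing it
img_bgr = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
cv2.imshow('face', img_bgr)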
At this point the information for the 68 feature points has been obtained; next, the coordinates of the 68 points must be combined into the judgment indicator for each expression.
Following the indicators described above, the mouth-opening proportion is calculated first. Since the distance to the camera changes the size of the face detection box, a proportion rather than an absolute distance is chosen as the indicator.
Before fixing a standard value for the indicator, I analyzed photos of happy faces and calculated the average mouth-opening ratio when people are happy.
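As a sketch of how that baseline could be collected (reusing the detector and predictor from above; the mouth_ratio() helper and the happy_faces folder are my hypothetical stand-ins, not the author's exact code):

import os

def mouth_ratio(shape, d):
    # inner-lip gap (points 62/66) relative to the face box width
    return (shape.part(66).y - shape.part(62).y) / (d.right() - d.left())

ratios = []
for name in os.listdir("happy_faces"):
    img = io.imread(os.path.join("happy_faces", name))
    for d in detector(img, 1):
        ratios.append(mouth_ratio(predictor(img, d), d))

if ratios:
    print("average happy mouth ratio:", sum(ratios) / len(ratios))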
The following is the data processing method for the eyebrows. A straight line is fitted through the five feature points above the left eyebrow, and the slope of the fitted line approximately represents the tilt of the eyebrow.
# eyebrows (excerpted from the project's class; assumes numpy is imported
# as np and self.face_width holds the width of the face detection box)
brow_sum = 0    # sum of point heights below the top of the face box
frown_sum = 0   # sum of horizontal distances between the two brows
line_brow_x = []
line_brow_y = []
for j in range(17, 22):  # left brow points 17-21 pair with right brow 22-26
    brow_sum += (shape.part(j).y - d.top()) + (shape.part(j + 5).y - d.top())
    frown_sum += shape.part(j + 5).x - shape.part(j).x
    line_brow_x.append(shape.part(j).x)
    line_brow_y.append(shape.part(j).y)

self.excel_brow_hight.append(round((brow_sum / 10) / self.face_width, 3))
self.excel_brow_width.append(round((frown_sum / 5) / self.face_width, 3))
brow_hight[0] += (brow_sum / 10) / self.face_width   # brow height
brow_width[0] += (frown_sum / 5) / self.face_width   # distance between brows

tempx = np.array(line_brow_x)
tempy = np.array(line_brow_y)
z1 = np.polyfit(tempx, tempy, 1)   # fit a line through the left brow points
self.brow_k = -round(z1[0], 3)    # negate the slope: the image y-axis points down
I calculated the mouth-opening proportion, mouth width, eye-opening degree, and eyebrow tilt of happy expressions on 25 faces, and imported the results into an Excel sheet to generate a line chart:
The line chart makes it obvious which parameters are usable, which are unreliable, and over what range a parameter can serve as an indicator.
The same method is used to produce the line charts for angry, surprised, and neutral expressions.
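The post does not say how the Excel sheet was produced; one possible sketch uses pandas (an assumption on my part), with hypothetical per-photo indicator lists collected as above:

import pandas as pd

# hypothetical lists, one value per analyzed photo
df = pd.DataFrame({
    "mouth_height": mouth_heights,
    "mouth_width": mouth_widths,
    "eye_height": eye_heights,
    "brow_slope": brow_slopes,
})
df.to_excel("happy_faces.xlsx", index=False)  # then chart the columns in Excel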
By analyzing data from many different facial expressions, reference values for each indicator can be obtained, and a simple facial expression classification standard can be written:
# Case by case
# Mouth open: either happy or surprised
if mouth_higth >= 0.03:
    if eye_hight >= 0.056:
        cv2.putText(im_rd, "amazing", (d.left(), d.bottom() + 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, 4)
    else:
        cv2.putText(im_rd, "happy", (d.left(), d.bottom() + 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, 4)
# Mouth closed: probably neutral or angry
else:
    if self.brow_k <= -0.3:   # brows slanting sharply down: a frown
        cv2.putText(im_rd, "angry", (d.left(), d.bottom() + 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, 4)
    else:
        cv2.putText(im_rd, "nature", (d.left(), d.bottom() + 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, 4)
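Since the complete project (from_video.py) runs this classification on every camera frame, the surrounding loop looks roughly like the sketch below, where analyze(im_rd) is a hypothetical stand-in for the detection, indicator, and putText code above:

cap = cv2.VideoCapture(0)      # open the default camera
while cap.isOpened():
    ok, im_rd = cap.read()
    if not ok:
        break
    analyze(im_rd)             # detect faces, compute indicators, label them
    cv2.imshow("camera", im_rd)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press q to quit
        break
cap.release()
cv2.destroyAllWindows()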
V. Actual operation effect
(The original post showed screenshots here of the program running and the labeled result after identification.)
Complete project code: https://gitee.com/Andrew_Qian/face/blob/master/from_video.py