Recently I noticed several photo apps offering a feature that predicts how a face will age. It looked fun to try, so I decided to build it myself. Searching the Internet turned up no complete open-source implementation to reference; a few related blog posts only sketched some ideas and showed sample results. So, drawing on those earlier ideas, I implemented a face-aging scheme of my own. This article shares the concrete implementation steps and core code. The full demo code is attached at the end, and the final effect looks like this:

Face aging implementation

The principle is to "paste" a prefabricated wrinkle texture onto the face region of the original image. That sounds simple enough, but the concrete implementation raises several problems, so let's work backwards from what needs solving. First, how do we make the prefabricated wrinkle texture blend naturally with the face in the original image? Since the skin tone and brightness of faces vary greatly between photos, providing a different wrinkle texture for every skin tone is clearly infeasible. Second, the face region of the prefabricated wrinkle texture will obviously not line up with the face in the original image, so the wrinkle texture must undergo a complex deformation driven by the facial feature points. With these considerations, the implementation breaks down into the following three steps:

  • 1. Identify the face region in the image and extract the facial feature points
  • 2. Deform each region of the wrinkle texture according to those feature points
  • 3. Blend the deformed wrinkle texture naturally onto the face region identified in the original image

Let’s do it step by step:

Recognizing facial feature points

This step is straightforward thanks to the Face++ platform: after a simple (free) registration, the client only needs to upload the image and call the relevant API. The facial feature point information returned looks roughly like the figure below (image from Face++):
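As an illustration, a detect request against the Face++ v3 API can be sketched as below. The endpoint and parameter names follow the public Face++ documentation, but verify them against the current API reference before use; proper percent-encoding of the base64 payload is omitted for brevity.

```swift
import Foundation

/// Builds the form body for a Face++ v3 detect request.
/// Parameter names (api_key, api_secret, image_base64, return_landmark)
/// are taken from the Face++ docs; double-check against the current reference.
func makeDetectBody(apiKey: String, apiSecret: String, imageBase64: String) -> String {
    let params = [
        "api_key": apiKey,
        "api_secret": apiSecret,
        "image_base64": imageBase64,
        "return_landmark": "1"  // ask the API to return facial landmark points
    ]
    return params.map { "\($0.key)=\($0.value)" }.joined(separator: "&")
}

// Sending it (sketch):
// var request = URLRequest(url: URL(string: "https://api-cn.faceplusplus.com/facepp/v3/detect")!)
// request.httpMethod = "POST"
// request.httpBody = makeDetectBody(apiKey: "…", apiSecret: "…", imageBase64: "…").data(using: .utf8)
```

The response JSON contains a `landmark` object per detected face, whose points drive the deformation in the next step.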

Deforming the wrinkle texture

Extracting the feature point coordinates of the wrinkle texture

Before deformation, we first need the feature point coordinates on the wrinkle texture itself. Since the wrinkle texture is prepared in advance, these coordinates can be extracted once with any tool that reads pixel coordinates from an image:
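Once measured, the coordinates can simply be hard-coded as an array. The points and values below are made up for illustration; the real values come from your own texture:

```swift
// Feature point coordinates (in pixels) measured once on the wrinkle texture.
// All values below are placeholders; extract the real ones from your
// own texture with a pixel-coordinate tool.
let wrinkleLandmarks: [SIMD2<Float>] = [
    [112, 86],   // left eye corner (example)
    [201, 88],   // right eye corner (example)
    [156, 160],  // nose tip (example)
    [156, 214],  // mouth center (example)
]
```

On Apple platforms `vector_float2` is a typealias for `SIMD2<Float>`, so this array can be fed directly to the deformation code later.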

Implementing the deformation algorithm

Since this is a complex feature-point-based deformation, OpenGL is used to render the wrinkle texture. The iOS SDK provides GLKit, a wrapper that makes OpenGL easy to use; we just create a GLKViewController:


import UIKit
import GLKit

class FaceGLKViewController: GLKViewController {
    // ...
    override func glkView(_ view: GLKView, drawIn rect: CGRect) {
        // ...
    }
    // ...
}

Create a new class, ImageMesh, to record the mesh point information of the wrinkle texture:

class ImageMesh: NSObject {
    var verticalDivisions = 0
    var horizontalDivisions = 0
    var indexArrSize = 0
    var vertexIndices: [Int]? = nil
    // OpenGL coordinate arrays
    var verticesArr: [Float]? = nil
    var textureCoordsArr: [Float]? = nil
    var texture: GLKTextureInfo? = nil
    var image_width: Float = 0.0
    var image_height: Float = 0.0
    var numVertices: Int = 0
    var xy: [vector_float2]? = nil
    var ixy: [vector_float2]? = nil

    convenience init(vd: Int, hd: Int) {
        self.init()
        verticalDivisions = vd
        horizontalDivisions = hd
        numVertices = (verticalDivisions + 1) * (horizontalDivisions + 1)
        indexArrSize = 2 * verticalDivisions * (horizontalDivisions + 1)
        verticesArr = [Float](repeating: 0.0, count: 2 * indexArrSize)
        textureCoordsArr = [Float](repeating: 0.0, count: 2 * indexArrSize)
        vertexIndices = [Int](repeating: 0, count: indexArrSize)
        xy = [vector_float2](repeating: [0.0, 0.0], count: numVertices)
        ixy = [vector_float2](repeating: [0.0, 0.0], count: numVertices)

        // Build the triangle-strip vertex indices, row by row
        var count = 0
        for i in 0..<verticalDivisions {
            for j in 0...horizontalDivisions {
                vertexIndices![count] = (i + 1) * (horizontalDivisions + 1) + j; count += 1
                vertexIndices![count] = i * (horizontalDivisions + 1) + j; count += 1
            }
        }

        // Fill in the texture coordinates for each strip pair
        let xIncrease = 1.0 / Float(horizontalDivisions)
        let yIncrease = 1.0 / Float(verticalDivisions)
        count = 0
        for i in 0..<verticalDivisions {
            for j in 0...horizontalDivisions {
                let currX = Float(j) * xIncrease
                let currY = 1 - Float(i) * yIncrease
                textureCoordsArr![count] = currX; count += 1
                textureCoordsArr![count] = currY - yIncrease; count += 1
                textureCoordsArr![count] = currX; count += 1
                textureCoordsArr![count] = currY; count += 1
            }
        }
    }
    // ...
}

Then call the OpenGL API to finish rendering:

override func glkView(_ view: GLKView, drawIn rect: CGRect) {
    // Transparent background
    glClearColor(0.0, 0.0, 0.0, 0.0)
    glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
    glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))
    glEnable(GLenum(GL_BLEND))
    if isSetup {
        renderImage()
    }
}

func renderImage() {
    self.effect?.texture2d0.name = (mainImage?.texture?.name)!
    self.effect?.texture2d0.enabled = GLboolean(truncating: true)
    self.effect?.prepareToDraw()
    glEnableVertexAttribArray(GLuint(GLKVertexAttrib.position.rawValue))
    glEnableVertexAttribArray(GLuint(GLKVertexAttrib.texCoord0.rawValue))
    glVertexAttribPointer(GLuint(GLKVertexAttrib.position.rawValue), 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 8, mainImage?.verticesArr)
    glVertexAttribPointer(GLuint(GLKVertexAttrib.texCoord0.rawValue), 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 8, mainImage?.textureCoordsArr)
    // Draw one triangle strip per mesh row
    for i in 0..<(mainImage?.verticalDivisions)! {
        glDrawArrays(GLenum(GL_TRIANGLE_STRIP),
                     GLint(i * (self.mainImage!.horizontalDivisions * 2 + 2)),
                     GLsizei(self.mainImage!.horizontalDivisions * 2 + 2))
    }
}

The next step is the feature-point-based deformation itself. The algorithm follows the paper Image Deformation Using Moving Least Squares. The paper's content and derivation are fairly approachable and culminate in a closed-form formula, so read it in detail if you are interested. For convenience, this scheme implements the algorithm in Swift. With the feature points on the wrinkle texture as the deformation source points and the face feature points returned by Face++ as the deformation target points, the wrinkle texture is deformed:
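Concretely, the code below follows the paper's similarity form, with the final length-preserving rescale of the rigid variant. For each grid vertex $v$, with source points $p_j$ on the wrinkle texture and target points $q_j$ from Face++ (up to the column/row convention of `matrix_float2x2`):

```latex
w_j = \frac{1}{\lvert p_j - v \rvert^{2}}, \qquad
p_* = \frac{\sum_j w_j\, p_j}{\sum_j w_j}, \qquad
q_* = \frac{\sum_j w_j\, q_j}{\sum_j w_j}

\hat{p}_j = p_j - p_*, \qquad
\hat{q}_j = q_j - q_*, \qquad
\mu = \sum_j w_j\, \lvert \hat{p}_j \rvert^{2}

M = \sum_j w_j
\begin{pmatrix} \hat{q}_{j,x} & \hat{q}_{j,y} \\ \hat{q}_{j,y} & -\hat{q}_{j,x} \end{pmatrix}
\begin{pmatrix} \hat{p}_{j,x} & \hat{p}_{j,y} \\ \hat{p}_{j,y} & -\hat{p}_{j,x} \end{pmatrix},
\qquad
f(v) = \lvert v - p_* \rvert \, \frac{M (v - p_*)}{\lvert M (v - p_*) \rvert} + q_*
```

Vertices that coincide with a source point are mapped straight to that point, which is why the code special-cases very small distances.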

func setupImage(image: UIImage, width: CGFloat, height: CGFloat, original_vertices: [float2], target_vertices: [float2]) {
    let _ = mainImage?.loadImage(image: image, width: width, height: height)
    setupViewSize()

    let count = target_vertices.count
    var p = original_vertices
    // Convert the coordinate system
    for i in 0..<count {
        p[i] = [p[i].x - Float(image.size.width / 2), Float(image.size.height / 2) - p[i].y]
        p[i] = [p[i].x * Float(width) / Float(image.size.width), p[i].y * Float(height) / Float(image.size.height)]
    }

    let q = target_vertices
    var w = [Float](repeating: 0.0, count: count)

    // Calculate the deformation weights
    for i in 0..<(self.mainImage?.numVertices)! {
        var ignore = false
        for j in 0..<count {
            let distanceSquare = ((self.mainImage?.ixy![i])! - p[j]).squaredNorm()
            if distanceSquare < 10e-6 {
                // The vertex coincides with a control point; map it directly
                self.mainImage?.xy![i] = p[j]
                ignore = true
            }
            w[j] = 1 / distanceSquare
        }

        if ignore {
            continue
        }

        var pcenter = vector_float2()
        var qcenter = vector_float2()
        var wsum: Float = 0.0
        for j in 0..<count {
            wsum += w[j]
            pcenter += w[j] * p[j]
            qcenter += w[j] * q[j]
        }

        pcenter /= wsum
        qcenter /= wsum

        var ph = [vector_float2](repeating: [0.0, 0.0], count: count)
        var qh = [vector_float2](repeating: [0.0, 0.0], count: count)
        for j in 0..<count {
            ph[j] = p[j] - pcenter
            qh[j] = q[j] - qcenter
        }

        // Start the matrix transformation
        var M = matrix_float2x2()
        var P: matrix_float2x2? = nil
        var Q: matrix_float2x2? = nil
        var mu: Float = 0.0
        for j in 0..<count {
            P = matrix_float2x2([ph[j][0], ph[j][1]], [ph[j][1], -ph[j][0]])
            Q = matrix_float2x2([qh[j][0], qh[j][1]], [qh[j][1], -qh[j][0]])
            M += w[j] * Q! * P!
            mu += w[j] * ph[j].squaredNorm()
        }

        self.mainImage?.xy![i] = M * ((self.mainImage?.ixy![i])! - pcenter) / mu
        self.mainImage?.xy![i] = ((self.mainImage?.ixy![i])! - pcenter).norm() * ((self.mainImage?.xy![i])!).normalized() + qcenter
    }

    self.mainImage?.deform()
    isSetup = true
}
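The code above assumes small helper methods (`squaredNorm`, `norm`, `normalized`) on `vector_float2` that are not part of simd's built-in surface. A minimal sketch might look like this, relying on `vector_float2` being a typealias for `SIMD2<Float>` on Apple platforms:

```swift
// vector_float2 is a typealias for SIMD2<Float> on Apple platforms,
// so extending SIMD2 covers the calls made in setupImage above.
extension SIMD2 where Scalar == Float {
    /// Squared Euclidean length
    func squaredNorm() -> Float { return x * x + y * y }
    /// Euclidean length
    func norm() -> Float { return squaredNorm().squareRoot() }
    /// Unit-length copy (undefined for the zero vector)
    func normalized() -> SIMD2<Float> { return self / norm() }
}
```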

Finally, the deformed wrinkle texture is as follows:

Blending the wrinkle texture with the face

Directly overlaying the wrinkle texture on the face is clearly not acceptable; what we need is to blend the original face image with the wrinkle texture appropriately.

There are many common image blend modes, such as overlay, soft light, and hard light, and each mode's algorithm is simple to implement; the specific formulas are summarized in this Zhihu post: Photoshop layer blend mode formulas. Even more conveniently, CGContext has built-in implementations of these common blend modes, which can be used directly via UIImage's draw method. In my tests, the soft light blend mode gives the best result:

/// Ages the face
///
/// - Parameters:
///   - face: face image
///   - wrinkle: wrinkle image
///   - faceRect: face region
/// - Returns: composited result
func softlightMerge(face: UIImage, wrinkle: UIImage, faceRect: CGRect) -> UIImage? {
    let rendererRect = CGRect(x: 0, y: 0, width: face.size.width, height: face.size.height)
    let renderer = UIGraphicsImageRenderer(bounds: rendererRect)
    let outputImage = renderer.image { ctx in
        UIColor.white.set()
        ctx.fill(rendererRect)
        face.draw(in: rendererRect, blendMode: .normal, alpha: 1)
        // Soft light blend
        wrinkle.draw(in: faceRect, blendMode: .softLight, alpha: 1)
    }
    return outputImage
}
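For reference, the per-channel soft light formula can be sketched as below. This is the Photoshop-style variant from the formulas linked above; note that `CGBlendMode.softLight` follows the PDF/W3C definition, which differs slightly in the upper branch.

```swift
/// Photoshop-style soft light for a single channel.
/// base and blend are normalized channel values in 0...1.
func softLight(base a: Double, blend b: Double) -> Double {
    if b < 0.5 {
        // Darkens toward a multiply-like result
        return 2 * a * b + a * a * (1 - 2 * b)
    } else {
        // Lightens toward a screen-like result
        return 2 * a * (1 - b) + a.squareRoot() * (2 * b - 1)
    }
}

// A mid-gray blend value (0.5) leaves the base channel unchanged,
// which is why the wrinkle texture only darkens/lightens where it
// deviates from gray.
```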

After soft light blending, the result preserves the original skin tone regardless of the input face. The final result is as follows:

Conclusion

There are many ways to implement face aging. The advantage of the scheme presented here is that it does not need to account for the skin tone, brightness, or other properties of the original face: a single prefabricated wrinkle texture works for most photos. The drawback is that the aging shows up only as added "wrinkle lines," so the overall effect still falls well short of real aging.

This scheme is implemented in Swift on iOS, but the OpenGL rendering and related algorithms can easily be reproduced on Android and other platforms. The feature-point-based MLS deformation algorithm can also power other features, such as beautification, face slimming, eye enlargement, and virtual makeup, so the approach is highly extensible.

Program source code

  • FaceAgingDemo

That's all for this share. If you enjoyed it, feel free to like 👍 or follow, and please point out any mistakes in the comments.

This article is original; please credit the source when reposting.