For reprints, please credit the source: blog.csdn.net/jiangjunsho…
GAN currently has many applications in face generation. DeepFake, a project that surfaced on Reddit and Twitter earlier this year, simply swaps faces in videos and photos. This might not seem difficult for a graphics expert, but DeepFake relies entirely on what the computer learns on its own, and the results are good enough that the modifications are sometimes barely noticeable.
The first version of DeepFake used a plain autoencoder, but an improved variant called faceswap-GAN is available online. Here is how it works. The figure below is a schematic of faceswap-GAN's training and testing stages. Training requires a large amount of face A data; each image is first distorted by the algorithm so that it no longer matches the original face A. An autoencoder then reconstructs the face and also predicts a mask, and the mask is used to blend the reconstruction with the input so that the original face A is restored. At test time, the network treats an image of face B as if it were a distorted face A and runs the same steps, "restoring" it to the appearance of face A.
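The warp-then-restore idea can be sketched in a few lines of numpy. This is only an illustration, not faceswap-GAN's actual code: `random_warp` is a crude stand-in for the real warping step, and the autoencoder and predicted mask are replaced by placeholders.

```python
import numpy as np

def random_warp(img, strength=2, seed=0):
    """Distort the face by randomly jittering the pixel sampling grid
    (a crude stand-in for faceswap-gan's warping step)."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ys = np.clip(ys + rng.integers(-strength, strength + 1, size=(h, w)), 0, h - 1)
    xs = np.clip(xs + rng.integers(-strength, strength + 1, size=(h, w)), 0, w - 1)
    return img[ys, xs]

def merge_with_mask(reconstruction, inp, mask):
    """Blend the autoencoder's reconstruction into the input image
    using the predicted soft mask (values in [0, 1])."""
    return mask * reconstruction + (1.0 - mask) * inp

# toy 8x8 grayscale "face"
face_a = np.linspace(0.0, 1.0, 64).reshape(8, 8)
warped = random_warp(face_a)          # distorted input fed to the autoencoder
mask = np.full_like(face_a, 0.7)      # placeholder for the predicted mask
# pretend the autoencoder reconstructed face_a perfectly:
restored = merge_with_mask(face_a, warped, mask)
```

At test time the only change is the input: a face B image goes through the same warp-free path, and the mask decides, pixel by pixel, how much of the generated face A replaces the original.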
The figure below shows faceswap-GAN's objective function, which consists of three loss terms. The first is the reconstruction loss, which keeps the reconstructed face similar to the original. The second is the adversarial loss from the GAN, where a discriminator judges whether the output face is real or generated. The last term is optional: a perceptual loss on the face data, which measures the overall similarity between the original image and the generated image in a feature space.
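The three-term objective can be sketched as follows. This is a simplified illustration under my own assumptions, not faceswap-GAN's exact formulation: the adversarial term is written in least-squares-GAN form, the perceptual features would in practice come from a fixed pretrained network, and the weights `w_adv` and `w_perc` are made-up values.

```python
import numpy as np

def reconstruction_loss(real, fake):
    """L1 distance between the original face and the generated face."""
    return np.mean(np.abs(real - fake))

def adversarial_loss(d_fake):
    """Least-squares GAN generator loss: push the discriminator's
    scores on generated faces toward 1 ("real")."""
    return np.mean((d_fake - 1.0) ** 2)

def perceptual_loss(feat_real, feat_fake):
    """L2 distance between feature vectors of a fixed pretrained
    network; this term is optional."""
    return np.mean((feat_real - feat_fake) ** 2)

def total_loss(real, fake, d_fake, feat_real=None, feat_fake=None,
               w_adv=0.1, w_perc=0.01):
    """Weighted sum of the three terms (weights are illustrative)."""
    loss = reconstruction_loss(real, fake) + w_adv * adversarial_loss(d_fake)
    if feat_real is not None:  # include the optional perceptual term
        loss += w_perc * perceptual_loss(feat_real, feat_fake)
    return loss
```

During training, `d_fake` comes from the discriminator's forward pass on the generated face, and the generator is updated to minimize this combined loss while the discriminator is trained separately to tell real from generated.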
The complete faceswap-gan code can be found in its GitHub repository. In terms of applications, face swapping is still mostly used for entertainment. For example, SnapChat, a well-known social app, has a very popular filter that swaps two users' faces. Commercially, the technique could be applied in film post-production; replacing a stunt double's face with the actor's, for instance, can be handled entirely by this technology.
GAN has also been applied to many other face transformations, such as face aging. As shown below, the input is a photo of the user at a younger age, and the output is a prediction of how the user will look as they grow older.