By Feige ("Flying Pigeon") and Chuanzhang ("Captain"), senior algorithm engineers at Getui


Preface

Today is the annual Lantern Festival. Mr.Tech first wishes everyone a happy Lantern Festival and a joyful reunion. As promised before the end of the year, here is a Lantern Festival treat: Getui R&D engineers Feige and Chuanzhang combined their romantic, artistic temperament with style transfer technology to create a unique festive atmosphere for everyone. As shown in the three columns below, they took the original images in column 1 as the basis and transferred the style of column 2 onto them to produce the distinctive Lantern Festival pictures in column 3. Have you ever seen Lantern Festival works like these?




(The following is an introduction to the principle of style transfer.)


A style transfer application example

Prisma, the phenomenally popular app from a Russian team, is a simple image beautification tool. With so many image enhancement applications available, what makes Prisma special? Let's take a look at its beautification effect.


Prisma takes the original image on the left and renders it into the image on the right, transforming a realistic photograph into an artistic oil painting. The process of going from the left image to the right image is called image style transfer, and that is what we want to talk about today.




Introduction to style transfer

Image style transfer refers to combining a content image and a style image to generate a new image that is similar to the content image in content but similar to the style image in style. What counts as "similar" is a rather hard problem to define. Before neural networks were applied to style transfer, methods all followed the same idea: based on a particular image's style, build a mathematical or statistical model corresponding to that style, then alter the content image to be transferred so that it better conforms to the established model.


The neural-network-based approach to image style transfer was introduced in 2015 by two papers: Texture Synthesis Using Convolutional Neural Networks and A Neural Algorithm of Artistic Style. The first proposes a method for modeling texture with deep learning. The second, based on convolutional neural networks, treats deep features as an approximation of image content and obtains a system that separates image content from image style; the content and style are then recombined to form the style-transferred image. Building on these two papers, researchers abroad subsequently published two further technical achievements: Perceptual Losses for Real-Time Style Transfer and Super-Resolution, which transfers a fixed style in real time, and Meta Networks for Neural Style Transfer, which rapidly transfers arbitrary styles.


The following is a brief introduction to the latter three of these papers on neural-network-based image style transfer.





How style transfer works

The paper Texture Synthesis Using Convolutional Neural Networks proposes the idea that textures can be described by statistical models over the local features of an image. With this premise, neural-network-based image style transfer models emerged. A Neural Algorithm of Artistic Style describes in detail how to use neural networks to achieve image style transfer.


In convolutional neural networks, shallow features generally represent visual attributes of the image such as edges, contours, texture and color, while deep features capture the image's semantics; the deeper the layer, the stronger the semantic features become, i.e., the more representative they are of the image's specific content. To compare the content similarity of two images, we can therefore compare the similarity of their features in the deep layers of the network. Here, Euclidean distance is used to measure the similarity between features; the specific formula is as follows:
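In the paper's notation, with $P^l$ and $F^l$ denoting the layer-$l$ feature representations of the content image and the generated image:

$$\mathcal{L}_{\text{content}} = \frac{1}{2} \sum_{i,j} \left( F^{l}_{ij} - P^{l}_{ij} \right)^2$$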

We use the shallow features of the two images to compare their stylistic similarity. Since shallow features also contain many local features of the image, Euclidean distance cannot be used directly to compare similarity; instead, the paper uses the Gram matrix to compare style similarity.

The formula for calculating style similarity is as follows:
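In the paper, the Gram matrix of layer $l$ is $G^{l}_{ij} = \sum_{k} F^{l}_{ik} F^{l}_{jk}$, i.e., the inner products between pairs of flattened feature maps. With $A^l$ the Gram matrix of the style image, $N_l$ the number of feature maps at layer $l$ and $M_l$ their spatial size, the per-layer loss and the total style loss are:

$$E_l = \frac{1}{4 N_l^2 M_l^2} \sum_{i,j} \left( G^{l}_{ij} - A^{l}_{ij} \right)^2, \qquad \mathcal{L}_{\text{style}} = \sum_{l} w_l E_l$$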


The similarity between two images can be evaluated by the sum of content loss and style loss:
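With weighting factors $\alpha$ and $\beta$ trading off content fidelity against stylization strength:

$$\mathcal{L}_{\text{total}} = \alpha \, \mathcal{L}_{\text{content}} + \beta \, \mathcal{L}_{\text{style}}$$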

An intuitive diagram of the network structure is as follows:

(Image from A Neural Algorithm of Artistic Style)


For a concrete code implementation, see Neural-Style-Transfer:

https://github.com/titu1994/Neural-Style-Transfer
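For intuition, below is a minimal PyTorch sketch of this optimization loop. It is an illustrative simplification, not the linked repository's code: the layer indices, loss weights and step count are assumptions, a batch size of 1 is assumed, and images are assumed to be already normalized with ImageNet statistics.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Frozen pretrained VGG-19 serves as the feature extractor.
vgg = vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYER = 21                  # assumed deep layer (conv4_2)
STYLE_LAYERS = [0, 5, 10, 19, 28]   # assumed shallow-to-deep conv layers

def features(x, layers):
    """Collect feature maps at the requested layer indices."""
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats[i] = x
    return feats

def gram(f):
    """Gram matrix of a (1, C, H, W) feature map: channel-pair dot
    products over flattened spatial positions, which removes location
    information while keeping style statistics."""
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

def transfer(content, style, steps=300, alpha=1.0, beta=1e5):
    """Optimize the pixels of the generated image directly."""
    target_c = features(content, [CONTENT_LAYER])[CONTENT_LAYER]
    target_g = {i: gram(f) for i, f in features(style, STYLE_LAYERS).items()}
    x = content.clone().requires_grad_(True)  # start from the content image
    opt = torch.optim.Adam([x], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        c_loss = F.mse_loss(features(x, [CONTENT_LAYER])[CONTENT_LAYER], target_c)
        s_loss = sum(F.mse_loss(gram(f), target_g[i])
                     for i, f in features(x, STYLE_LAYERS).items())
        (alpha * c_loss + beta * s_loss).backward()
        opt.step()
    return x.detach()
```

Note that, unlike ordinary training, the variables being optimized here are the pixels of the generated image itself, which is why each content-style pair requires its own optimization run.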



Real-time style transfer

The image style transfer model above achieves relatively good transfer results, but it requires a separate optimization run for every combination of content image and style image, so generating an image is slow. An improvement was therefore proposed: first build a transformation network, then achieve fast style transfer by optimizing the weights of that transformation network. The paper Perceptual Losses for Real-Time Style Transfer and Super-Resolution details this process.


To solve the speed problem, this paper trains a deep convolutional network end to end. Given a single content image as input, the model applies the style to it in one forward pass.

(Image from Perceptual Losses for Real-Time Style Transfer and Super-Resolution)


First, the paper uses a pretrained VGG-16 as the image feature extractor. As can be seen from the figure above, only the feature maps of the relu3_3 layer are used for the content loss, while the feature maps of relu1_2, relu2_2, relu3_3 and relu4_3 are used for the style loss.


Next, the method needs a pretrained loss network, which is used to compute perceptual losses between the output and the content and style images. But how to define the loss function is a difficult problem: it is hard for a computer to pin down exactly what style an image has, since style is relatively subjective. What is style, anyway? One way to put it: whatever remains of an image once you take away its content is its style. Content cares most about where particular pixels are located, whereas style is insensitive to location information. The paper therefore again adopts the Gram matrix:

The Gram matrix is computed per feature layer. For a feature layer with N channels, pairwise combination of channels yields N × N pairs; for each pair, the two feature maps are flattened and their dot product is taken. The resulting Gram matrix thereby discards the positional information in the features.
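In the paper's notation, for the feature map $\phi_j(x)$ of shape $C_j \times H_j \times W_j$ at layer $j$ of the loss network:

$$G^{\phi}_{j}(x)_{c,c'} = \frac{1}{C_j H_j W_j} \sum_{h=1}^{H_j} \sum_{w=1}^{W_j} \phi_j(x)_{h,w,c} \, \phi_j(x)_{h,w,c'}$$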

(Image from Perceptual Losses for Real-Time Style Transfer and Super-Resolution)


The figure above shows the feature maps of each layer after optimizing against the corresponding style loss. Feature maps at different layers reflect the style of the original image at different granularities. While preserving the style, these feature maps discard the pixel-location information of the original image, so they do not interfere with the content image.


In addition to the perceptual losses above, in order to preserve low-level features after style conversion, the paper also introduces two simple losses. One is the pixel loss:
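From the paper, with $\hat{y}$ the output image and $y$ a ground-truth target image, both of shape $C \times H \times W$:

$$\ell_{\text{pixel}}(\hat{y}, y) = \frac{\|\hat{y} - y\|_2^2}{C H W}$$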

This loss can only be used when a ground-truth target image is available for the network to match.


The other is the total variation loss:
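The paper cites this regularizer without an explicit formula; a common form sums squared differences between neighboring pixels:

$$\ell_{TV}(\hat{y}) = \sum_{i,j} \left( (\hat{y}_{i,j+1} - \hat{y}_{i,j})^2 + (\hat{y}_{i+1,j} - \hat{y}_{i,j})^2 \right)$$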

This loss function is used to improve the smoothness of the output image.

For implementation details, see fast_neural_style:


https://github.com/pytorch/examples/tree/master/fast_neural_style
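Putting the pieces together, here is a minimal PyTorch sketch of the feed-forward training scheme. It is an illustrative simplification, not the linked example's code: `transformer` stands for the image transformation network (its architecture is omitted), and the layer indices and loss weights are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG-16 up to relu4_3 serves as the loss network.
vgg = vgg16(pretrained=True).features[:23].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = [3, 8, 15, 22]   # relu1_2, relu2_2, relu3_3, relu4_3
CONTENT_IDX = 2                 # position of relu3_3 in STYLE_LAYERS

def vgg_feats(x):
    """Collect feature maps at the style/content layers."""
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            feats.append(x)
    return feats

def gram(f):
    """Batched Gram matrices, normalized by layer size."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f.bmm(f.transpose(1, 2)) / (c * h * w)

def train(transformer, loader, style_img, epochs=2,
          content_w=1.0, style_w=1e5, tv_w=1e-6):
    """Train the transformation network once for a single style."""
    opt = torch.optim.Adam(transformer.parameters(), lr=1e-3)
    style_grams = [gram(f) for f in vgg_feats(style_img)]
    for _ in range(epochs):
        for x in loader:                       # batches of content images
            y = transformer(x)                 # stylized output
            y_feats, x_feats = vgg_feats(y), vgg_feats(x)
            # Content loss at relu3_3 only.
            c_loss = F.mse_loss(y_feats[CONTENT_IDX], x_feats[CONTENT_IDX])
            # Style loss: Gram-matrix mismatch at all four layers.
            s_loss = 0.0
            for f, g in zip(y_feats, style_grams):
                gy = gram(f)
                s_loss = s_loss + F.mse_loss(gy, g.expand_as(gy))
            # Total variation regularizer for spatial smoothness.
            tv = ((y[..., 1:, :] - y[..., :-1, :]) ** 2).sum() + \
                 ((y[..., :, 1:] - y[..., :, :-1]) ** 2).sum()
            loss = content_w * c_loss + style_w * s_loss + tv_w * tv
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Once trained, stylizing a new content image is just a single forward pass through `transformer`, which is what makes this approach real-time; the price is that each trained network is tied to one fixed style.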





Fast arbitrary style transfer

A typical image style transfer network can only convert to one style. In the paper Meta Networks for Neural Style Transfer, however, the authors design a meta network: images of different styles are input into the meta network, which outputs different weight parameters. These parameters are then loaded into the transformation network, which performs style transfer on the content image. The model structure is shown in the figure below:

(Image from Meta Networks for Neural Style Transfer)


The style image is input into a fixed VGG-16 network to obtain style features; the weight parameters are then obtained through two fully connected layers and loaded into the image transformation network, which generates the transferred image.
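Below is a minimal sketch of this idea. The dimensions and the single output head are illustrative assumptions; the paper's meta network emits weights for every conv layer of the transformation network.

```python
import torch
import torch.nn as nn

class MetaNet(nn.Module):
    """Maps a style-feature vector to transformation-network weights."""

    def __init__(self, style_dim=1920, hidden_dim=1792,
                 out_channels=64, in_channels=64, ksize=3):
        super().__init__()
        self.shape = (out_channels, in_channels, ksize, ksize)
        n_params = out_channels * in_channels * ksize * ksize
        self.hidden = nn.Linear(style_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, n_params)

    def forward(self, style_features):
        # style_features: a single flattened style vector from the
        # fixed VGG-16, shape (style_dim,).
        h = torch.relu(self.hidden(style_features))
        # Reshape the flat output into one conv kernel of the
        # (assumed) transformation network.
        return self.head(h).view(self.shape)
```

The predicted kernels would then be loaded into the transformation network (for example via torch.nn.functional.conv2d), after which any content image is stylized in a single forward pass, with no per-style retraining.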


Summary

A Neural Algorithm of Artistic Style trains on a content image and a style image, iteratively updating the generated image to achieve style transfer. Perceptual Losses for Real-Time Style Transfer and Super-Resolution builds a fixed-style image transfer model, trains and optimizes it, and then takes arbitrary content images as input to achieve fast image style transfer. Meta Networks for Neural Style Transfer uses a model to generate the weights, which makes it possible to quickly transfer any style onto any content image.


If you are not satisfied with Prisma's fixed filters, you can use the principles of style transfer to build your own photo beautification tool and give your images your own unique artistic style.


Conclusion

This Lantern Festival, Getui's Feige and Chuanzhang used their technical and artistic expertise to create unique Lantern Festival works and convey the warmth of the festival to everyone. We believe you, too, can use style transfer and other techniques to make your own Lantern Festival out of the ordinary. Once again, we wish you a happy Lantern Festival!


References:

1. Texture Synthesis Using Convolutional Neural Networks

2. A Neural Algorithm of Artistic Style

3. Perceptual Losses for Real-Time Style Transfer and Super-Resolution

4. Meta Networks for Neural Style Transfer

5. Neural-Style-Transfer-Papers-Code

6. A Brief History of Neural Style