Most image-editing apps are good enough for everyday tasks like retouching faces and applying filters.

But when you use them to repair damaged photos, the results are far less impressive. Tools like Photoshop typically fill gaps by borrowing pixels from adjacent areas, and the downside is obvious: they have no way of knowing what was actually missing from the image.

As a result, they can only patch edges and small corners; they cannot reconstruct an image as a whole. This flaw is most obvious in photo restoration, especially when you want to repair a decades-old photo and you know what the missing parts originally looked like.

But no matter: AI has now solved this problem. Recently, researchers at Nvidia developed an advanced deep learning technique that can restore missing content in an image. For example, given a photo of a face with an eye missing, a conventional editor like Photoshop would fill the region with pixels from neighboring areas, not with an eye; Nvidia's AI, however, knows to fill the hole with an eye, even if the result sometimes doesn't look quite right.

In fact, there have been earlier attempts to repair images with AI, but they were limited to rectangular holes, usually near the center of the image, could not handle arbitrary missing regions, and often required lengthy post-processing. Nvidia's AI is different.

It reliably handles missing regions of any shape, size, or distance from the edge of the photo. What's more, even as the missing region grows larger, the AI's repairs degrade gracefully rather than falling apart.

Why is this AI so good?

As the researchers explain in their published paper, the key is a method called partial convolution: they add partial convolution layers to the neural network. A partial convolution layer has two parts: a masked, renormalized convolution and a mask-update step. (Masked, renormalized convolution is also known as segmentation-aware convolution in image-segmentation tasks.) The layer renormalizes each output according to the validity of its corresponding receptive field, which guarantees that the output is independent of the values of the missing pixels in that receptive field; this is what lets the model patch holes of any shape, size, and location. In addition, the researchers applied several loss functions, including perceptual (feature-matching) and style losses computed from a VGG network, to make the AI's results more realistic and consistent with the surrounding image.
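To make the renormalization concrete, here is a minimal pure-Python sketch of one partial-convolution step on a single-channel image. This is an illustration of the idea, not Nvidia's implementation; the function name and the simple valid/invalid bookkeeping are our own.

```python
def partial_conv(image, mask, kernel):
    """One partial-convolution step on a single-channel image.

    image: 2D list of floats; mask: 2D list of 0/1 (1 = valid pixel);
    kernel: 2D list of weights. Returns (output, updated_mask).
    Pixels under the hole (mask == 0) are ignored, and each output is
    renormalized by (window size) / (valid pixels in the window), so the
    result never depends on the values hidden by the hole.
    """
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    oh, ow = h - kh + 1, w - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    new_mask = [[0] * ow for _ in range(oh)]
    window = kh * kw
    for i in range(oh):
        for j in range(ow):
            acc, valid = 0.0, 0
            for di in range(kh):
                for dj in range(kw):
                    m = mask[i + di][j + dj]
                    acc += kernel[di][dj] * image[i + di][j + dj] * m
                    valid += m
            if valid > 0:
                out[i][j] = acc * window / valid  # renormalize by valid count
                new_mask[i][j] = 1  # mask update: this window saw real data
    return out, new_mask
```

After each layer, the updated mask marks a position as valid whenever its window contained at least one valid pixel, so the hole shrinks layer by layer until the whole image is "valid."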

Even though the AI doesn’t know exactly what’s missing, the result often looks like a perfect match.
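The style loss mentioned above compares Gram matrices of VGG feature maps between the network's output and the ground truth. A minimal pure-Python sketch of that computation, with hypothetical flat channel lists standing in for real feature maps:

```python
def gram_matrix(features):
    """Gram matrix of a feature map given as a list of C channels,
    each a flat list of H*W activations. Entry (i, j) is the inner
    product of channels i and j, normalized by the element count."""
    c, n = len(features), len(features[0])
    return [[sum(a * b for a, b in zip(features[i], features[j])) / n
             for j in range(c)] for i in range(c)]

def style_loss(feats_out, feats_target):
    """L1 distance between the Gram matrices of output and target
    features; penalizes mismatched texture statistics."""
    g1, g2 = gram_matrix(feats_out), gram_matrix(feats_target)
    return sum(abs(a - b)
               for r1, r2 in zip(g1, g2)
               for a, b in zip(r1, r2))
```

Because the Gram matrix captures correlations between channels rather than exact pixel positions, this loss pushes the filled-in region to match the texture of its surroundings.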

How was this AI image-repair expert made?

It took a lot of effort to train the AI. The researchers first generated 55,116 masks with missing regions of arbitrary shape and size for training, and prepared about 25,000 more for testing. They then subdivided the masks into six categories based on the size of the missing region, to optimize the AI's recovery quality.
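One simple way to produce irregular hole masks like these is to carve random walks out of a full mask, then bucket each mask by its hole-to-image area ratio. This is a hypothetical recipe for illustration, not the paper's exact mask-generation procedure:

```python
import random

def random_mask(h, w, strokes=8, max_len=10, seed=None):
    """Generate an irregular binary mask (1 = valid pixel, 0 = hole)
    by carving random walks of random length into a full mask."""
    rng = random.Random(seed)
    mask = [[1] * w for _ in range(h)]
    for _ in range(strokes):
        y, x = rng.randrange(h), rng.randrange(w)
        for _ in range(rng.randint(1, max_len)):
            mask[y][x] = 0  # carve a hole pixel
            y = min(h - 1, max(0, y + rng.choice((-1, 0, 1))))
            x = min(w - 1, max(0, x + rng.choice((-1, 0, 1))))
    return mask

def hole_ratio(mask):
    """Fraction of pixels covered by the hole; masks can be bucketed
    into size categories by thresholding this ratio."""
    total = sum(len(row) for row in mask)
    holes = sum(row.count(0) for row in mask)
    return holes / total
```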

Using the PyTorch deep learning framework, accelerated by Nvidia Tesla V100 GPUs and cuDNN, the researchers trained the model on images from the ImageNet, Places2, and CelebA-HQ datasets combined with the generated masks.

During the training phase, the researchers applied the holes to complete training images from these datasets, letting the AI learn how to reconstruct the missing pixels.

In the test phase, they applied holes that had not been used during training to the test images, to verify without bias how accurately the AI reconstructs missing pixels.
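Introducing a hole into an image amounts to zeroing out the masked pixels and feeding the (damaged image, mask) pair to the network, with the original image as the reconstruction target. A one-function sketch of that compositing step (our own helper, for illustration):

```python
def apply_hole(image, mask):
    """Composite a hole into an image: pixels where mask == 0 are
    zeroed out. The result and the mask go to the network; the
    original image serves as the ground-truth target."""
    return [[p * m for p, m in zip(row_img, row_mask)]
            for row_img, row_mask in zip(image, mask)]
```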

The final trained model repairs images better than other inpainting techniques. In the paper, the researchers compared it against four other methods:

  • PatchMatch (PM): a state-of-the-art image repair method that does not use machine learning.

  • GL: the image inpainting technique proposed by Iizuka et al.

  • GntIpt: a generative image inpainting method with contextual attention, proposed by Yu et al.

  • Conv: a conventional convolutional network architecture without partial convolution.

The researchers named their own AI PConv. Here is a comparison of the five methods patching the same image:

What we can see is that Nvidia's AI repairs images better than any other method, regardless of the image. More examples can be found here.

Of course, the AI's reconstructions are not always perfect. In the video, you can see that some facial features are obviously "borrowed" from elsewhere. And if the missing region is very large, there is not enough information for the AI to reconstruct the pixels, and the quality of the repair suffers.

But all in all, it is still an impressive and practical technique. With it, photos that previously seemed impossible to recover can have their missing content restored without hours of painful manual repair. In addition, Nvidia's researchers say the AI handles large photos without compromising the image's authenticity. Perhaps the days of patching and matting with Photoshop tools are numbered.