At the time, I was rewriting the codebase of our neural-network image upscaling service to get it ready for bigger, faster APIs and models. For image-generation work (super-resolution, deblurring, etc.) you would normally rely on a typical image processing library such as OpenCV or PIL. I had always wondered how well TensorFlow's own image processing ops worked; in theory, they should be faster. So I decided to do all the image processing in TensorFlow, building datasets with dataset.map to keep everything inside my graph. The result: not only did my new super-resolution code fail to reproduce state-of-the-art results, but code I had written four months earlier stopped working too. Even weirder, the super-resolution output itself was sometimes very good, and the network trained and ran, yet it never reached the target quality.
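The idea was to keep every preprocessing step inside the TensorFlow graph. A minimal sketch of that kind of pipeline, written against the modern tf.data API (the original code predates TF 2, so the exact ops differed; the file names and patch sizes here are made up for illustration):

```python
import tensorflow as tf

# Hypothetical file list; in a real pipeline this would come from the training set.
paths = tf.data.Dataset.from_tensor_slices(["img_0001.png", "img_0002.png"])

def load_and_preprocess(path):
    # Read, crop, and resize entirely inside the graph -- the exact kind of
    # op chain where the misalignment described below can creep in.
    img = tf.io.decode_png(tf.io.read_file(path), channels=3)
    hr = tf.image.random_crop(img, [128, 128, 3])   # high-res target patch
    lr = tf.image.resize(hr, [32, 32])              # low-res network input
    return lr, hr

dataset = paths.map(load_and_preprocess).batch(16)
```

Because `map` only traces the function, the pipeline builds fine and the bug shows up purely in the pixel data, not in shapes or errors.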

Debugging

What initially seemed like a small mistake led to 60 days of struggle and sleepless nights. My first assumption was that something was wrong with my network definition or training process. The data preprocessing looked fine, since I was getting meaningful results and could visually inspect the processed images. I tweaked everything I could find, trying Keras, Slim, and raw TensorFlow, and switching between different versions of TensorFlow and CUDA to see what changed. I'm ashamed to admit my later suspicions, which involved GPU memory faults and static electricity. I adjusted the perceptual loss and the style loss, hunting for the cause. Each iteration required a few days of retraining to produce meaningful data.

I found the error yesterday while looking at TensorBoard. Almost subconsciously suspecting that something was wrong with the images themselves, I ignored the network output and overlaid the target image and the input image in Photoshop, and here's what I got:

It looked strange: the image was shifted a little. It defied all logic. It couldn't be true! My code was dead simple: read the image, crop the image, resize the image, all in TensorFlow.
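The overlay check can also be done without Photoshop. As a sanity test, here is a small NumPy helper (my own sketch, not from the original pipeline) that brute-forces the integer shift between two images by rolling one over the other:

```python
import numpy as np

def shift_between(a, b, max_shift=3):
    """Return the (dy, dx) integer shift of `a` that best matches `b`,
    found by brute force over small offsets (wrap-around via np.roll)."""
    best_err, best_shift = None, None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.abs(np.roll(a, (dy, dx), axis=(0, 1)).astype(float) - b).mean()
            if best_err is None or err < best_err:
                best_err, best_shift = err, (dy, dx)
    return best_shift

# Synthetic check: shift an image by one pixel and recover the offset.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
shifted = np.roll(img, (1, 1), axis=(0, 1))
print(shift_between(img, shifted))  # (1, 1)
```

Run against an input/target pair from a broken pipeline, a helper like this reports a consistent non-zero offset.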

Anyway, RTFM: the resize function has an align_corners parameter. Would you like your image to actually stay aligned when you downscale it, instead of drifting? You can opt in! It turns out this function has had this very strange behavior for a long time; read the thread. The maintainers couldn't fix it, because changing the default would break a lot of old code and pre-trained networks.
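As I understand the discussion in that thread, the whole issue comes down to how an output pixel index is mapped back to a source coordinate. A small NumPy sketch of the three conventions, nearest-neighbor in 1-D, with sizes chosen purely for illustration:

```python
import numpy as np

def src_index(i, n_src, n_dst, mode):
    """Map output index i back to a source index under three conventions."""
    if mode == "legacy":        # old TF default (align_corners=False): no half-pixel offset
        s = i * n_src / n_dst
    elif mode == "corners":     # align_corners=True: endpoints map to endpoints
        s = i * (n_src - 1) / (n_dst - 1)
    elif mode == "half_pixel":  # pixel centers align -- what PIL/OpenCV do
        s = (i + 0.5) * n_src / n_dst - 0.5
    return int(np.clip(np.floor(s + 0.5), 0, n_src - 1))

# Downscale 8 samples to 4: which source pixels get picked?
for mode in ("legacy", "corners", "half_pixel"):
    print(mode, [src_index(i, 8, 4, mode) for i in range(4)])
```

The legacy mapping picks indices 0, 2, 4, 6, always sampling from the leading edge of each block, while the half-pixel convention picks the centered 1, 3, 5, 7. In 2-D that leading-edge bias is exactly the left-and-up drift the overlay revealed.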

With the default settings, this code actually shifts your image one pixel to the left and up. The thread shows that even the interpolation itself is broken in TensorFlow. This is an actual scaling result from TensorFlow:
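To see concretely what broken interpolation looks like, here is a tiny 1-D bilinear resize written in NumPy under the two coordinate conventions. This is my own reconstruction of the behavior described in the thread, not TensorFlow's actual code:

```python
import numpy as np

def resize_1d(x, n_out, half_pixel):
    """Linearly interpolate 1-D signal x to n_out samples."""
    n = len(x)
    out = np.empty(n_out)
    for i in range(n_out):
        if half_pixel:                   # PIL/OpenCV-style: align pixel centers
            s = (i + 0.5) * n / n_out - 0.5
        else:                            # legacy TF-style: no half-pixel offset
            s = i * n / n_out
        s = min(max(s, 0.0), n - 1)      # clamp at the borders
        lo = int(np.floor(s))
        hi = min(lo + 1, n - 1)
        w = s - lo
        out[i] = (1 - w) * x[lo] + w * x[hi]
    return out

# Upsampling a symmetric 2-sample signal should stay symmetric:
print(resize_1d(np.array([0.0, 10.0]), 4, half_pixel=False))  # lopsided: 0, 5, 10, 10
print(resize_1d(np.array([0.0, 10.0]), 4, half_pixel=True))   # symmetric: 0, 2.5, 7.5, 10
```

The legacy mapping duplicates the last sample and drags everything toward index 0; in 2-D that is exactly a shift toward the top-left corner.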

Stick to SciPy/OpenCV/NumPy/PIL, whichever image manipulation library you prefer. The second I made that change, my network worked like a charm (well, actually the next day, when I saw the training results).
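For the record, here is a sketch of the kind of preprocessing this amounts to, doing the crop and resize in Pillow/NumPy before the data ever reaches TensorFlow. The function name and patch sizes are illustrative, not the exact production code:

```python
import numpy as np
from PIL import Image

def make_pair(img, hr_size=128, scale=4):
    """Crop a high-res patch and produce its low-res counterpart with PIL."""
    hr = img.crop((0, 0, hr_size, hr_size))   # fixed crop, for the sketch
    lr = hr.resize((hr_size // scale, hr_size // scale), Image.BILINEAR)
    return np.asarray(lr), np.asarray(hr)

# Synthetic example: an 8-bit grayscale gradient image.
img = Image.fromarray(np.tile(np.arange(256, dtype=np.uint8), (256, 1)))
lr, hr = make_pair(img)
print(lr.shape, hr.shape)  # (32, 32) (128, 128)
```

The resulting NumPy arrays can then be fed to the network as plain tensors, keeping the aliased resize op out of the loop entirely.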

Original article by Oleksandr Savsunenko; translated and edited for republication.