How to colorize black-and-white photos with deep learning
Dear readers, early last month I wrote a tutorial on restoring old photos. Some readers said it did not fully cover what they wanted and asked me to put together another tutorial, this time on colorizing black-and-white photos; hence this article.
1. DeOldify introduction
DeOldify is a project for colorizing black-and-white photos, built on deep learning. Its source code is open source on GitHub and has earned 13.4K stars. Last year, a blogger made a video restoring 100-year-old footage of Beijing, using the DeOldify framework for the colorization. DeOldify was created to colorize black-and-white photos, but remarkably it can handle videos as well as still images.
The core network of DeOldify is a GAN. Compared with earlier colorization techniques, it has the following characteristics:
- 1. Artifacts in old photos are removed during colorization;
- 2. Faces in old photos come out with smoother skin after processing;
- 3. The rendering is more detailed and realistic.
After this brief introduction, here is a walkthrough of how to use it. The test environment for this article is as follows:
- OS: Windows 10;
- Python: 3.7.6;
- IDE: PyCharm;
- PyTorch: 1.7.0;
2. Environment configuration
2.1. Clone the project locally
There are two options:
- 1. Use the Git command:

```shell
git clone git@github.com:jantic/DeOldify.git
```

- 2. Click the Download ZIP option on the GitHub project page to download the project to your computer manually; the GitHub address is:

https://github.com/jantic/DeOldify
2.2. Download the weight file
DeOldify is built on deep learning, so it needs pre-trained weights. The project developer has uploaded the trained weights to the Internet, and we can use them directly without any further training.
The project uses three weight files in total:
- 1. Artistic weight: produces a bolder colorization effect. Download address:

https://data.deepai.org/deoldify/ColorizeArtistic_gen.pth

- 2. Stable weight: colorization is a bit more conservative than Artistic. Download address:

https://www.dropbox.com/s/usf7uifrctqw9rl/ColorizeStable_gen.pth

- 3. Video weight: used for colorizing video. Download address:

https://data.deepai.org/deoldify/ColorizeVideo_gen.pth
Once the weight files are downloaded, create a models folder in the project root directory and place the three downloaded weight files in it, as shown below.
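If you download the weights by hand it is easy to misplace or misname one, so a small check script can help. The sketch below is my own helper, not part of DeOldify; it lists which of the three weight files are still missing from the models folder (the file names and URLs are the ones given above):

```python
from pathlib import Path

# The three weight files and their download URLs, as listed above
WEIGHT_URLS = {
    "ColorizeArtistic_gen.pth": "https://data.deepai.org/deoldify/ColorizeArtistic_gen.pth",
    "ColorizeStable_gen.pth": "https://www.dropbox.com/s/usf7uifrctqw9rl/ColorizeStable_gen.pth",
    "ColorizeVideo_gen.pth": "https://data.deepai.org/deoldify/ColorizeVideo_gen.pth",
}

def missing_weights(models_dir="models"):
    """Return the names of weight files not yet present in models_dir."""
    root = Path(models_dir)
    return sorted(name for name in WEIGHT_URLS if not (root / name).exists())

if __name__ == "__main__":
    for name in missing_weights():
        print(f"missing: {name} -> download from {WEIGHT_URLS[name]}")
```

Run it from the project root; if it prints nothing, all three weights are in place.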
2.3. Install dependencies
Enter the following command on the terminal to install the third-party extension library required by the project.
```shell
pip install -r requirements.txt
```
After the third-party libraries are installed, note that this project is built on PyTorch, but the author does not include the torch dependency in requirements.txt. Make sure torch, torchvision, and related libraries are installed in your environment before starting the project.
If you want to use GPU acceleration later, CUDA, cuDNN, and other GPU acceleration packages also need to be installed in advance.
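Since torch and torchvision are not pulled in by requirements.txt, a quick sanity check before launching can save a confusing import error. This is my own sketch, not part of the project; it only tests whether the packages can be found in the current environment:

```python
import importlib.util

def installed(pkg):
    """Return True if the package can be imported in the current environment."""
    return importlib.util.find_spec(pkg) is not None

# torch and torchvision are not in requirements.txt, so check them explicitly
for pkg in ("torch", "torchvision"):
    status = "OK" if installed(pkg) else "MISSING - install it before running DeOldify"
    print(pkg, status)
```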
3. Start the project
Once the environment is configured, the next step is relatively simple: create a script as the main entry point to run the program.
3.1. Photo coloring
To colorize a photo, fill the main script with the following code block:

```python
from deoldify import device
from deoldify.device_id import DeviceId

# choices: CPU, GPU0 ... GPU7
device.set(device=DeviceId.GPU0)

from deoldify.visualize import *

plt.style.use('dark_background')
torch.backends.cudnn.benchmark = True

import warnings
warnings.filterwarnings("ignore", category=UserWarning, message=".*?Your .*? set is empty.*?")

colorizer = get_image_colorizer(artistic=True)

render_factor = 35
source_path = 'test_images/image.png'
result_path = None

colorizer.plot_transformed_image(path=source_path, render_factor=render_factor, compare=True)
```
A few parameters in the code deserve a brief introduction:
- device: specifies the device used for inference. Choose one of the following options according to your hardware:
```python
from enum import IntEnum

class DeviceId(IntEnum):
    GPU0 = 0
    GPU1 = 1
    GPU2 = 2
    GPU3 = 3
    GPU4 = 4
    GPU5 = 5
    GPU6 = 6
    GPU7 = 7
    CPU = 99
```

```python
device.set(device=DeviceId.GPU0)
```
- artistic: True enables Artistic mode; False enables Stable mode.

```python
colorizer = get_image_colorizer(artistic=True)
```
Artistic and Stable colorization differ because they use different weight files; Artistic colorization is a bit more aggressive. For example, on this lotus-leaf photo, the two modes give different results:
Artistic mode
Stable mode
- render_factor: specifies the rendering factor. The higher the value, the better the effect, but the more GPU memory is required.
- source_path: the input image path.
The colorized images are saved in the result_images folder in the project root. I downloaded several black-and-white images from the Internet and tested them with this framework; the results look good ~
Figure 2
Figure 3
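To make the output layout concrete: the colorized result keeps the source file's name but lands under result_images/ in the project root. The helper below is my own sketch of that convention (the folder name comes from this article, so treat the exact layout as an assumption rather than the library's documented API):

```python
from pathlib import Path

def result_path_for(source_path, results_dir="result_images"):
    """Derive the expected output path: same file name as the source,
    placed under result_images/ in the project root (assumed convention)."""
    return Path(results_dir) / Path(source_path).name

print(result_path_for("test_images/image.png"))
```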
3.2. Video coloring
To colorize a video, use the following code block:

```python
from deoldify import device
from deoldify.device_id import DeviceId

# choices: CPU, GPU0 ... GPU7
device.set(device=DeviceId.GPU0)

from deoldify.visualize import *

plt.style.use('dark_background')

import warnings
warnings.filterwarnings("ignore", category=UserWarning, message=".*?Your .*? set is empty.*?")

colorizer = get_video_colorizer()

# NOTE: max is 44 with 11GB video cards; 21 is a good default
render_factor = 21
file_name = 'my_video'  # name of your source video without the extension; set this yourself
file_name_ext = file_name + '.mp4'
result_path = None

colorizer.colorize_from_file_name(file_name_ext, render_factor=render_factor)
```
Refer to Section 3.1 for the parameters in this code. Compared with colorizing photos, video processing is more complicated: besides being time-consuming, it also has to account for the correlation between frames.
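The note in the video script ("max is 44 with 11GB video cards; 21 is a good default") suggests a rough rule of thumb for picking render_factor on smaller GPUs. The linear scaling below is my own assumption, not something from the project:

```python
def suggested_render_factor(gpu_mem_gb):
    """Heuristic: ~44 fits in 11 GB of video memory (per the DeOldify note),
    so scale linearly with available memory, clamped to a sane range.
    The linear scaling itself is an assumption."""
    return max(10, min(44, int(44 * gpu_mem_gb / 11)))

print(suggested_render_factor(11))  # 44
print(suggested_render_factor(6))   # 24
```

If the process runs out of GPU memory, lower render_factor and try again.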
Here, I used the you-get command to download a short clip of The Godfather (1972) from Bilibili (only one frame is shown here, because uploading videos to this platform is difficult; please forgive my laziness...).
After colorization, the result is as follows:
A closer look shows the colorized video does have some flaws, but overall it looks pretty good.
4. Obtain the project source code
For convenience, I have packaged the configured project files into an archive. After unpacking it, you only need to install the project dependencies; there is no need to configure the weight files again:

```shell
pip install -r requirements.txt
```
To obtain the project source code, follow the WeChat public account Xiaozhang Python and reply with the keyword 210501 in the account's chat.
5. Summary
I will not go into the project's colorization principles here; interested readers can read the source code for a deeper understanding. If you just want to apply this project to your own data, I think this article has covered everything clearly. I previously wrote an article on restoring old photos; in fact, the two can be combined to great effect ~
If this post has helped you, please give it a thumbs up.
That's all for this article. Thanks for reading, and see you next time!