The goal

I’d never touched DeepFakes before; I just wanted to post a video to Bilibili. Trying it out was a bit of a hassle, so here’s a record of the pitfalls I hit along the way.

The goal of this article is to swap the face in a video of The Singing Trump with that of our comrade Chuan Jianguo.

Final effect:

Video link: www.bilibili.com/video/BV12p…

The environment

This article uses a Linux server environment, because it runs relatively fast there.

Python environment: Anaconda, Python 3.7

GPU: K80, 12 GB video memory

DeepFake version: 2.0

Other tools: FFmpeg

Material preparation

First, you need to prepare one or more videos of The Singing Trump, as well as some videos of Mr. Chuan Jianguo, to use for the face swap.

Video segmentation

First, use FFmpeg to cut the video material into individual frames.

mkdir output
ffmpeg -i your_video.mp4 -r 2 output/video-frame-t-%d.png

The input doesn’t have to be MP4; other formats work too. The -r 2 option sets the output to 2 frames per second, so a 60-second clip yields about 120 images; adjust it for your own video. The frames are written to the output folder, and the filename prefix can be whatever you like; the name isn’t important.

It’s better to gather a few videos here, because DeepFake will tell you to make sure you have more than 200 faces, so I used three videos of each person, six in total.

ffmpeg -i sing_trump1.mp4 -r 2 sing_trump_output/st1-%d.png
ffmpeg -i sing_trump2.flv -r 2 sing_trump_output/st2-%d.png
ffmpeg -i sing_trump3.mp4 -r 2 sing_trump_output/st3-%d.png
ffmpeg -i trump1.webm -r 2 trump_output/t1-%d.png
ffmpeg -i trump2.mp4 -r 2 trump_output/t2-%d.png
ffmpeg -i trump3.mp4 -r 2 trump_output/t3-%d.png
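One thing to watch: FFmpeg won’t create output folders for you, so make them before running the commands above, otherwise the commands fail immediately:

mkdir -p sing_trump_output trump_output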

The frames take up quite a bit of space; they add up to about 3.7 GB.

Clone the code + install dependencies

Not much to say here; the code comes from GitHub.

git clone https://github.com/deepfakes/faceswap.git

Then install the dependencies according to your own situation. I installed the CPU version on my PC, and the NVIDIA (GPU) version on the server.
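For reference, a minimal sketch of what the install looks like; the requirements file name is an assumption based on the repo layout at the time, so check the repo’s INSTALL.md for the authoritative steps:

cd faceswap
# on the CPU-only PC (file name assumed; see the repo's install docs):
pip install -r requirements.txt
# on the GPU server, a GPU build of TensorFlow is needed as well
pip install tensorflow-gpu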

Extract the faces

Now I’m going to pull all the faces out.

python3 faceswap.py extract -i trump_output -o trump_output_face
python3 faceswap.py extract -i sing_trump_output -o sing_trump_output_face
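To check whether you’ve cleared the 200-face threshold mentioned earlier, counting the extracted images is enough; this is plain shell, nothing faceswap-specific:

# count the extracted faces for each side
ls trump_output_face | wc -l
ls sing_trump_output_face | wc -l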

Here’s what the extracted faces look like when it’s done.

Screening the faces

The next step is to manually delete all the faces we don’t need.

Modify the alignments

When extract runs to generate the faces, an alignments file is automatically created that records where each face sits in the original frame. After deleting the unwanted faces, you need to update this file so it matches the faces that remain.

Here you can open the GUI tool:

python3 faceswap.py gui

Then select Alignments under Tools.

Then select Remove-Faces, and enter the alignments file path, the faces folder path, and the original frames path.

Then click the green Start button to run it.

Then do the same for sing_trump_output.
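If you’d rather skip the GUI, the same Alignments job can be run from the command line via tools.py. The flags below are my assumption from the tool’s help output, so confirm them with python3 tools.py alignments -h:

# remove deleted faces from the alignments file (flags assumed; verify with -h)
python3 tools.py alignments -j remove-faces -a trump_output/alignments.fsa -fc trump_output_face
python3 tools.py alignments -j remove-faces -a sing_trump_output/alignments.fsa -fc sing_trump_output_face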

Start training

Next, you can start training. The -m parameter specifies where to save the model.

python3 ./faceswap.py train -A sing_trump_output_face -ala sing_trump_output/alignments.fsa -B trump_output_face -alb trump_output/alignments.fsa  -m model
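Two optional flags worth knowing about, assuming the flag names in faceswap’s help (check python3 ./faceswap.py train -h): -p opens a live preview window, and -bs lowers the batch size if the 12 GB K80 runs out of memory. A sketch:

# same command with a preview window and a smaller batch size (flag names assumed)
python3 ./faceswap.py train -A sing_trump_output_face -ala sing_trump_output/alignments.fsa \
    -B trump_output_face -alb trump_output/alignments.fsa -m model -p -bs 32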

A small problem

A note if you’re using a GPU: I found that TensorFlow 2.2 requires CUDA 10.1+, which I couldn’t install here, so I had to use TensorFlow 1.14 or 1.15, which in turn requires DeepFake 1.0.
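So on the server I pinned the older stack instead. A sketch, assuming the 1.0 release is tagged v1.0 (list the repo’s tags to find the real name):

pip install tensorflow-gpu==1.15
cd faceswap
git tag             # list the available release tags
git checkout v1.0   # tag name assumed; use whichever 1.0 tag actually exists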

Github.com/deepfakes/f…

Training screenshots

I found that faceswap 1.0 and the master branch operate the same way; not much has changed.

My speed here is about 100 steps in two minutes.

Convert video

Prepare video frames

First, prepare the video we want to convert and slice it into frames. Unlike before, don’t reduce the frame rate this time, because we need every frame of the video.

ffmpeg -i sing_trump2.flv input_frames/video-frame-%d.png

My video here is 1 minute and 41 seconds.

The conversion produced about 3050 images, which works out to roughly 30 frames per second, and 7.1 GB of files (a bit painful on a 256 GB Mac).
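Rather than inferring the frame rate from the image count, you can ask ffprobe directly:

# print the average frame rate of the source video
ffprobe -v error -select_streams v:0 -show_entries stream=avg_frame_rate -of default=noprint_wrappers=1 sing_trump2.flv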

Align the faces again

Next, we need to do one more round of face alignment for the frames we’re converting, starting by extracting the faces.

python3 faceswap.py extract -i input_frames -o input_frames_face

Then delete the extra faces and, as before, use the GUI tool’s Remove-Faces job to fix up the alignments.

Perform AI face swap for each frame

Run the convert command to perform the swap:

python3 faceswap.py convert -i input_frames/ -o output_frames -m model/

My speed here was about one image per second, but only 600-odd frames actually contain a face; if faces appeared more densely, I guess it wouldn’t be this fast. Converting all the images took a little over 5 minutes (other programs were running on the GPU at the time, so it might be a bit faster normally).

The effect

After 20 minutes of training

After 1200 steps, it looks like this. It doesn’t look very good, but it’s interesting.

After an hour of training

After a day of training

Combine the images into a video

Finally, FFmpeg combines the images back into a video.

ffmpeg -i output_frames/video-frame-%d.png -vcodec libx264 -r 30  out.mp4

I found that the merged video comes out at 2 minutes instead of 1:41, but the impact isn’t big, since we still need to edit the result in Premiere or similar software anyway.
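The extra length has a likely explanation: when FFmpeg reads an image sequence, it assumes 25 fps input unless told otherwise, and the output -r 30 then duplicates frames, stretching 3050 frames to 3050 / 25 ≈ 122 seconds. Setting the input frame rate to match the source should restore the original 1:41:

# tell FFmpeg the input sequence is 30 fps instead of the 25 fps default
ffmpeg -framerate 30 -i output_frames/video-frame-%d.png -vcodec libx264 out.mp4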

Conclusion

Looking at the video, you can see that faceswap didn’t recognize the face when it was small, so those frames weren’t swapped, which is a bit of a shame.

Personally, I feel the most time-consuming part of the whole DeepFake process is removing the unwanted faces.