This is the 15th day of my participation in the August More Text Challenge. For details, see: August More Text Challenge.

One, foreword

A few days ago I wrote a blog post about implementing a background-swapping effect. It felt a little unsatisfying: simply changing backgrounds does not have many application scenarios. So today I will implement a more complex effect, the "shadow clone". Now let's welcome the star of the show, Kun, performing his famous "chicken, you are too beautiful" for us. As for the implementation principle, there is no essential difference from the last article; it is also frame-by-frame processing, but here it is described in more detail.

Two, the implementation principle

First we need to prepare a video to use as our material. We then extract its images frame by frame and use PaddleHub to cut the portrait out of each frame. That gives us our subject and his doppelgängers. Finally, we process each image while writing the video: I simply paste two copies of the character onto the original frame, and the final composited video is the effect shown above. Of course we also need audio, so at the end we read the audio from the original video and mux it with the new video. The process breaks down into the following steps:

  1. Extract images frame by frame
  2. Batch matting
  3. Composite the images (shadow clones)
  4. Write the video
  5. Read the audio
  6. Mux audio and video

In the end we get a complete video.

Three, module installation

For convenience, we install everything with pip:

pip install pillow
pip install opencv-python
pip install moviepy
# install paddlepaddle
python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
# install paddlehub
pip install -i https://mirror.baidu.com/pypi/simple paddlehub

Also don’t nonsense, if the installation process out of any problems can be baidu or contact the blogger, I will try to answer, after all, I am just a vegetable chicken.

Four, code implementation

Let's first look at the imported modules:

import cv2
import os
import math
import numpy as np
from PIL import Image
import paddlehub as hub
from moviepy.editor import *

Let’s follow the steps above, step by step.

4.1 Image extraction frame by frame

We need to use OpenCV; the code is as follows:

def getFrame(video_name, save_path):
    """Extract the frames of video_name and save them as images under save_path."""
    # Read the video
    video = cv2.VideoCapture(video_name)
    # Get the frame rate
    fps = video.get(cv2.CAP_PROP_FPS)
    # Get the frame size
    width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
    size = (width, height)
    # Get the total frame count and pick a starting name with one more digit
    frame_num = str(int(video.get(cv2.CAP_PROP_FRAME_COUNT)))
    name = int(math.pow(10, len(frame_num)))
    ret, frame = video.read()
    while ret:
        cv2.imwrite(save_path + str(name) + '.jpg', frame)
        ret, frame = video.read()
        name += 1
    video.release()
    return fps, size

The only thing to note here is that OpenCV needs to be at least version 3.0; compatibility issues may arise with older versions.
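One detail in getFrame worth unpacking is the starting value of name. A small sketch (with a hypothetical frame count) shows why starting the counter at a power of ten keeps the filenames sortable:

```python
import math

# Sketch: starting the counter at 10 ** len(str(frame_count)) gives every
# frame a filename with the same number of digits, so a plain lexicographic
# sort of the directory listing matches the original frame order.
frame_count = 250                                   # hypothetical total frames
start = int(math.pow(10, len(str(frame_count))))    # 10 ** 3 == 1000
names = [str(start + i) + '.jpg' for i in range(frame_count)]
```

Without this trick, '10.jpg' would sort before '2.jpg' and the frames would be written back out of order.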

4.2 Batch picture matting

Batch matting uses a model from the PaddleHub model library, and the matting itself only takes a few lines of code:

def getHumanseg(frames):
    """Run matting on all images under the frames path."""
    # Load the model
    humanseg = hub.Module(name='deeplabv3p_xception65_humanseg')
    # Collect the files under the path
    files = [frames + i for i in os.listdir(frames)]
    # Matting
    humanseg.segmentation(data={'image': files})

Calling this method generates a humanseg_output directory, which is where the matted images end up.

4.3 Composite image (shadow clones)

Here we use the Pillow module, which provides a paste function for images:

def setImageBg(humanseg, bg_im):
    """Paste the matted portrait onto bg_im twice and return an OpenCV image.

    :param humanseg: path of the matted (transparent) portrait image
    :param bg_im: the original frame as a PIL Image
    :return: the composited frame as a BGR ndarray
    """
    # Read the transparent image
    im = Image.open(humanseg)
    # Split the color channels
    r, g, b, a = im.split()
    # Paste one avatar on the right of the frame
    bg_im.paste(im, (bg_im.size[0] // 3, 0), mask=a)
    # Paste one avatar on the left of the frame
    bg_im.paste(im, (-bg_im.size[0] // 3, 0), mask=a)
    # Convert to a BGR array that OpenCV can read and return it
    return np.array(bg_im.convert('RGB'))[:, :, ::-1]

The main work here is done by paste: the alpha channel serves as the mask, so only the opaque portrait pixels are copied onto the frame.
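As a standalone illustration of how the mask works, here is a minimal sketch with two tiny in-memory images (the sizes and colors are made up for the example):

```python
from PIL import Image

# A green 4x4 background and an opaque red 2x2 foreground patch
bg = Image.new('RGB', (4, 4), (0, 255, 0))
fg = Image.new('RGBA', (2, 2), (255, 0, 0, 255))
# Use the alpha channel as the paste mask, as setImageBg does
a = fg.split()[3]
bg.paste(fg, (1, 1), mask=a)  # only pixels where the mask is opaque are copied
```

Where the mask is fully transparent, the background shows through, which is exactly how the matted portrait keeps its irregular outline.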

4.4 Writing a video

Writing to the video is also done by OpenCV:

def writeVideo(humanseg_path, frames, fps, size):
    """Composite each frame with its matted portraits and write the video."""
    # Create the video writer
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter('green.mp4', fourcc, fps, size)

    # Sort the file lists so each matted image lines up with its frame
    humanseg = sorted(humanseg_path + i for i in os.listdir(humanseg_path))
    frames = sorted(frames + i for i in os.listdir(frames))
    for i in range(len(humanseg)):
        # Read the original frame
        bg_im = Image.open(frames[i])
        # Paste the avatars
        im_array = setImageBg(humanseg[i], bg_im)
        # Write the frame to the video
        out.write(im_array)
    out.release()
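As an aside, the 'mp4v' string passed to VideoWriter_fourcc is nothing magical: a FourCC is just four characters packed into a 32-bit integer. A sketch of the packing (the helper name is my own):

```python
# Pack four characters into a little-endian 32-bit FourCC code,
# which is what cv2.VideoWriter_fourcc(*'mp4v') computes.
def fourcc(c1, c2, c3, c4):
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)

code = fourcc(*'mp4v')
```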

We now have a video, but it has no sound yet, so we need moviepy to mux the audio back in.

4.5 Mixed stream

For the mixed stream we first get the audio and then mux the two streams; the audio we need is simply read from the original video:

def getMusic(video_name):
    """Get the audio of the specified video."""
    # Read the video file
    video = VideoFileClip(video_name)
    # Return its audio
    return video.audio

VideoFileClip is a video-processing class in moviepy. Now let's add the music:

def addMusic(video_name, audio):
    """Mux the streams: add audio to video_name."""
    # Read the video
    video = VideoFileClip(video_name)
    # Set the video's audio
    video = video.set_audio(audio)
    # Save the new video file
    video.write_videofile(output_video)

output_video is a variable we define to hold the save path. Note that the full path (directory + name) must not be the same as the original video's.

4.6 Achieving the special effect

That is, putting the whole process together:

def changeVideoScene(video_name):
    """Apply the shadow-clone effect to the given video.

    :param video_name: path of the source video
    """
    # Extract every frame of the video
    fps, size = getFrame(video_name, frames)

    # Batch matting
    getHumanseg(frames)

    # Composite and write the frames to a video
    writeVideo(humanseg_path, frames, fps, size)

    # Mux in the audio
    addMusic('green.mp4', getMusic(video_name))

A few of the variables above have not been defined yet, so let's define them in main:

if __name__ == '__main__':

    # Root directory of the current project
    BASE_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "."))
    # Directory where the extracted frames are saved
    frames = BASE_DIR + '\\frames\\'
    # Directory of the matted images (generated by PaddleHub)
    humanseg_path = BASE_DIR + '\\humanseg_output\\'
    # Path of the final video
    output_video = BASE_DIR + '\\result.mp4'

    # Create the frames folder
    if not os.path.exists(frames):
        os.makedirs(frames)
    # Add the effect to the video
    changeVideoScene('jntm.mp4')

This completes our special effect. Interested readers can refer to the related blog post at blog.csdn.net/ZackSock/ar… You can also follow my personal public account: ZackSock.