Overnight, everyone's Moments feed filled up with "Ant Hey" (蚂蚁呀嘿) face-animation videos, and netizens began to worry… "Don't panic, we have ModelArts!" Yes, with ModelArts we can make these videos ourselves, with no fear of "someone copying my face" through a third-party app and no giant watermark. Do note, however, that animating someone else's face without permission is a real legal risk! This article shows how to use the one-stop AI development platform to generate an "Ant Hey" short video with foolproof, step-by-step operations.

Environment preparation

This walkthrough is based on a cloud vendor's Notebook platform (ModelArts) and its object storage service (OBS).

Model and material preparation

This implementation uses the First Order Motion Model for image animation, an image-animation method based on keypoints and local affine transformations. Paper address: arxiv.org/abs/2003.00…
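For intuition, here is my rough paraphrase of the paper's core idea (see the link above for the exact formulation): the model detects keypoints $p_k$ in the source image $S$ and each driving frame $D$, and approximates the motion around each keypoint by a first-order Taylor expansion relative to an abstract reference frame $R$:

$$\mathcal{T}_{S \leftarrow D}(z) \approx \mathcal{T}_{S \leftarrow R}(p_k) + J_k\big(z - \mathcal{T}_{D \leftarrow R}(p_k)\big)$$

The Jacobian $J_k$ is what carries the local affine transformation; a dense-motion network then fuses these local motions into a full warp field that drives the generator.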

Download the pre-trained model and materials

I'm sorry that, being a bit short on funds recently, I can't provide an OBS path for direct download. I have uploaded the pre-trained model and materials to an AI Gallery dataset instead; please download them to your own OBS. Of course, if you have a faster download address, you're welcome to share it! Source address: drive.google.com/drive/folde… or drive.google.com/drive/folde… Since those are the upstream source files, they do not contain the original "Ant Hey" video material; I have added it to the AI Gallery dataset, and if you still need it you can contact me directly. Welcome to join the ModelArts developer community; friends in the Guangzhou area can join us in building MDG Guangzhou Station!

  • AI Gallery is a developer ecosystem community built on top of ModelArts, offering sharing and trading of models, algorithms, HiLens skills, datasets and more, so you can download shared datasets or files to your OBS (please follow the corresponding policies and rules when doing so!). Open 🔗Marketplace.huaweicloud.com/markets/aih…, click the Download button, enter the download details page, set the OBS path, and confirm to download the model and materials into your own OBS; my path, for example, is /modelarts-lab/first-order-motion-model. Download progress can be checked in AI Gallery under Personal Center – My Downloads.
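    Once the download finishes, a quick way to verify that everything actually landed in your bucket is to list the OBS directory from a notebook cell with Moxing. A minimal check, assuming your Moxing version exposes mox.file.list_directory (mine did), with my bucket path as a placeholder:

    import moxing as mox

    # List what Moxing sees in the OBS directory (replace with your own path)
    print(mox.file.list_directory('obs://modelarts-lab/first-order-motion-model'))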

JUST DO IT — ModelArts My Notebook

Let's start with ModelArts – My Notebook. It is an out-of-the-box online integrated development environment that makes it easy to build, train, debug, and deploy machine learning algorithms and models. A free flavor is currently available for trial use. Note that if the instance is not used for 72 hours, its resources are released, so pay attention to backing up your files. The Notebook can be used for free; just remember to choose the GPU environment!

When we use My Notebook, the CPU environment is enabled by default, so you need to switch to the GPU environment. At present, ModelArts – My Notebook supports 8 vCPU + 64 GiB + 1 x Tesla V100-PCIE-32GB.

Create a new Pytorch 1.0 .ipynb file, and our "Ant Hey" journey begins.
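Before anything else, it is worth confirming that the kernel really landed on the GPU flavor. A quick PyTorch check for the first cell (standard torch API, nothing ModelArts-specific):

    import torch

    # Should print True and the Tesla V100 device name on the GPU flavor
    print(torch.cuda.is_available())
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))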

  • Download the code

    ! git clone https://github.com/AliaksandrSiarohin/first-order-model
    # or, from my CodeHub mirror:
    ! git clone https://codehub.devcloud.cn-north-4.huaweicloud.com/ai-pome-free00001/first-order-model.git

    GitHub can be slow from the cloud environment, so I recommend mirroring the GitHub repo to Huawei Cloud's code hosting platform, CodeHub; the second address above is the mirror I cached, and I won't cover migrating GitHub code to CodeHub here. (I can't guarantee my account will stay available, so please get the code into your Notebook in whatever way works for you!)
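    If the clone itself is the slow part, a shallow clone that skips the commit history usually speeds things up (plain git, nothing platform-specific):

    # --depth 1 fetches only the latest snapshot, not the full history
    ! git clone --depth 1 https://github.com/AliaksandrSiarohin/first-order-model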

  • Use Moxing to copy files into JupyterLab. Copy the model and materials you downloaded to OBS through Moxing, replacing the OBS paths below with your own; 02.mp4 is the "Ant Hey" template video. (A quick sanity check follows the commands below.)

    # Download files from OBS with Moxing
    import moxing as mox
    
    # Replace your OBS address here
    mox.file.copy_parallel('obs://modelarts-lab/first-order-motion-model/first-order-motion-model-20210226T075740Z-001.zip', 'first-order-motion-model.zip')
    mox.file.copy_parallel('obs://modelarts-lab/first-order-motion-model/02.mp4', '02.mp4')
    # Decompress the pre-trained models and materials
    ! unzip first-order-motion-model.zip
    # Move the template video into the material directory
    ! mv 02.mp4 first-order-motion-model/
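    To confirm the unzip and move landed as expected, a simple directory listing does the trick:

    # Should show the pre-trained checkpoints plus the 02.mp4 template video
    ! ls first-order-motion-model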

  • When the preparation is done, JUST DO IT! Switch to the first-order-model directory and replace the path in source_image_path with the path to "your face". A photo of your face can be uploaded directly through the Notebook's file-upload feature. Of course, you can also replace the default "Ant Hey" video with your own video in mp4 format. Run the cells straight through to see a preview of the composite.

    cd first-order-model
    import imageio
    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.animation as animation
    from skimage.transform import resize
    from IPython.display import HTML
    import warnings
    warnings.filterwarnings("ignore")
    
    # Replace this with the path to your image; 256*256 works best. The default here is a photo of Putin
    source_image_path = '/home/ma-user/work/first-order-motion-model/02.png'
    source_image = imageio.imread(source_image_path)
    
    # Replace this with the path to your video; the default here is the "Ant Hey" template
    reader_path = '/home/ma-user/work/first-order-motion-model/02.mp4'
    reader = imageio.get_reader(reader_path)
    
    
    # Resize images and videos to 256x256
    
    source_image = resize(source_image, (256, 256))[..., :3]
    
    fps = reader.get_meta_data()['fps']
    driving_video = []
    try:
        for im in reader:
            driving_video.append(im)
    except RuntimeError:
        pass
    reader.close()
    
    driving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_video]
    
    def display(source, driving, generated=None):
        fig = plt.figure(figsize=(8 + 4 * (generated is not None), 6))
    
        ims = []
        for i in range(len(driving)):
            cols = [source]
            cols.append(driving[i])
            if generated is not None:
                cols.append(generated[i])
            im = plt.imshow(np.concatenate(cols, axis=1), animated=True)
            plt.axis('off')
            ims.append([im])
    
        ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=1000)
        plt.close()
        return ani
    
    
    HTML(display(source_image, driving_video).to_html5_video())

  • Create a model and load the checkpoints

    Once this is done, we'll have the "Ant Hey" video, generated.mp4. And that's it? Well, not quite; there's one more catch below…
    from demo import load_checkpoints
    generator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml', 
                                checkpoint_path='/home/ma-user/work/first-order-motion-model/vox-cpk.pth.tar')
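    If you ended up on the CPU flavor after all, the version of demo.py I used also accepted a cpu flag in load_checkpoints; treat this as an assumption and check your local demo.py signature before relying on it:

    # CPU fallback (much slower); only if your demo.py exposes the cpu argument
    generator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml',
                                checkpoint_path='/home/ma-user/work/first-order-motion-model/vox-cpk.pth.tar',
                                cpu=True)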
    from demo import make_animation
    from skimage import img_as_ubyte
    
    predictions = make_animation(source_image, driving_video, generator, kp_detector, relative=True)
    
    # Save the resulting video
    imageio.mimsave('../generated.mp4', [img_as_ubyte(frame) for frame in predictions], fps=fps)
    # generated.mp4 is saved one level up, under /home/ma-user/work/
    
    HTML(display(source_image, driving_video, predictions).to_html5_video())
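    A note on relative=True: it transfers the relative motion of the driving video onto the source face, which is why an arbitrary photo works. If your own driving video's first frame is aligned with the source photo, you can experiment with absolute keypoint transfer; the make_animation in the repo version I used also exposed an adapt_movement_scale switch (again, verify against your demo.py):

    # Absolute keypoint transfer; usually only looks right with an aligned first frame
    predictions_abs = make_animation(source_image, driving_video, generator, kp_detector,
                                     relative=False, adapt_movement_scale=True)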

  • If you download and open the generated file, generated.mp4, you may be as confused as I was: where is the sound? Yes, the sound is missing, because the core code only processes images; we have to put the sound back ourselves, and for that we use MoviePy. Not only that, we can also use it to watermark the video.

    • Install MoviePy and get ready for video editing

    # Install MoviePy
    ! pip install moviepy

    • Synthesize the background music

    # Add the source video's audio track to the generated video
    from moviepy.editor import *
    
    videoclip_1 = VideoFileClip(reader_path)
    videoclip_2 = VideoFileClip("../generated.mp4")
    
    audio_1 = videoclip_1.audio
    
    videoclip_3 = videoclip_2.set_audio(audio_1)
    
    videoclip_3.write_videofile("../result.mp4", audio_codec="aac")
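    MoviePy keeps an ffmpeg reader process open behind each clip, so it is good hygiene to close them once the file has been written:

    # Release the file handles held by the clips
    videoclip_1.close()
    videoclip_2.close()
    videoclip_3.close()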

    • Others pay to remove watermarks; I insist on adding one

    # You can also add a watermark to the video
    video = VideoFileClip("../result.mp4")
    # Please upload your own watermark image
    logo = (ImageClip("/home/ma-user/work/first-order-motion-model/water.png")
            .set_duration(video.duration)  # Watermark duration (match the video)
            .resize(height=50)  # Watermark height; width scales proportionally
            .margin(right=0, top=0, opacity=1)  # Watermark margins and opacity
            .set_pos(("left", "top")))  # Watermark position
    
    final = CompositeVideoClip([video, logo])
    final.write_videofile("../result_water.mp4", audio_codec="aac")
    
    final_reader = imageio.get_reader("../result_water.mp4")
    
    fps = final_reader.get_meta_data()['fps']
    result_water_video = []
    try:
        for im in final_reader:
            result_water_video.append(im)
    except RuntimeError:
        pass
    final_reader.close()
    result_water_video = [resize(frame, (256, 256))[..., :3] for frame in result_water_video]
    HTML(display(source_image, driving_video, result_water_video).to_html5_video())
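    Finally, remember the 72-hour warning from earlier: back your results up to OBS before the free Notebook is reclaimed. Moxing does it in one line; the source path assumes you cloned under /home/ma-user/work as above, and the destination is my bucket path, so point it at your own:

    import moxing as mox

    # Copy the finished video from the Notebook disk back to OBS
    mox.file.copy('/home/ma-user/work/result_water.mp4',
                  'obs://modelarts-lab/first-order-motion-model/result_water.mp4')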

That's as far as this implementation goes for now. I haven't yet explored a "multi-player" solution for group photos; you're welcome to share pointers in the comments. Thank you!
