Green screens are a powerful tool for cutting out subjects and replacing backgrounds in film and television. But if you can't shoot in front of a green screen, can the background still be replaced seamlessly? Researchers at the University of Washington recently posted a paper showing how to replace a video's background without one, effectively turning the whole world into your green screen.
Heart of Machine report. Contributors: Racoon, Zhang Qian.
- Paper: https://arxiv.org/pdf/2004.00626.pdf
- Code: https://github.com/senguptaumd/Background-Matting
Matting errors typically fall into a few cases:

- background around fingers, arms, and hair gets copied into the matte;
- image segmentation fails outright;
- large parts of the foreground are close in color to the background;
- the image and the captured background are misaligned.
The main idea of the method is that errors in the estimated matte produce visible distortions once the subject is composited onto a new background. For example, a bad matte may retain parts of the original background, so compositing copies content from the old background into the new one. The authors therefore train an adversarial discriminator to distinguish composite images from real ones, which pushes the matting network toward cleaner mattes.
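As a rough illustration of that idea, here is a minimal PyTorch sketch, not the authors' code: alpha-composite the subject onto a new background, then apply an adversarial loss on the composite. The discriminator, the tensor names, and the non-saturating GAN loss are all assumptions.

import torch
import torch.nn.functional as F

def composite(fg, bg, alpha):
    # Standard alpha compositing: matte errors that leak old background
    # into fg/alpha become visible artifacts in the composite.
    return alpha * fg + (1.0 - alpha) * bg

def adversarial_matting_loss(discriminator, fg, alpha, new_bg):
    fake = composite(fg, new_bg, alpha)
    # Train the matting network so that the discriminator labels its
    # composites as real (one possible choice of GAN loss).
    logits = discriminator(fake)
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))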
To try it yourself, first clone the repository:

git clone https://github.com/senguptaumd/Background-Matting.git
Create and activate a conda environment:

conda create --name back-matting python=3.6
conda activate back-matting

Make sure CUDA 10.0 is on the library and binary paths:

export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64
export PATH=$PATH:/usr/local/cuda-10.0/bin

Install the dependencies:

conda install pytorch=1.1.0 torchvision cudatoolkit=10.0 -c pytorch
pip install tensorflow-gpu==1.14.0
pip install -r requirements.txt
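Before running anything, a quick sanity check, not part of the repo, can confirm that the pinned frameworks import and that the GPU is visible:

import torch
import tensorflow as tf

# Expect PyTorch 1.1.0 with CUDA available and TensorFlow 1.14.0.
print("PyTorch", torch.__version__, "CUDA available:", torch.cuda.is_available())
print("TensorFlow", tf.__version__)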
The inputs need to include:

- an image with the person (suffix _img.png);
- an image of the same scene without the person (suffix _back.png);
- the target background image to insert the person into (stored in the data/background folder).

A quick check of this layout is sketched right after this list.
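Here is a small hypothetical helper to verify that layout. The file-name conventions follow the list above, and the default directories match the sample data used later; the function itself is not part of the repo.

from pathlib import Path

def check_inputs(input_dir="sample_data/input", bg_dir="sample_data/background"):
    # Every *_img.png should have a matching *_back.png next to it.
    inputs = sorted(Path(input_dir).glob("*_img.png"))
    for img in inputs:
        back = img.with_name(img.name.replace("_img.png", "_back.png"))
        if not back.exists():
            print("missing background capture for", img.name)
    # At least one target background is needed for compositing.
    if not list(Path(bg_dir).glob("*.png")):
        print("no target backgrounds found in", bg_dir)
    print("found", len(inputs), "input image(s)")

check_inputs()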
Pre-processing

Segmentation

Background Matting needs a segmentation mask of the subject. The authors use the TensorFlow version of Deeplabv3+:
cd Background-Matting/
git clone https://github.com/tensorflow/models.git
cd models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
cd ../..
python test_segmentation_deeplab.py -i sample_data/input
With segmentation done, run Background Matting itself, pointing -tb at the target background:

python test_background-matting_image.py -m real-hand-held -i sample_data/input/ -o sample_data/output/ -tb sample_data/background/0001.png
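To try several target backgrounds, a hypothetical batch driver could loop over the background folder and invoke the same script. The script name and flags come from the command above; the loop, the per-background output folders, and everything else are assumptions.

import subprocess
from pathlib import Path

# Run the matting script once per target background found in the folder.
for bg in sorted(Path("sample_data/background").glob("*.png")):
    subprocess.run(
        ["python", "test_background-matting_image.py",
         "-m", "real-hand-held",
         "-i", "sample_data/input/",
         # Separate output folder per background to avoid overwriting results.
         "-o", f"sample_data/output_{bg.stem}/",
         "-tb", str(bg)],
        check=True,
    )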