Disclaimer: this blog post shares my own code test; please treat it as a reference only.

This article is excerpted from my image repair column; for more, see:

  • 🍊 Column: Image Repair – Code Environment Construction – Knowledge Summary

  • 🍊 Thanks to every reader for your support and recognition

  • 🍊 3D Photography Inpainting: environment construction and effect testing [CVPR 2020]


📔 Basic Information


  • 3D Photography using Context-aware Layered Depth Inpainting
  • github.com/vt-vl-lab/3…
  • Another 2D to 3D blog post

requirements.txt

torch==1.4.0
torchvision==0.5.0
opencv-python==4.2.0.32
vispy==0.6.4
moviepy==1.0.2
transforms3d==0.3.1
networkx==2.3
cynetworkx
scikit-image

📕 Environment Construction


Server: Ubuntu 18.04, Quadro RTX 5000 (16 GB)

CUDA version V10.0.130

conda create -n torch14 python=3.6.6
conda activate torch14
conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.0 -c pytorch
pip install opencv-python
pip install vispy
pip install moviepy
pip install transforms3d
pip install networkx==2.3
pip install scikit-image
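After installation, it helps to sanity-check that the pinned packages resolved. A minimal sketch using only the standard library (note: importlib.metadata needs Python 3.8+, so on the python=3.6.6 environment created above you would use pip list instead):

```python
# Print the installed version of each package pinned above.
# Assumption: run with Python 3.8+ (importlib.metadata is stdlib there);
# on the python=3.6.6 env from this post, use `pip list` instead.
from importlib.metadata import version, PackageNotFoundError

for pkg in ["torch", "torchvision", "opencv-python", "vispy",
            "moviepy", "transforms3d", "networkx", "scikit-image"]:
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "NOT INSTALLED")
```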

📗 Code Effect


The paper proposes a method for converting a single RGB-D input image into a 3D photo…


Intuitive result: input images are converted into MP4 videos, an impressive technique…

  • If you are interested, consult the original paper
  • This post is only a simple run-and-evaluate of the code

📘 Project Structure



📙 Test [Version 1 2020]


I first tested this project last year; at that time it was easy to run successfully and get output.


In 2020, the testing process was as follows:

1. Place the image you want to test (JPG format by default) under the image folder of the project directory.
2. Run the following test command (you can modify the parameter settings in argument.yml):

python main.py --config argument.yml

3. View the results.

For each test image, four MP4 videos are generated: zoom-in, dolly zoom-in, swing, and circle.
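The four trajectories come from the trajectory settings in argument.yml. An illustrative excerpt (key names as I recall them from the version tested; treat the values as examples, not the exact defaults):

```yaml
src_folder: image        # input images are read from here
depth_folder: depth      # estimated depth maps
mesh_folder: mesh        # generated .ply meshes
video_folder: video      # output MP4 videos
fps: 40
num_frames: 240
traj_types: ['double-straight-line', 'double-straight-line', 'circle', 'circle']
video_postfix: ['dolly-zoom-in', 'zoom-in', 'circle', 'swing']
```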

Example output:

  • The original:

  • The effect is as follows:


📙 Test [Version 2 2021]


I gave this version a trial run because the author made a major update to the codebase in June 2021.



In 2021, the testing process was as follows; the bugs I encountered are still not completely solved.

Download the model


On many readers' servers the model download is slow; you can copy the links into a browser and download them manually instead.

chmod 755 download.sh 

./download.sh 


A look at the download.sh script: it downloads the models and moves them to the appropriate paths.

#!/bin/sh
fb_status=$(wget --spider -S https://filebox.ece.vt.edu/ 2>&1 | grep "HTTP/1.1 200 OK")

mkdir checkpoints

echo "downloading from filebox ..."
wget https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/color-model.pth
wget https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/depth-model.pth
wget https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/edge-model.pth
wget https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/model.pt

mv color-model.pth checkpoints/.
mv depth-model.pth checkpoints/.
mv edge-model.pth checkpoints/.
mv model.pt MiDaS/.

## The lines below are a later update by the author

echo "cloning from BoostingMonocularDepth ..."
git clone https://github.com/compphoto/BoostingMonocularDepth.git
mkdir -p BoostingMonocularDepth/pix2pix/checkpoints/mergemodel/

echo "downloading mergenet weights ..."
wget https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/latest_net_G.pth
mv latest_net_G.pth BoostingMonocularDepth/pix2pix/checkpoints/mergemodel/
wget https://github.com/intel-isl/MiDaS/releases/download/v2/model-f46da743.pt
mv model-f46da743.pt BoostingMonocularDepth/midas/model.pt


At this point, after running download.sh, the project structure is as follows (listing abridged: __pycache__ directories omitted):

tree 3d-photo-inpainting/
3d-photo-inpainting/
├── argument.yml
├── bilateral_filtering.py
├── BoostingMonocularDepth
│   ├── Boostmonoculardepth.ipynb
│   ├── dataset_prepare
│   │   ├── create_crops.m
│   │   ├── generatecrops.m
│   │   ├── ibims1_prepare.m
│   │   ├── ibims1_selected.mat
│   │   ├── mergenet_dataset_prepare.md
│   │   └── midas
│   │       ├── models
│   │       │   ├── base_model.py
│   │       │   ├── blocks.py
│   │       │   ├── midas_net.py
│   │       │   └── transforms.py
│   │       ├── run.py
│   │       └── utils.py
│   ├── demo.py
│   ├── evaluation
│   ├── figures
│   │   ├── explanation.png
│   │   ├── lunch_edited.png
│   │   ├── lunch_orig.png
│   │   ├── lunch_rgb.jpg
│   │   ├── merge.png
│   │   ├── observation.png
│   │   ├── overview.png
│   │   ├── patchexpand.gif
│   │   ├── patchmerge.gif
│   │   ├── patchselection.gif
│   │   ├── ressearch.png
│   │   ├── sample2_leres.jpg
│   │   ├── sample2_midas.png
│   │   └── video_thumbnail.jpg
│   ├── inputs
│   │   └── moon.jpg
│   ├── lib
│   │   ├── __init__.py
│   │   ├── multi_depth_model_woauxi.py
│   │   ├── net_tools.py
│   │   ├── network_auxi.py
│   │   ├── Resnet.py
│   │   ├── Resnext_torch.py
│   │   ├── spvcnn_classsification.py
│   │   ├── spvcnn_utils.py
│   │   └── test_utils.py
│   ├── LICENSE
│   ├── midas
│   │   ├── model.pt
│   │   ├── models
│   │   │   ├── base_model.py
│   │   │   ├── blocks.py
│   │   │   ├── midas_net.py
│   │   │   └── transforms.py
│   │   └── utils.py
│   ├── outputs
│   │   └── moon.png
│   ├── pix2pix
│   │   ├── checkpoints
│   │   │   ├── mergemodel
│   │   │   │   └── latest_net_G.pth
│   │   │   └── void
│   │   │       └── test_opt.txt
│   │   ├── data
│   │   │   ├── base_dataset.py
│   │   │   ├── depthmerge_dataset.py
│   │   │   ├── image_folder.py
│   │   │   └── __init__.py
│   │   ├── models
│   │   │   ├── base_model_hg.py
│   │   │   ├── base_model.py
│   │   │   ├── __init__.py
│   │   │   ├── networks.py
│   │   │   └── pix2pix4depth_model.py
│   │   ├── options
│   │   │   ├── base_options.py
│   │   │   ├── __init__.py
│   │   │   ├── test_options.py
│   │   │   └── train_options.py
│   │   ├── test.py
│   │   ├── train.py
│   │   └── util
│   │       ├── get_data.py
│   │       ├── guidedfilter.py
│   │       ├── html.py
│   │       ├── image_pool.py
│   │       ├── __init__.py
│   │       ├── util.py
│   │       └── visualizer.py
│   ├── README.md
│   ├── requirements.txt
│   ├── run.py
│   ├── structuredrl
│   │   └── models
│   │       ├── DepthNet.py
│   │       ├── networks.py
│   │       ├── Resnet.py
│   │       └── syncbn
│   │           ├── LICENSE
│   │           ├── make_ext.sh
│   │           ├── modules
│   │           ├── README.md
│   │           ├── requirements.txt
│   │           └── test.py
│   └── utils.py
├── boostmonodepth_utils.py
├── checkpoints
│   ├── color-model.pth
│   ├── depth-model.pth
│   └── edge-model.pth
├── depth
│   ├── moon.npy
│   └── moon.png
├── DOCUMENTATION.md
├── download.sh
├── image
│   └── moon.jpg
├── LICENSE
├── main.py
├── mesh
├── mesh.py
├── mesh_tools.py
├── MiDaS
│   ├── MiDaS_utils.py
│   ├── model.pt
│   ├── monodepth_net.py
│   └── run.py
├── model-f46da743.pt
├── networks.py
├── pyproject.toml
├── README.md
├── requirements.txt
├── utils.py
└── video
    └── moon_zoom-in.mp4

45 directories, 50 files

The errors I encountered are summarized below.

1 FileNotFoundError: No such file: '../outputs/moon.png'

This error occurred when running the test for the first time:

python main.py --config argument.yml
running on device 0
  0%|          | 0/1 [00:00<?, ?it/s]
Current Source ==> moon
Running depth extraction at 1638526409.2872481
BoostingMonocularDepth/inputs/*.png
BoostingMonocularDepth/inputs/*.jpg
device: cuda
Namespace(Final=True, R0=False, R20=False, colorize_results=False, data_dir='inputs/', depthNet=0, max_res=inf, net_receptive_field_size=None, output_dir='outputs', output_resolution=1, pix2pixsize=1024, savepatchs=0, savewholeest=0)
...
Traceback (most recent call last):
  ...
  File "/home/moli/anaconda3/envs/torch14/lib/python3.6/site-packages/imageio/core/request.py", line 260, in _parse_uri
    raise FileNotFoundError("No such file: '%s'" % fn)
FileNotFoundError: No such file: '/home/moli/project/projectBy/nine/2021/3d-photo-inpainting/BoostingMonocularDepth/outputs/moon.png'

The solution is as follows

cp image/moon.jpg BoostingMonocularDepth/outputs/moon.png

  • Also comment out the two lines of code that empty this output directory
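The same workaround, scripted so it can be applied to every input image at once. A sketch only: the paths follow the project layout above, and the helper name is mine:

```python
import glob
import os
import shutil

def ensure_placeholder_outputs(src_dir="image",
                               out_dir="BoostingMonocularDepth/outputs"):
    """Workaround for the FileNotFoundError above: for every input .jpg,
    make sure a same-named .png placeholder exists in the outputs folder."""
    os.makedirs(out_dir, exist_ok=True)
    for src in glob.glob(os.path.join(src_dir, "*.jpg")):
        name = os.path.splitext(os.path.basename(src))[0]
        dst = os.path.join(out_dir, name + ".png")
        if not os.path.exists(dst):
            shutil.copyfile(src, dst)  # equivalent to the cp command above

# usage (from the project root): ensure_placeholder_outputs()
```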

2 RuntimeError: CUDA out of memory

RuntimeError: CUDA out of memory. Tried to allocate 1.17 GiB (GPU 0; 15.75 GiB total capacity; 2.56 GiB already allocated; 286.31 MiB free; 2.59 GiB reserved in total by PyTorch)
  0%|                                                                                                                                                                                                                            | 0/1 [00:22<?, ?it/s]
Traceback (most recent call last):
  File "main.py", line 76, in <module>
    vis_photos, vis_depths = sparse_bilateral_filtering(depth.copy(), image.copy(), config, num_iter=config['sparse_iter'], spdb=False)
  File "/home/moli/project/projectBy/nine/2021/3d-photo-inpainting/bilateral_filtering.py", line 31, in sparse_bilateral_filtering
    vis_image[u_over > 0] = np.array([0, 0, 0])
IndexError: boolean index did not match indexed array along dimension 2; dimension is 3 but corresponding boolean dimension is 5


After checking, this problem turned out to be simply that the GPU was occupied by other users.

Once GPU 0 was idle, re-running the test made the error disappear.
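To sidestep a busy card, you can pin the run to an idle GPU before any CUDA initialization happens. A minimal sketch (the helper name is mine; GPU index 1 is just an example, check nvidia-smi for a free card):

```python
import os

def pin_gpu(index):
    """Make only one physical GPU visible to CUDA.

    Must be set before torch (or any CUDA library) initializes; the chosen
    card then appears as device 0 inside the program.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = str(index)
    return os.environ["CUDA_VISIBLE_DEVICES"]

pin_gpu(1)  # example: use physical GPU 1
```

The equivalent one-off form on the command line is: CUDA_VISIBLE_DEVICES=1 python main.py --config argument.yml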

3 ["Error 65544: b'X11: The DISPLAY environment variable is missing'"]

Continuing the run, I then hit this error.


I checked several solutions online for this error, but none applied to my case. My blind guess is that my server has no display attached, so nothing can be rendered to screen.

Start Running 3D_Photo ...
Loading edge model at 1638849812.6169176
Loading depth model at 1638849815.373448
Loading rgb model at 1638849817.4631581
Writing depth ply (and basically doing everything) at ...
Writing mesh file mesh/moon.ply ...
Making video at 1638850000.3338194
fov: 53.13010235415598
  0%|          | 0/1 [04:18<?, ?it/s]
Traceback (most recent call last):
  File "main.py", line 141, in <module>
    mean_loc_depth=mean_loc_depth)
  File "/home/moli/project/projectBy/nine/2021/3d-photo-inpainting/mesh.py", line 2203, in output_3d_photo
    proj='perspective')
  File "/home/moli/project/projectBy/nine/2021/3d-photo-inpainting/mesh.py", line 2134, in __init__
    self.canvas = scene.SceneCanvas(bgcolor=bgcolor, size=(canvas_size*factor, canvas_size*factor))
...
OSError: Could not init glfw: ["Error 65544: b'X11: The DISPLAY environment variable is missing'"]

The same error is discussed on GitHub: github.com/openai/mujo…

Updated 2021-12-07: if any reader follows this post and solves the issue smoothly, please let me know!
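I have not verified this on my setup, but since glfw fails only because no X display exists, a common direction on headless servers is a virtual framebuffer (assumes the xvfb package is installed): xvfb-run -a python main.py --config argument.yml. A small pre-flight check before starting a long run (the helper name is mine):

```python
import os
import shutil

def display_available():
    """True if an X display is set, or the xvfb-run wrapper could provide
    a virtual one (workaround for 'DISPLAY environment variable is missing')."""
    return bool(os.environ.get("DISPLAY")) or shutil.which("xvfb-run") is not None
```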


📜 Summary of Errors You May Encounter


  • AttributeError: 'Graph' object has no attribute 'node'

  • Analysis: networkx is either not installed or the installed version does not match; the Graph.node attribute was removed in newer networkx releases in favor of Graph.nodes.

Installing the pinned version fixes it:

pip install networkx==2.3

📙 Code + Model Sharing for This Post


If you need the source, follow the official account at the end of this article (or search for the blog's namesake public account) and reply 20200105 in the background to automatically receive the running code + models corresponding to this post.


The shared code corresponds to the 📙 Test [Version 1 2020] section above, which did run successfully.

20200105

Wishing you smooth research and happy days.


🚀🚀 Mexic AI


  • 🎉 As one of the bloggers sharing the most practical content in the AI field, ❤️ living up to the times ❤️
  • ❤️ If this article helped you, a like or comment encourages the blogger to keep creating seriously
  • Happily learn AI; deep learning environment setup, each topic covered in one article:

    • 🍊 Given a new server, how I arrange CUDA

    • 🍊 Install CUDA 11.2 for the current user on Ubuntu

    • 🍊 Set up pip mirror sources on Linux and Windows: practical download acceleration for machine learning libraries

    • 🍊 Switch Anaconda conda to domestic mirrors (Tsinghua source): configuration for Windows and Linux

    • 🍊 Specify the GPU for running and training Python programs: single-GPU and multi-GPU training settings

    • 🍊 Install PyTorch and torchvision with CUDA 10.0 on Linux

    • 🍊 TensorFlow-GPU installation and testing, explained in one article

    • 🍊 SSH password login and public-key authentication login, explained in one article

    • Install JDK 11 and configure the JAVA_HOME environment variable