maskrcnn-benchmark is an open-source benchmark project from Facebook Research that includes algorithms for detection, segmentation, and human keypoint estimation.

At present, many state-of-the-art (SOTA) detection and segmentation algorithms based on the PyTorch framework are built on top of this project, for example the CVPR 2019 oral paper Mask Scoring R-CNN.

Project: github.com/facebookresearch/maskrcnn-benchmark

Faster R-CNN and Mask R-CNN in PyTorch 1.0

  1. How to configure the project in a pip environment;
  2. How to set up the dependency libraries in a macOS CPU-only environment;

Environment: macOS + pip + PyTorch + maskrcnn-benchmark

This series consists of two parts:

  • Part 1: setting up the environment;
  • Part 2: training and validation;

Configuration

Download the maskrcnn-benchmark project:

git clone https://github.com/facebookresearch/maskrcnn-benchmark.git

Use virtualenv to create a virtual environment with Python 3, then activate it:

virtualenv -p python3 mlp3_venv
source mlp3_venv/bin/activate

Python 3.6.8+ is required; Python 3.7 is recommended.

Or use an existing virtual environment.

Install dependency packages:

pip install ninja yacs cython matplotlib tqdm opencv-python
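
Optionally, confirm that the key packages are importable (a minimal sanity check; note that opencv-python is imported as cv2):

import cv2
import matplotlib
import tqdm
import yacs

print(cv2.__version__)         # OpenCV version
print(matplotlib.__version__)  # matplotlib version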

Install PyTorch directly, as described on the official website:

pip3 install torch torchvision
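
To confirm that PyTorch is installed, run a quick check from the Python interpreter (a minimal sketch; on a CPU-only Mac, torch.cuda.is_available() should return False):

import torch
import torchvision

print(torch.__version__)          # PyTorch version
print(torchvision.__version__)    # torchvision version
print(torch.cuda.is_available())  # False on a CPU-only Mac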

Dependent libraries

maskrcnn-benchmark requires two additional dependency libraries, cocoapi and apex; the steps below differ slightly from the official instructions.

Install pycocotools 2.0: activate the existing virtual environment, then run:

git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
python setup.py build_ext install
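
If the build succeeded, the package can be imported without errors (a minimal sanity check; both imports rely on the compiled C extension):

# A clean import confirms that the C extension built by build_ext is usable.
from pycocotools.coco import COCO
from pycocotools import mask as mask_utils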

Install apex 0.1, execute:

git clone https://github.com/NVIDIA/apex.git
cd apex
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install --cpp_ext

For Ubuntu servers, you can use the following commands instead:

cd apex
pip install -v --no-cache-dir .

Note:

  • MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ is Mac-specific and controls how the C++ extension is compiled;
  • In a CPU environment, do not add --cuda_ext;

Install maskrcnn-benchmark 0.1, run:

cd maskrcnn-benchmark
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build develop
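
A quick way to verify the build (a minimal sketch; cfg is the configuration object used by the prediction script below, and _C is assumed to be the compiled C++ extension produced by build develop):

# Importing the config object and the compiled extension confirms that
# both the Python package and the C++ sources were built and installed.
from maskrcnn_benchmark.config import cfg
from maskrcnn_benchmark import _C

print(cfg.MODEL.DEVICE)  # default device from the base config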

Prediction

Download the model (about 458 MB) into ~/.torch/models:

wget https://dl.fbaipublicfiles.com/detectron/37697547/12_2017_baselines/e2e_keypoint_rcnn_R-50-FPN_1x.yaml.08_42_54.kdzV35ao/output/train/keypoints_coco_2014_train%3Akeypoints_coco_2014_valminusminival/generalized_rcnn/model_final.pkl

Rename it:

mv model_final.pkl _detectron_37697547_12_2017_baselines_e2e_keypoint_rcnn_R-50-FPN_1x.yaml.08_42_54.kdzV35ao_output_train_keypoints_coco_2014_train%3Akeypoints_coco_2014_valminusminival_generalized_rcnn_model_final.pkl
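
To check that the checkpoint is in place, list the cache directory and its file sizes (a minimal sketch, assuming the ~/.torch/models path mentioned above):

import os

# Print every cached model with its size; the keypoint model is about 458 MB.
cache_dir = os.path.expanduser("~/.torch/models")
for name in sorted(os.listdir(cache_dir)):
    size_mb = os.path.getsize(os.path.join(cache_dir, name)) / 1e6
    print("{:.1f} MB  {}".format(size_mb, name))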

Use OpenCV to read the image with cv2.imread();

Load the e2e_keypoint_rcnn_R_50_FPN_1x_caffe2.yaml configuration file, where:

  • keypoint denotes the keypoint algorithm;
  • rcnn is the detection algorithm;
  • R_50 is ResNet-50;
  • FPN is the Feature Pyramid Network;
  • 1x is the training schedule: minibatch size 16, initial learning rate 0.02, decayed by a factor of 0.1 after 60k and 80k iterations, terminating at 90k iterations.

To use CPU mode, set MODEL.DEVICE to cpu.

Create the COCODemo model, with a minimum image size of 800 and a confidence threshold of 0.7;

Call the run_on_opencv_image interface to run prediction on the image and return the image with the results drawn on it.

The source code is as follows:

import os
import cv2
import pylab
import matplotlib.pyplot as plt

from maskrcnn_benchmark.config import cfg
from demo.predictor import COCODemo

from root_dir import DATA_DIR


def show_cv_img(img_cv):
    img = cv2.cvtColor(img_cv, cv2.COLOR_BGR2RGB)
    plt.imshow(img)
    plt.axis("off")
    fig = plt.gcf()
    fig.set_size_inches(10, 8)
    pylab.show()


def main():
    img_path = os.path.join(DATA_DIR, 'girl_generation.jpg')

    img = cv2.imread(img_path)
    print('[Info] img size: {}'.format(img.shape))
    show_cv_img(img)

    config_file = ".. /configs/caffe2/e2e_keypoint_rcnn_R_50_FPN_1x_caffe2.yaml"

    cfg.merge_from_file(config_file)  # load settings from the config file
    cfg.merge_from_list(["MODEL.DEVICE", "cpu"])  # force CPU mode

    coco_demo = COCODemo(  # create the model
        cfg,
        min_image_size=800,
        confidence_threshold=0.7,
    )

    predictions = coco_demo.run_on_opencv_image(img)
    show_cv_img(predictions)


if __name__ == '__main__':
    main()

Result:


Troubleshooting

Solutions to common problems.

libomp.dylib

The following problem occurs: libomp.dylib cannot be loaded:

ImportError: dlopen(python3.7/site-packages/torch/_C.cpython-37m-darwin.so, 9): Library not loaded: /usr/local/opt/libomp/lib/libomp.dylib
  Referenced from: python3.7/site-packages/torch/lib/libshm.dylib
  Reason: image not found

Install the libomp package:

brew install libomp

Reference: github.com/pytorch/pyt…

torch.version.cuda.split('.')

The CUDA version check raises an exception:

get_cuda_version
    return tuple(int(x) for x in torch.version.cuda.split('.'))
AttributeError: 'NoneType' object has no attribute 'split'

The environment is a CPU-only Mac with no GPU, but the apex source code still queries the CUDA version, which causes the error. Modify the source:

In apex/amp/lists/torch_overrides.py, around line 69, wrap the check in a try-except statement to swallow the exception:

try:
    if utils.get_cuda_version() >= (9, 1, 0):
        FP16_FUNCS.extend(_bmms)
    else:
        FP32_FUNCS.extend(_bmms)
except:
    FP32_FUNCS.extend(_bmms)

fatal error: Python.h: No such file or directory

The Python header files are missing; install the development package:

sudo apt-get install python3-dev

Using apt-fast for the download is recommended; it is very fast.


OK, that’s all!