This is my ninth day of the August Challenge

Software and Hardware Environment

  • Ubuntu 18.04 64-bit
  • NVIDIA GTX 1070 Ti
  • Anaconda with Python 3.7
  • CUDA 10.1
  • cuDNN 7.6
  • PaddlePaddle 1.8.4
  • PaddleDetection 0.5

Introduction to PaddleDetection

Quoting the official project introduction:

PaddleDetection is an object detection development kit based on PaddlePaddle, designed to help developers build, train, optimize, and deploy detection models faster and better. It implements a variety of mainstream object detection algorithms in a modular way, provides rich data augmentation strategies, network components (such as backbones) and loss functions, and integrates model compression and cross-platform high-performance deployment capabilities. Refined through long industrial practice, PaddleDetection offers a smooth and pleasant user experience and is widely used by developers in more than 10 industries, including industrial quality inspection, remote sensing image detection, unmanned inspection, new retail, the Internet, and scientific research.

Summed up in one sentence: it's seriously impressive!

Let's take a look at an overview of what the toolkit covers.

Architectures

  • Two-Stage Detection
    • Faster RCNN
    • FPN
    • Cascade-RCNN
    • Libra RCNN
    • Hybrid Task RCNN
    • PSS-Det RCNN
  • One-Stage Detection
    • RetinaNet
    • YOLOv3
    • YOLOv4
    • PP-YOLO
    • SSD
  • Anchor Free
    • CornerNet-Squeeze
    • FCOS
    • TTFNet
  • Instance Segmentation
    • Mask RCNN
    • SOLOv2
  • Face Detection
    • FaceBoxes
    • BlazeFace
    • BlazeFace-NAS

Backbones

  • ResNet(&vd)
  • ResNeXt(&vd)
  • SENet
  • Res2Net
  • HRNet
  • Hourglass
  • CBNet
  • GCNet
  • DarkNet
  • CSPDarkNet
  • VGG
  • MobileNetv1/v3
  • GhostNet
  • EfficientNet

Components

  • Common
    • Sync-BN
    • Group Norm
    • DCNv2
    • Non-local
  • FPN
    • BiFPN
    • BFP
    • HRFPN
    • ACFPN
  • Loss
    • Smooth-L1
    • GIoU/DIoU/CIoU
    • IoUAware
  • Post-processing
    • SoftNMS
    • MatrixNMS
  • Speed
    • FP16 training
    • Multi-machine training

Data Augmentation

  • Resize
  • Flipping
  • Expand
  • Crop
  • Color Distort
  • Random Erasing
  • Mixup
  • Cutmix
  • Grid Mask
  • Auto Augment

The list is enough to make your eyes glaze over; the supported feature set is remarkably comprehensive. Beyond functionality, let's also look at performance.

The official figure compares the accuracy (mAP on the COCO dataset) and the inference speed (FPS on a single Tesla V100 card) of representative models for each architecture and backbone.

All the models in the figure can be obtained from the model library at github.com/PaddlePaddl…

Install PaddlePaddle

PaddlePaddle is China's first open-source, technologically advanced, fully functional, industry-grade deep learning platform, developed by Baidu. It integrates a core training and inference framework, a basic model library, end-to-end development kits, and rich tool components.

First create a Python virtual environment, then install paddlepaddle-gpu.

conda create -n ppdetection python=3.7
conda activate ppdetection
pip install paddlepaddle-gpu==1.8.4.post107 -i https://mirror.baidu.com/pypi/simple

If you want multi-GPU training, also install NVIDIA's NCCL library, which handles communication across multiple cards. Download it from developer.nvidia.com/nccl/nccl-d… and pick the build that matches your CUDA version.

Use the following commands to verify the installation

(ppdetection) xugaoxiang@1070ti:~/Works/lot/PaddleDetection-release-0.5$ ipython
Python 3.7.9 (default, Aug 31 2020, 12:42:55)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import paddle.fluid as fluid

In [2]: fluid.install_check.run_check()
Running Verify Fluid Program ...
W1208 13:28:59.964426 13251 device_context.cc:252] Please NOTE: device: 0, CUDA Capability: 61, Driver API Version: 11.0, Runtime API Version: 10.0
W1208 13:09:00.090900 13251 device_context.cc:260] device: 0, cuDNN Version: 7.6.
Your Paddle Fluid works well on a SINGLE GPU or CPU.
Your Paddle Fluid works well on a MUTIPLE GPU or CPU.
Your Paddle Fluid is installed successfully! Let's start deep Learning with Paddle Fluid now

In [3]: import paddle

In [4]: paddle.__version__
Out[4]: '1.8.4'

In [5]:

Install other dependencies

pip install pycocotools

Install PaddleDetection

The latest release is 0.5. Download and unpack it, then go into the directory and install the necessary dependencies.

wget https://github.com/PaddlePaddle/PaddleDetection/archive/release/0.5.zip -O PaddleDetection-release-0.5.zip
unzip PaddleDetection-release-0.5.zip
cd PaddleDetection-release-0.5
pip install -r requirements.txt

Verify that the following tests pass

(ppdetection) xugaoxiang@1070ti:~/Works/lot/PaddleDetection-release-0.5$ python ppdet/modeling/tests/test_architectures.py
ss/home/xugaoxiang/Works/lot/PaddleDetection-release-0.5/ppdet/core/workspace.py:118: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
  isinstance(merge_dct[k], collections.Mapping):
..........
----------------------------------------------------------------------
Ran 12 tests in 2.866s

OK (skipped=2)

Use the official pre-trained model to run prediction on a sample image and quickly get a feel for the results.

# the use_gpu option controls whether the GPU is used
python tools/infer.py -c configs/ppyolo/ppyolo.yml -o use_gpu=true weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams --infer_img=demo/000000014439.jpg

When the program finishes, an image with the same name as the input, with the predictions drawn on it, is written to the output folder.
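
If you just want to eyeball the result, a minimal sketch is shown below; it assumes Pillow is installed and that the output keeps the input file's name (000000014439.jpg).

# Minimal sketch: open the annotated image produced by tools/infer.py.
# Assumes Pillow is installed and the output keeps the input file name.
from PIL import Image

Image.open("output/000000014439.jpg").show()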

Model training

Let's use the fruit dataset (apples, bananas, and oranges) that ships with PaddleDetection as an example. The official code provides both the download script and the configuration file.

cd dataset/fruit
# download the dataset (VOC format)
python download_fruit.py

The fruit dataset is in VOC format; after downloading, take a look at the directory structure under dataset/fruit.
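
Since the annotations follow the standard VOC XML layout, a quick sanity check is to parse one file with Python's standard library. The sketch below assumes an annotation file such as dataset/fruit/Annotations/orange_71.xml exists; the exact file names depend on the downloaded data.

# Minimal sketch: read one VOC-format annotation and print its boxes.
# The path below is an assumed example; adjust it to a file that exists
# after the download.
import xml.etree.ElementTree as ET

tree = ET.parse("dataset/fruit/Annotations/orange_71.xml")
for obj in tree.getroot().iter("object"):
    name = obj.find("name").text
    box = obj.find("bndbox")
    xmin, ymin, xmax, ymax = (int(float(box.find(tag).text))
                              for tag in ("xmin", "ymin", "xmax", "ymax"))
    print(name, xmin, ymin, xmax, ymax)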

YOLOv3 is used as the detection model, with mobilenet_v1 as the backbone.

python tools/train.py -c configs/yolov3_mobilenet_v1_fruit.yml --eval

-c specifies the training configuration file, and --eval enables evaluation while training. The parameters in the configuration file are documented in great detail at github.com/PaddlePaddl…

With the visualdl command you can watch the training curves in real time.

python tools/train.py -c configs/yolov3_mobilenet_v1_roadsign.yml --eval -o use_gpu=true --use_vdl=true
visualdl --logdir vdl_dir/scalar/ --host <host_IP> --port <port_num>

The trained model is stored in output/yolov3_mobilenet_v1_fruit

Use the trained model to make predictions

python tools/infer.py -c configs/yolov3_mobilenet_v1_fruit.yml -o weights=output/yolov3_mobilenet_v1_fruit/best_model.pdmodel --infer_img=demo/orange_71.jpg --output_dir=output

Deployment

To deploy with Paddle Serving, first install the dependency packages paddle-serving-client and paddle-serving-server-gpu.

pip install paddle-serving-client -i https://mirror.baidu.com/pypi/simple
pip install paddle-serving-server-gpu -i https://mirror.baidu.com/pypi/simple

Export the model

python tools/export_serving_model.py -c configs/yolov3_mobilenet_v1_fruit.yml -o use_gpu=true weights=output/yolov3_mobilenet_v1_fruit/best_model.pdmodel --output_dir=./inference_model

The above command generates a yolov3_mobilenet_v1_fruit folder inside ./inference_model. The directory structure looks like this:

inference_model
└── yolov3_mobilenet_v1_fruit
    ├── serving_client
    │   ├── serving_client_conf.prototxt
    │   └── ...
    └── serving_server
        ├── serving_server_conf.prototxt
        ├── conv1_bn_mean
        ├── conv1_bn_offset
        ├── conv1_bn_scale
        └── ...

The prototxt files in the serving_client folder describe the model's input and output tensors. The contents of serving_client_conf.prototxt are as follows:

feed_var {
  name: "image"
  alias_name: "image"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 608
  shape: 608
}
feed_var {
  name: "im_size"
  alias_name: "im_size"
  is_lod_tensor: false
  feed_type: 2
  shape: 2
}
fetch_var {
  name: "multiclass_nms_0.tmp_0"
  alias_name: "multiclass_nms_0.tmp_0"
  is_lod_tensor: true
  fetch_type: 1
  shape: -1
}


Next, start the Paddle Serving service; it needs to stay running.

cd inference_model/yolov3_mobilenet_v1_fruit
python -m paddle_serving_server_gpu.serve --model serving_server --port 9393 --gpu_ids 0

Prepare the label_list.txt file

cd inference_model/yolov3_mobilenet_v1_fruit
# copy the label_list.txt file to the current folder
cp ../../dataset/fruit/label_list.txt .

To begin testing

# cd inference_model/yolov3_mobilenet_v1_fruit/
# test script: test_client.py
python ../../deploy/serving/test_client.py ../../demo/orange_71.png
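
If you would rather call the service from your own code instead of the bundled test_client.py, the sketch below shows the general idea using the paddle-serving-client API. It assumes the service started above is running on 127.0.0.1:9393, that you run it from inference_model/yolov3_mobilenet_v1_fruit, and that the resize and normalization values match the model's training reader (adjust them if your config differs).

# Minimal hand-written client sketch (not the bundled test_client.py).
# Feed/fetch names come from serving_client_conf.prototxt shown above:
# "image" is a float32 tensor of shape 3x608x608, "im_size" is an int32
# pair holding the original image height and width.
import numpy as np
from PIL import Image
from paddle_serving_client import Client

def preprocess(path, size=608):
    im = Image.open(path).convert("RGB")
    h, w = im.height, im.width
    im = im.resize((size, size), Image.BILINEAR)
    img = np.array(im).astype("float32") / 255.0
    # ImageNet mean/std normalization, as commonly used by the YOLOv3 reader;
    # adjust to match your training config if it differs.
    img -= np.array([0.485, 0.456, 0.406], dtype="float32")
    img /= np.array([0.229, 0.224, 0.225], dtype="float32")
    img = img.transpose((2, 0, 1))              # HWC -> CHW
    im_size = np.array([h, w], dtype="int32")   # original size for "im_size"
    return img, im_size

client = Client()
client.load_client_config("serving_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9393"])

img, im_size = preprocess("../../demo/orange_71.jpg")
fetch_map = client.predict(
    feed={"image": img, "im_size": im_size},
    fetch=["multiclass_nms_0.tmp_0"])

# Each row of the fetched array is [class_id, score, xmin, ymin, xmax, ymax].
print(fetch_map["multiclass_nms_0.tmp_0"])

The array returned by multiclass_nms already contains the final detections, so all that is left is mapping each class_id to a name via label_list.txt.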

References

  • xugaoxiang.com/2019/12/08/…
  • xugaoxiang.com/2020/09/24/…
  • xugaoxiang.com/2019/12/13/…
  • github.com/PaddlePaddl…
  • www.paddlepaddle.org.cn/install/qui…