Petting cats with code! This article is taking part in the [Cat Essay Campaign].

I. PaddleSeg-based cat image segmentation

1. Introduction to PaddleSeg

PaddleSeg is an end-to-end image segmentation development kit built on PaddlePaddle. It covers a large number of high-quality segmentation models for different needs, from high-accuracy to lightweight. Thanks to its modular design, it can be used either configuration-driven or through its Python API, helping developers complete the whole image segmentation workflow from training to deployment more conveniently; a minimal sketch of the API-call style is shown after the feature list below.

Features

  • High-accuracy models: high-accuracy backbone networks are trained with Baidu's self-developed semi-supervised label knowledge distillation scheme (SSLD), and 50+ high-quality pre-trained models are provided in combination with cutting-edge segmentation techniques, outperforming other open-source implementations.
  • Modular design: 15+ mainstream segmentation networks are supported; by combining modular components such as data augmentation strategies, backbone networks, and loss functions, developers can assemble training configurations for their actual application scenario to meet different performance and accuracy requirements.
  • High performance: multi-process asynchronous I/O and multi-GPU parallel training and evaluation are supported; combined with the memory optimization of the PaddlePaddle core framework, this greatly reduces the training cost of segmentation models, so developers can train image segmentation models at lower cost and higher efficiency.
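Besides the configuration-driven workflow used in the rest of this article, PaddleSeg can also be driven entirely from Python. The sketch below only illustrates that API-call style; it is written against the PaddleSeg 2.x dynamic-graph API tutorial, the exact argument names may differ slightly between versions, and the paths and hyperparameters are illustrative (they reuse the cat dataset prepared later in this article).

import paddle
import paddleseg.transforms as T
from paddleseg.models import FCN
from paddleseg.models.backbones import HRNet_W18_Small_V1
from paddleseg.models.losses import CrossEntropyLoss
from paddleseg.datasets import Dataset
from paddleseg.core import train

# Data pipeline and dataset (the list-file format is described in section 3.2)
transforms = [T.RandomHorizontalFlip(), T.Resize(target_size=(224, 224)), T.Normalize()]
train_dataset = Dataset(dataset_root='/home/aistudio/cat',
                        train_path='/home/aistudio/cat/train.txt',
                        transforms=transforms,
                        num_classes=2,
                        mode='train')

# Model, optimizer and loss assembled from individual components
model = FCN(num_classes=2, backbone=HRNet_W18_Small_V1())
optimizer = paddle.optimizer.Momentum(learning_rate=0.05, momentum=0.9,
                                      weight_decay=0.0005,
                                      parameters=model.parameters())
losses = {'types': [CrossEntropyLoss()], 'coef': [1]}

# Launch training (argument names follow the PaddleSeg 2.x API tutorial)
train(model=model,
      train_dataset=train_dataset,
      optimizer=optimizer,
      losses=losses,
      iters=500,
      batch_size=16,
      save_interval=50,
      save_dir='output')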
! git clone https://gitee.com/paddlepaddle/PaddleSeg.git  --depth=1
Cloning into 'PaddleSeg'...
remote: Enumerating objects: 1589, done.
remote: Counting objects: 100% (1589/1589), done.
remote: Compressing objects: 100% (1354/1354), done.
remote: Total 1589 (delta 309), reused 1117 (delta 142), pack-reused 0
Receiving objects: 100% (1589/1589), 88.49 MiB | 5.57 MiB/s, done.
Resolving deltas: 100% (309/309), done.
Checking connectivity... done.

2. Dataset preparation

Unzip the dataset:
! mkdir dataset
! tar -xvf data/data50154/images.tar.gz -C dataset/
! tar -xvf data/data50154/annotations.tar.gz -C dataset/
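A quick sanity check after extraction (a sketch; it assumes the archives unpacked into dataset/images and dataset/annotations/trimaps, the layout used by the Oxford-IIIT Pet release):

import os

# Count the extracted photos (.jpg) and trimap labels (.png)
n_images = len([f for f in os.listdir('dataset/images') if f.endswith('.jpg')])
n_labels = len([f for f in os.listdir('dataset/annotations/trimaps') if f.endswith('.png')])
print('images:', n_images, 'trimaps:', n_labels)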
# Note: the comment header lines at the top of list.txt have been deleted beforehand
import pandas as pd
import shutil
import os


# Columns of list.txt: Image CLASS-ID SPECIES BREED-ID
# CLASS-ID: 1-37 class ids
# SPECIES: 1:Cat 2:Dog
# BREED-ID: 1-25:Cat 1-12:Dog
# All images whose first letter is capital are cat images;
# images with a lowercase first letter are dog images
# (the archive also contains stray files such as ._Abyssinian_100.png)

def copyfile(animal, filename):
    # image / label entries to append to the list file
    file_list = []
    image_file = filename + '.jpg'
    label_file = filename + '.png'

    if os.path.exists(os.path.join('dataset/images', image_file)):
        shutil.copy(os.path.join('dataset/images', image_file), os.path.join(f'{animal}/images', image_file))
        shutil.copy(os.path.join('dataset/annotations/trimaps', label_file),
                    os.path.join(f'{animal}/labels', label_file))
        temp = os.path.join('images/', image_file) + ' ' + os.path.join('labels/',label_file) + '\n'
        file_list.append(temp)
    with open(os.path.join(animal, animal + '.txt'), 'a') as f:
        f.writelines(file_list)


if __name__ == "__main__":

    data = pd.read_csv('dataset/annotations/list.txt', header=None, sep=' ')
    data.head()

    cat = data[data[2] == 1]
    dog = data[data[2] == 2]

    for item in cat[0]:
        copyfile('cat', item)

    for item in dog[0]:
        copyfile('dog', item)

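Note that copyfile() above writes into cat/images, cat/labels, dog/images and dog/labels, so these folders must exist before the script is run. A minimal sketch to create them:

import os

# Create the target directory structure expected by copyfile()
for animal in ('cat', 'dog'):
    for sub in ('images', 'labels'):
        os.makedirs(os.path.join(animal, sub), exist_ok=True)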
Delete the unnecessary data:
! rm dataset/ -rf

3. Training on a custom dataset

3.1 File Structure

├── cat.txt
├── images
│   ├── Abyssinian_100.jpg
│   ├── Abyssinian_101.jpg
│   ├── ...
├── labels
│   ├── Abyssinian_100.png
│   ├── Abyssinian_101.png
│   ├── ...

3.2 Contents of cat.txt:

images/Abyssinian_1.jpg labels/Abyssinian_1.png
images/Abyssinian_10.jpg labels/Abyssinian_10.png
images/Abyssinian_100.jpg labels/Abyssinian_100.png
...
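Each line pairs an image with its label, both relative to the cat/ directory. A small sketch to verify that every listed pair actually exists on disk:

import os

# Report entries in cat/cat.txt whose image or label file is missing
missing = []
with open('cat/cat.txt') as f:
    for line in f:
        for rel_path in line.split():
            if not os.path.exists(os.path.join('cat', rel_path)):
                missing.append(rel_path)
print('missing files:', len(missing))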

3.3 Data viewing

%cd ~
from PIL import Image

img=Image.open('cat/images/Abyssinian_101.jpg')
print(img)
img
/home/aistudio
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=450x313 at 0x7F157A098A90>

img=Image.open('cat/images/Abyssinian_123.jpg')
print(img)
img
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7F174201AC90>
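To inspect an image together with its annotation, the two files can be shown side by side (a sketch; at this point the labels are still the raw trimaps with values 1-3, so imshow rescales them to make the regions visible):

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# Show a photo next to its trimap annotation
image = Image.open('cat/images/Abyssinian_101.jpg')
label = Image.open('cat/labels/Abyssinian_101.png')

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(image)
axes[0].set_title('image')
axes[1].imshow(np.array(label), cmap='gray')
axes[1].set_title('trimap label')
for ax in axes:
    ax.axis('off')
plt.show()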

4. Label processing

Labels must be numbered starting from 0. The data for this project was extracted from the Oxford-IIIT Pet dataset (www.robots.ox.ac.uk/~vgg/data/p…), whose trimap annotations are encoded starting from 1, so they need to be re-encoded: the background is set to 0 and the pet to 1.

import pandas as pd
import os
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

def re_label(filename):
    img = plt.imread(filename) * 255.0
    img_label = np.zeros((img.shape[0], img.shape[1]), np.uint8)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            value = img[i, j]
            if value == 2:
                img_label[i, j] = 1
    label0 = Image.fromarray(np.uint8(img_label))
    label0.save(filename)

data=pd.read_csv("cat/cat.txt", header=None, sep=' ') 
for item in data[1]:
    re_label(os.path.join('cat', item))
print('Done! ')    
Done!
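The per-pixel loop above is easy to read but slow. An equivalent vectorized sketch (same remapping rule as the function above: pixels with trimap value 2 become class 1, everything else class 0):

import numpy as np
from PIL import Image

def re_label_fast(filename):
    # Read the trimap directly (values 1-3), build the binary label and overwrite the file
    trimap = np.array(Image.open(filename))
    label = (trimap == 2).astype(np.uint8)
    Image.fromarray(label).save(filename)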

II. Dataset preprocessing

import os
from sklearn.model_selection import train_test_split
import pandas as pd


def break_data(target, rate=0.2):
    origin_dataset = pd.read_csv("cat/cat.txt", header=None, sep=' ')  # read the full image/label list
    train_data, test_data = train_test_split(origin_dataset, test_size=rate)
    train_data,eval_data=train_test_split(train_data, test_size=rate)
    train_filename = os.path.join(target, 'train.txt')
    test_filename = os.path.join(target, 'test.txt')
    eval_filename = os.path.join(target, 'eval.txt')

    train_data.to_csv(train_filename, index=False, sep=' ',  header=None)
    test_data.to_csv(test_filename, index=False, sep=' ', header=None)
    eval_data.to_csv(eval_filename, index=False, sep=' ', header=None)

    print('train_data:', len(train_data))
    print('test_data:', len(test_data))
    print('eval_data:', len(eval_data))

if __name__ == '__main__':
    break_data(target='cat', rate=0.2)
train_data: 1516
test_data: 475
eval_data: 380
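Since train_test_split shuffles randomly, the membership of each split changes from run to run (passing random_state to train_test_split would make it reproducible). A quick sketch to confirm the three splits written above do not overlap:

import pandas as pd

# Read the three list files back and check for common images
train = set(pd.read_csv('cat/train.txt', header=None, sep=' ')[0])
test = set(pd.read_csv('cat/test.txt', header=None, sep=' ')[0])
val = set(pd.read_csv('cat/eval.txt', header=None, sep=' ')[0])
print(len(train & test), len(train & val), len(test & val))  # expect: 0 0 0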
# check
! head cat/train.txt
images/Bengal_173.jpg labels/Bengal_173.png
images/Siamese_179.jpg labels/Siamese_179.png
images/British_Shorthair_201.jpg labels/British_Shorthair_201.png
images/Russian_Blue_60.jpg labels/Russian_Blue_60.png
images/British_Shorthair_93.jpg labels/British_Shorthair_93.png
images/British_Shorthair_26.jpg labels/British_Shorthair_26.png
images/British_Shorthair_209.jpg labels/British_Shorthair_209.png
images/British_Shorthair_101.jpg labels/British_Shorthair_101.png
images/British_Shorthair_269.jpg labels/British_Shorthair_269.png
images/Ragdoll_59.jpg labels/Ragdoll_59.png

III. Configuration

! cp PaddleSeg/configs/quick_start/bisenet_optic_disc_512x512_1k.yml ~/bisenet_optic_disc_512x512_1k.yml

To modify bisenet_optic_disc_512x512_1k.yml, note the following:

  • 1. Set the dataset paths (dataset_root, train_path, val_path)
  • 2. Set num_classes (the background counts as one class, so it is 2 here: background and cat)
  • 3. Set the transforms
  • 4. Set the loss
batch_size: 256
iters: 500

train_dataset:
  type: Dataset
  dataset_root: /home/aistudio/cat/
  train_path: /home/aistudio/cat/train.txt
  num_classes: 2
  transforms:
    - type: ResizeStepScaling
      min_scale_factor: 0.5
      max_scale_factor: 2.0
      scale_step_size: 0.25
    - type: RandomPaddingCrop
      crop_size: [224, 224]
    - type: RandomHorizontalFlip
    - type: RandomDistort
      brightness_range: 0.4
      contrast_range: 0.4
      saturation_range: 0.4
    - type: Normalize
  mode: train

val_dataset:
  type: Dataset
  dataset_root: /home/aistudio/cat/
  val_path: /home/aistudio/cat/eval.txt
  num_classes: 2
  transforms:
    - type: Normalize
  mode: val

optimizer:
  type: sgd
  momentum: 0.9
  weight_decay: 0.0005

lr_scheduler:
  type: PolynomialDecay
  learning_rate: 0.05
  end_lr: 0
  power: 0.9

loss:
  types:
    - type: CrossEntropyLoss
  coef: [1]

model:
  type: FCN
  backbone:
    type: HRNet_W18_Small_V1
    align_corners: False
  num_classes: 2
  pretrained: Null
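A quick consistency check on these numbers: with 1516 training images and batch_size 256, one epoch is ⌊1516 / 256⌋ = 5 iterations (the trailing incomplete batch is apparently dropped), so the 500 training iterations configured here correspond to about 100 epochs, which matches the epoch counter in the training log below.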

IV. Training

%cd ~/PaddleSeg/
! python train.py --config ../bisenet_optic_disc_512x512_1k.yml \
    --do_eval \
    --use_vdl \
    --save_interval 50 \
    --save_dir output
/home/aistudio/PaddleSeg
2021-11-03 00:21:10 [INFO]	
------------Environment Information-------------
platform: Linux-4.4.0-150-generic-x86_64-with-debian-stretch-sid
Python: 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0]
Paddle compiled with cuda: True
NVCC: Cuda compilation tools, release 10.1, V10.1.243
cudnn: 7.6
GPUs used: 1
CUDA_VISIBLE_DEVICES: None
GPU: ['GPU 0: Tesla V100-SXM2-32GB']
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0
PaddlePaddle: 2.1.2
OpenCV: 4.1.1
------------------------------------------------
2021-11-03 00:21:10 [INFO]	
---------------Config Information---------------
batch_size: 256
iters: 500
loss:
  coef:
  - 1
  types:
  - ignore_index: 255
    type: CrossEntropyLoss
lr_scheduler:
  end_lr: 0
  learning_rate: 0.05
  power: 0.9
  type: PolynomialDecay
model:
  backbone:
    align_corners: false
    type: HRNet_W18_Small_V1
  num_classes: 2
  pretrained: null
  type: FCN
optimizer:
  momentum: 0.9
  type: sgd
  weight_decay: 0.0005
train_dataset:
  dataset_root: /home/aistudio/cat/
  mode: train
  num_classes: 2
  train_path: /home/aistudio/cat/train.txt
  transforms:
  - max_scale_factor: 2.0
    min_scale_factor: 0.5
    scale_step_size: 0.25
    type: ResizeStepScaling
  - crop_size:
    - 224
    - 224
    type: RandomPaddingCrop
  - type: RandomHorizontalFlip
  - brightness_range: 0.4
    contrast_range: 0.4
    saturation_range: 0.4
    type: RandomDistort
  - type: Normalize
  type: Dataset
val_dataset:
  dataset_root: /home/aistudio/cat/
  mode: val
  num_classes: 2
  transforms:
  - type: Normalize
  type: Dataset
  val_path: /home/aistudio/cat/eval.txt
------------------------------------------------
W1103 00:21:10.323276  1991 device_context.cc:404] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W1103 00:21:10.323315  1991 device_context.cc:422] device: 0, cuDNN Version: 7.6.
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/norm.py:641: UserWarning: When training, we now always track global mean and variance.
  "When training, we now always track global mean and variance.")
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:239: UserWarning: The dtype of left and right variables are not the same, left dtype is paddle.float32, but right dtype is paddle.int64, the right dtype will convert to paddle.float32
  format(lhs_dtype, rhs_dtype, lhs_dtype))
2021-11-03 00:22:00 [INFO]	[TRAIN] epoch: 2, iter: 10/500, loss: 0.6637, lr: 0.049189, batch_cost: 4.0507, reader_cost: 3.05866, ips: 63.1989 samples/sec | ETA 00:33:04
2021-11-03 00:22:39 [INFO]	[TRAIN] epoch: 4, iter: 20/500, loss: 0.6265, lr: 0.048287, batch_cost: 3.9186, reader_cost: 2.92979, ips: 65.3300 samples/sec | ETA 00:31:20
2021-11-03 00:23:19 [INFO]	[TRAIN] epoch: 6, iter: 30/500, loss: 0.6230, lr: 0.047382, batch_cost: 3.9892, reader_cost: 2.99757, ips: 64.1727 samples/sec | ETA 00:31:14
2021-11-03 00:24:01 [INFO]	[TRAIN] epoch: 8, iter: 40/500, loss: 0.6111, lr: 0.046476, batch_cost: 4.1665, reader_cost: 3.17214, ips: 61.4422 samples/sec | ETA 00:31:56
2021-11-03 00:24:41 [INFO]	[TRAIN] epoch: 10, iter: 50/500, loss: 0.6139, lr: 0.045568, batch_cost: 3.9749, reader_cost: 2.98335, ips: 64.4047 samples/sec | ETA 00:29:48
2021-11-03 00:24:41 [INFO]	Start evaluating (total_samples: 380, total_iters: 380)...
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:239: UserWarning: The dtype of left and right variables are not the same, left dtype is paddle.int32, but right dtype is paddle.bool, the right dtype will convert to paddle.int32
  format(lhs_dtype, rhs_dtype, lhs_dtype))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:239: UserWarning: The dtype of left and right variables are not the same, left dtype is paddle.int64, but right dtype is paddle.bool, the right dtype will convert to paddle.int64
  format(lhs_dtype, rhs_dtype, lhs_dtype))
380/380 [==============================] - 16s 41ms/step - batch_cost: 0.0412 - reader cost: 0.001
2021-11-03 00:24:57 [INFO]	[EVAL] #Images: 380 mIoU: 0.3135 Acc: 0.5048 Kappa: 0.1320 
2021-11-03 00:24:57 [INFO]	[EVAL] Class IoU: 
[0.4423 0.1847]
2021-11-03 00:24:57 [INFO]	[EVAL] Class Acc: 
[0.449  0.8941]
2021-11-03 00:24:57 [INFO]	[EVAL] The model with the best validation mIoU (0.3135) was saved at iter 50.
2021-11-03 00:25:36 [INFO]	[TRAIN] epoch: 12, iter: 60/500, loss: 0.5969, lr: 0.044657, batch_cost: 3.9748, reader_cost: 2.99298, ips: 64.4065 samples/sec | ETA 00:29:08
2021-11-03 00:26:16 [INFO]	[TRAIN] epoch: 14, iter: 70/500, loss: 0.5787, lr: 0.043745, batch_cost: 3.9955, reader_cost: 3.00726, ips: 64.0719 samples/sec | ETA 00:28:38
2021-11-03 00:26:56 [INFO]	[TRAIN] epoch: 16, iter: 80/500, loss: 0.5651, lr: 0.042830, batch_cost: 3.9896, reader_cost: 3.00293, ips: 64.1663 samples/sec | ETA 00:27:55
2021-11-03 00:27:36 [INFO]	[TRAIN] epoch: 18, iter: 90/500, loss: 0.5672, lr: 0.041914, batch_cost: 4.0269, reader_cost: 3.04538, ips: 63.5724 samples/sec | ETA 00:27:31
2021-11-03 00:28:17 [INFO]	[TRAIN] epoch: 20, iter: 100/500, loss: 0.5724, lr: 0.040995, batch_cost: 4.0466, reader_cost: 3.06060, ips: 63.2635 samples/sec | ETA 00:26:58
2021-11-03 00:28:17 [INFO]	Start evaluating (total_samples: 380, total_iters: 380)...
380/380 [==============================] - 16s 41ms/step - batch_cost: 0.0412 - reader cost: 0.001
2021-11-03 00:28:33 [INFO]	[EVAL] #Images: 380 mIoU: 0.4029 Acc: 0.5850 Kappa: 0.2497 
2021-11-03 00:28:33 [INFO]	[EVAL] Class IoU: 
[0.4822 0.3236]
2021-11-03 00:28:33 [INFO]	[EVAL] Class Acc: 
[0.4942 0.9107]
2021-11-03 00:28:33 [INFO]	[EVAL] The model with the best validation mIoU (0.4029) was saved at iter 100.
2021-11-03 00:29:12 [INFO]	[TRAIN] epoch: 22, iter: 110/500, loss: 0.5607, lr: 0.040073, batch_cost: 3.9727, reader_cost: 2.98274, ips: 64.4393 samples/sec | ETA 00:25:49
2021-11-03 00:29:52 [INFO]	[TRAIN] epoch: 24, iter: 120/500, loss: 0.5449, lr: 0.039150, batch_cost: 3.9292, reader_cost: 2.94368, ips: 65.1530 samples/sec | ETA 00:24:53
2021-11-03 00:30:31 [INFO]	[TRAIN] epoch: 26, iter: 130/500, loss: 0.5525, lr: 0.038224, batch_cost: 3.9716, reader_cost: 2.98407, ips: 64.4574 samples/sec | ETA 00:24:29
2021-11-03 00:31:11 [INFO]	[TRAIN] epoch: 28, iter: 140/500, loss: 0.5465, lr: 0.037295, batch_cost: 3.9985, reader_cost: 3.00955, ips: 64.0240 samples/sec | ETA 00:23:59
2021-11-03 00:31:51 [INFO]	[TRAIN] epoch: 30, iter: 150/500, loss: 0.5349, lr: 0.036364, batch_cost: 3.9796, reader_cost: 2.99245, ips: 64.3279 samples/sec | ETA 00:23:12
2021-11-03 00:31:51 [INFO]	Start evaluating (total_samples: 380, total_iters: 380)...
380/380 [==============================] - 16s 43ms/step - batch_cost: 0.0424 - reader cost: 0.001
2021-11-03 00:32:07 [INFO]	[EVAL] #Images: 380 mIoU: 0.4709 Acc: 0.6444 Kappa: 0.3430 
2021-11-03 00:32:07 [INFO]	[EVAL] Class IoU: 
[0.5197 0.422 ]
2021-11-03 00:32:07 [INFO]	[EVAL] Class Acc: 
[0.535  0.9247]
2021-11-03 00:32:07 [INFO]	[EVAL] The model with the best validation mIoU (0.4709) was saved at iter 150.
2021-11-03 00:32:47 [INFO]	[TRAIN] epoch: 32, iter: 160/500, loss: 0.5217, lr: 0.035430, batch_cost: 3.9988, reader_cost: 3.01584, ips: 64.0188 samples/sec | ETA 00:22:39
2021-11-03 00:33:28 [INFO]	[TRAIN] epoch: 34, iter: 170/500, loss: 0.5302, lr: 0.034494, batch_cost: 4.0304, reader_cost: 3.03914, ips: 63.5179 samples/sec | ETA 00:22:10
2021-11-03 00:34:07 [INFO]	[TRAIN] epoch: 36, iter: 180/500, loss: 0.5218, lr: 0.033555, batch_cost: 3.9072, reader_cost: 2.91663, ips: 65.5197 samples/sec | ETA 00:20:50
2021-11-03 00:34:47 [INFO]	[TRAIN] epoch: 38, iter: 190/500, loss: 0.5239, lr: 0.032612, batch_cost: 3.9990, reader_cost: 3.00959, ips: 64.0163 samples/sec | ETA 00:20:39
2021-11-03 00:35:27 [INFO]	[TRAIN] epoch: 40, iter: 200/500, loss: 0.5226, lr: 0.031667, batch_cost: 3.9884, reader_cost: 3.00057, ips: 64.1854 samples/sec | ETA 00:19:56
2021-11-03 00:35:27 [INFO]	Start evaluating (total_samples: 380, total_iters: 380)...
380/380 [==============================] - 16s 41ms/step - batch_cost: 0.0410 - reader cost: 0.00
2021-11-03 00:35:42 [INFO]	[EVAL] #Images: 380 mIoU: 0.4626 Acc: 0.6373 Kappa: 0.3317 
2021-11-03 00:35:42 [INFO]	[EVAL] Class IoU: 
[0.515  0.4102]
2021-11-03 00:35:42 [INFO]	[EVAL] Class Acc: 
[0.5298 0.9236]
2021-11-03 00:35:42 [INFO]	[EVAL] The model with the best validation mIoU (0.4709) was saved at iter 150.
2021-11-03 00:36:22 [INFO]	[TRAIN] epoch: 42, iter: 210/500, loss: 0.4973, lr: 0.030719, batch_cost: 3.9429, reader_cost: 2.95770, ips: 64.9269 samples/sec | ETA 00:19:03
2021-11-03 00:37:01 [INFO]	[TRAIN] epoch: 44, iter: 220/500, loss: 0.5054, lr: 0.029767, batch_cost: 3.9474, reader_cost: 2.95505, ips: 64.8523 samples/sec | ETA 00:18:25
2021-11-03 00:37:41 [INFO]	[TRAIN] epoch: 46, iter: 230/500, loss: 0.5073, lr: 0.028812, batch_cost: 3.9892, reader_cost: 2.99883, ips: 64.1738 samples/sec | ETA 00:17:57
2021-11-03 00:38:21 [INFO]	[TRAIN] epoch: 48, iter: 240/500, loss: 0.5048, lr: 0.027853, batch_cost: 3.9480, reader_cost: 2.96041, ips: 64.8430 samples/sec | ETA 00:17:06
2021-11-03 00:39:01 [INFO]	[TRAIN] epoch: 50, iter: 250/500, loss: 0.4908, lr: 0.026891, batch_cost: 3.9883, reader_cost: 3.00040, ips: 64.1876 samples/sec | ETA 00:16:37
2021-11-03 00:39:01 [INFO]	Start evaluating (total_samples: 380, total_iters: 380)...
380/380 [==============================] - 16s 41ms/step - batch_cost: 0.0410 - reader cost: 0.001
2021-11-03 00:39:16 [INFO]	[EVAL] #Images: 380 mIoU: 0.4518 Acc: 0.6282 Kappa: 0.3179 
2021-11-03 00:39:16 [INFO]	[EVAL] Class IoU: 
[0.51   0.3936]
2021-11-03 00:39:16 [INFO]	[EVAL] Class Acc: 
[0.5231 0.9268]
2021-11-03 00:39:16 [INFO]	[EVAL] The model with the best validation mIoU (0.4709) was saved at iter 150.
2021-11-03 00:39:55 [INFO]	[TRAIN] epoch: 52, iter: 260/500, loss: 0.5051, lr: 0.025925, batch_cost: 3.9250, reader_cost: 2.92987, ips: 65.2232 samples/sec | ETA 00:15:41
2021-11-03 00:40:36 [INFO]	[TRAIN] epoch: 54, iter: 270/500, loss: 0.4934, lr: 0.024954, batch_cost: 4.0248, reader_cost: 3.03510, ips: 63.6059 samples/sec | ETA 00:15:25
2021-11-03 00:41:16 [INFO]	[TRAIN] epoch: 56, iter: 280/500, loss: 0.4881, lr: 0.023980, batch_cost: 3.9854, reader_cost: 2.99336, ips: 64.2341 samples/sec | ETA 00:14:36
2021-11-03 00:41:55 [INFO]	[TRAIN] epoch: 58, iter: 290/500, loss: 0.4767, lr: 0.023001, batch_cost: 3.9613, reader_cost: 2.97126, ips: 64.6258 samples/sec | ETA 00:13:51
2021-11-03 00:42:36 [INFO]	[TRAIN] epoch: 60, iter: 300/500, loss: 0.4770, lr: 0.022018, batch_cost: 4.0457, reader_cost: 3.05480, ips: 63.2773 samples/sec | ETA 00:13:29
2021-11-03 00:42:36 [INFO]	Start evaluating (total_samples: 380, total_iters: 380)...
380/380 [==============================] - 15s 41ms/step - batch_cost: 0.0405 - reader cost: 0.00
2021-11-03 00:42:51 [INFO]	[EVAL] #Images: 380 mIoU: 0.5178 Acc: 0.6839 Kappa: 0.4066 
2021-11-03 00:42:51 [INFO]	[EVAL] Class IoU: 
[0.5472 0.4884]
2021-11-03 00:42:51 [INFO]	[EVAL] Class Acc: 
[0.5666 0.9267]
2021-11-03 00:42:51 [INFO]	[EVAL] The model with the best validation mIoU (0.5178) was saved at iter 300.
2021-11-03 00:43:31 [INFO]	[TRAIN] epoch: 62, iter: 310/500, loss: 0.4763, lr: 0.021029, batch_cost: 3.9940, reader_cost: 3.00568, ips: 64.0955 samples/sec | ETA 00:12:38
2021-11-03 00:44:11 [INFO]	[TRAIN] epoch: 64, iter: 320/500, loss: 0.4769, lr: 0.020036, batch_cost: 4.0372, reader_cost: 3.04802, ips: 63.4096 samples/sec | ETA 00:12:06
2021-11-03 00:44:51 [INFO]	[TRAIN] epoch: 66, iter: 330/500, loss: 0.4745, lr: 0.019037, batch_cost: 3.9867, reader_cost: 3.00122, ips: 64.2138 samples/sec | ETA 00:11:17
2021-11-03 00:45:32 [INFO]	[TRAIN] epoch: 68, iter: 340/500, loss: 0.4768, lr: 0.018032, batch_cost: 4.0501, reader_cost: 3.06707, ips: 63.2086 samples/sec | ETA 00:10:48
2021-11-03 00:46:12 [INFO]	[TRAIN] epoch: 70, iter: 350/500, loss: 0.4729, lr: 0.017021, batch_cost: 3.9921, reader_cost: 3.00525, ips: 64.1261 samples/sec | ETA 00:09:58
2021-11-03 00:46:12 [INFO]	Start evaluating (total_samples: 380, total_iters: 380)...
380/380 [==============================] - 16s 42ms/step - batch_cost: 0.0418 - reader cost: 0.001
2021-11-03 00:46:28 [INFO]	[EVAL] #Images: 380 mIoU: 0.5345 Acc: 0.6975 Kappa: 0.4282 
2021-11-03 00:46:28 [INFO]	[EVAL] Class IoU: 
[0.5561 0.513 ]
2021-11-03 00:46:28 [INFO]	[EVAL] Class Acc: 
[0.5791 0.9217]
2021-11-03 00:46:28 [INFO]	[EVAL] The model with the best validation mIoU (0.5345) was saved at iter 350.
2021-11-03 00:47:07 [INFO]	[TRAIN] epoch: 72, iter: 360/500, loss: 0.4644, lr: 0.016003, batch_cost: 3.9630, reader_cost: 2.97853, ips: 64.5982 samples/sec | ETA 00:09:14
2021-11-03 00:47:48 [INFO]	[TRAIN] epoch: 74, iter: 370/500, loss: 0.4519, lr: 0.014978, batch_cost: 4.0846, reader_cost: 3.09954, ips: 62.6738 samples/sec | ETA 00:08:51
2021-11-03 00:48:28 [INFO]	[TRAIN] epoch: 76, iter: 380/500, loss: 0.4556, lr: 0.013945, batch_cost: 3.9984, reader_cost: 3.00991, ips: 64.0259 samples/sec | ETA 00:07:59
2021-11-03 00:49:08 [INFO]	[TRAIN] epoch: 78, iter: 390/500, loss: 0.4544, lr: 0.012903, batch_cost: 3.9866, reader_cost: 2.99738, ips: 64.2149 samples/sec | ETA 00:07:18
2021-11-03 00:49:48 [INFO]	[TRAIN] epoch: 80, iter: 400/500, loss: 0.4501, lr: 0.011852, batch_cost: 4.0086, reader_cost: 3.02056, ips: 63.8634 samples/sec | ETA 00:06:40
2021-11-03 00:49:48 [INFO]	Start evaluating (total_samples: 380, total_iters: 380)...
380/380 [==============================] - 16s 41ms/step - batch_cost: 0.0413 - reader cost: 0.001
2021-11-03 00:50:04 [INFO]	[EVAL] #Images: 380 mIoU: 0.4670 Acc: 0.6418 Kappa: 0.3409 
2021-11-03 00:50:04 [INFO]	[EVAL] Class IoU: 
[0.521  0.4131]
2021-11-03 00:50:04 [INFO]	[EVAL] Class Acc: 
[0.5326 0.9392]
2021-11-03 00:50:04 [INFO]	[EVAL] The model with the best validation mIoU (0.5345) was saved at iter 350.
2021-11-03 00:50:44 [INFO]	[TRAIN] epoch: 82, iter: 410/500, loss: 0.4472, lr: 0.010790, batch_cost: 4.0134, reader_cost: 3.02345, ips: 63.7862 samples/sec | ETA 00:06:01
2021-11-03 00:51:24 [INFO]	[TRAIN] epoch: 84, iter: 420/500, loss: 0.4489, lr: 0.009717, batch_cost: 3.9976, reader_cost: 3.01048, ips: 64.0384 samples/sec | ETA 00:05:19
2021-11-03 00:52:04 [INFO]	[TRAIN] epoch: 86, iter: 430/500, loss: 0.4547, lr: 0.008630, batch_cost: 3.9965, reader_cost: 3.01080, ips: 64.0565 samples/sec | ETA 00:04:39
2021-11-03 00:52:44 [INFO]	[TRAIN] epoch: 88, iter: 440/500, loss: 0.4307, lr: 0.007528, batch_cost: 3.9896, reader_cost: 3.00905, ips: 64.1663 samples/sec | ETA 00:03:59
2021-11-03 00:53:24 [INFO]	[TRAIN] epoch: 90, iter: 450/500, loss: 0.4432, lr: 0.006408, batch_cost: 3.9713, reader_cost: 2.98270, ips: 64.4622 samples/sec | ETA 00:03:18
2021-11-03 00:53:24 [INFO]	Start evaluating (total_samples: 380, total_iters: 380)...
380/380 [==============================] - 15s 41ms/step - batch_cost: 0.0406 - reader cost: 0.001
2021-11-03 00:53:39 [INFO]	[EVAL] #Images: 380 mIoU: 0.5603 Acc: 0.7185 Kappa: 0.4641 
2021-11-03 00:53:39 [INFO]	[EVAL] Class IoU: 
[0.5741 0.5465]
2021-11-03 00:53:39 [INFO]	[EVAL] Class Acc: 
[0.5981 0.9273]
2021-11-03 00:53:39 [INFO]	[EVAL] The model with the best validation mIoU (0.5603) was saved at iter 450.
2021-11-03 00:54:19 [INFO]	[TRAIN] epoch: 92, iter: 460/500, loss: 0.4341, lr: 0.005265, batch_cost: 3.9555, reader_cost: 2.97218, ips: 64.7195 samples/sec | ETA 00:02:38
2021-11-03 00:54:59 [INFO]	[TRAIN] epoch: 94, iter: 470/500, loss: 0.4358, lr: 0.004094, batch_cost: 3.9997, reader_cost: 3.01172, ips: 64.0056 samples/sec | ETA 00:01:59
2021-11-03 00:55:39 [INFO]	[TRAIN] epoch: 96, iter: 480/500, loss: 0.4367, lr: 0.002883, batch_cost: 4.0574, reader_cost: 3.07041, ips: 63.0940 samples/sec | ETA 00:01:21
2021-11-03 00:56:20 [INFO]	[TRAIN] epoch: 98, iter: 490/500, loss: 0.4348, lr: 0.001611, batch_cost: 4.0403, reader_cost: 3.05264, ips: 63.3610 samples/sec | ETA 00:00:40
2021-11-03 00:57:00 [INFO]	[TRAIN] epoch: 100, iter: 500/500, loss: 0.4246, lr: 0.000186, batch_cost: 4.0148, reader_cost: 3.02799, ips: 63.7634 samples/sec | ETA 00:00:00
2021-11-03 00:57:00 [INFO]	Start evaluating (total_samples: 380, total_iters: 380)...
380/380 [==============================] - 16s 41ms/step - batch_cost: 0.0409 - reader cost: 0.00
2021-11-03 00:57:15 [INFO]	[EVAL] #Images: 380 mIoU: 0.5772 Acc: 0.7321 Kappa: 0.4874 
2021-11-03 00:57:15 [INFO]	[EVAL] Class IoU: 
[0.586  0.5684]
2021-11-03 00:57:15 [INFO]	[EVAL] Class Acc: 
[0.6112 0.9296]
2021-11-03 00:57:16 [INFO]	[EVAL] The model with the best validation mIoU (0.5772) was saved at iter 500.
<class 'paddle.nn.layer.conv.Conv2D'>'s flops has been counted
<class 'paddle.nn.layer.norm.BatchNorm2D'>'s flops has been counted
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/tensor/creation.py:125: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. 
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  if data.dtype == np.object:
Total Flops: 795137504     Total Params: 1543954
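Because training was launched with --use_vdl, the loss curve and evaluation metrics can also be browsed interactively with VisualDL, e.g. by pointing it at the save directory with visualdl --logdir output (assuming the VDL logs were written alongside the checkpoints in output/).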

V. Testing

! python val.py \
    --config /home/aistudio/bisenet_optic_disc_512x512_1k.yml \
    --model_path output/best_model/model.pdparams
2021-11-03 00:59:45 [INFO] ---------------Config Information---------------
(same configuration as printed at the start of training)
------------------------------------------------
2021-11-03 00:59:55 [INFO] Loading pretrained model from output/best_model/model.pdparams
2021-11-03 00:59:57 [INFO] There are 363/363 variables loaded into FCN.
2021-11-03 00:59:57 [INFO] Loaded trained params of model successfully
2021-11-03 00:59:57 [INFO] Start evaluating (total_samples: 380, total_iters: 380)...
380/380 [==============================] - 16s 41ms/step - batch_cost: 0.0409 - reader cost: 0.002
2021-11-03 01:00:13 [INFO] [EVAL] #Images: 380 mIoU: 0.5772 Acc: 0.7321 Kappa: 0.4874
2021-11-03 01:00:13 [INFO] [EVAL] Class IoU: [0.586  0.5684]
2021-11-03 01:00:13 [INFO] [EVAL] Class Acc: [0.6112 0.9296]
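As a sanity check, the reported mIoU is simply the mean of the two per-class IoU values: (0.586 + 0.5684) / 2 ≈ 0.5772, i.e. the metrics of the best checkpoint saved at iteration 500 are reproduced exactly.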

VI. Exporting the static model

! python export.py \
    --config /home/aistudio/bisenet_optic_disc_512x512_1k.yml \
    --model_path output/best_model/model.pdparams
W1103 01:00:57.781003  9593 device_context.cc:404] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W1103 01:00:57.785893  9593 device_context.cc:422] device: 0, cuDNN Version: 7.6.
2021-11-03 01:01:09 [INFO]	Loaded trained params of model successfully.
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  return (isinstance(seq, collections.Sequence) and
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/math_op_patch.py:322: UserWarning: /tmp/tmpiu3mi0oc.py:28
The behavior of expression A + B has been unified with elementwise_add(X, Y, axis=-1) from Paddle 2.0. If your code works well in the older versions but crashes in this version, try to use elementwise_add(X, Y, axis=0) instead of A + B. This transitional warning will be dropped in the future.
  op_type, op_type, EXPRESSION_MAP[method_name]))
(the same UserWarning is repeated for each temporarily generated module)
2021-11-03 01:01:11 [INFO]	Model is saved in ./output.

VII. Prediction

deploy.yaml (written to the output/ directory by the export step):

Deploy:
  model: model.pdmodel
  params: model.pdiparams
  transforms:
  - type: Normalize
! pip install -e .
from PIL import Image
img=Image.open('/home/aistudio/cat/images/Siamese_79.jpg')
img

from PIL import Image
img=Image.open('/home/aistudio/PaddleSeg/output/Siamese_79.png')
img

! python deploy/python/infer.py --config output/deploy.yaml --image_path /home/aistudio/cat/images/Siamese_79.jpg
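To judge the segmentation quality at a glance, the predicted mask can be blended over the original photo. A small sketch (the input paths are the ones used in the prediction step above; the output path is just an example):

from PIL import Image

# Blend the predicted pseudo-color mask with the source photo
img = Image.open('/home/aistudio/cat/images/Siamese_79.jpg').convert('RGBA')
mask = Image.open('/home/aistudio/PaddleSeg/output/Siamese_79.png').convert('RGBA').resize(img.size)
overlay = Image.blend(img, mask, alpha=0.5)
overlay.save('/home/aistudio/overlay_Siamese_79.png')
overlay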