1. Create a directory structure for your project
1.1 One Directory and Two Files
The first step is to create a folder named mask for the exported model, and to create __init__.py and module.py files under this folder to hold the model export code. The folder name can be customized; this works much like a package definition in Python. The other files related to the exported model are also placed in this folder so that scripts can invoke the model conveniently.
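For reference, the resulting layout looks roughly like this (the folder name mask matches the name used later in @moduleinfo and hub install; __init__.py is typically left empty):

```
mask/
├── __init__.py   # typically left empty
└── module.py     # holds the model export code described below
```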
1.2 Module File Information
Specific model information needs to be written in this script file.
First, import the dependent libraries. In addition to the packages required by your own model, you also need to import the following two packages, which are required for exporting the model as a module:
```python
import paddlehub as hub
from paddlehub.module.module import runnable, moduleinfo, serving
```
Secondly, you need to fill in the module's metadata, which is written right after the imports. The meaning of each parameter is indicated below:
```python
@moduleinfo(
    name="mask",                                            # module name, used by hub install and hub.Module
    version="1.0.0",                                        # module version number
    summary="This is a PaddleHub Module. Just for test.",   # brief description of the module
    author="beordie",                                       # author name
    author_email="",                                        # author email address
    type="mask")                                            # module type
```
Finally, wrap the model code that needs to be exported in a class, and declare an object globally so that subsequent calls can use it:
```python
# Additional imports used by the model code below (the Keras import paths are an
# assumption based on the load_model / img_to_array / preprocess_input helpers;
# adjust them to match your own environment).
import os
import json
import argparse

import cv2
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input


# (In module.py the @moduleinfo decorator shown above is typically placed directly on this class.)
class MaskPredict:
    def __init__(self, in_path='img', out_path='demo'):
        self.maskNet = load_model('/home/aistudio/work/mask_detector.model')
        self.in_path = in_path
        self.out_path = out_path

    def Iou(self, bbox1, bbox2):
        # Calculate the IoU (intersection over union) of two bounding boxes
        area1 = (bbox1[2] - bbox1[0]) * (bbox1[3] - bbox1[1])
        area2 = (bbox2[2] - bbox2[0]) * (bbox2[3] - bbox2[1])
        w = min(bbox1[3], bbox2[3]) - max(bbox1[1], bbox2[1])
        h = min(bbox1[2], bbox2[2]) - max(bbox1[0], bbox2[0])
        if w <= 0 or h <= 0:
            return 0
        area_mid = w * h
        return area_mid / (area1 + area2 - area_mid)

    def GetFace(self):
        files = os.listdir(self.in_path)
        # Use the pyramidbox_lite_server module from PaddleHub for face detection
        face_detector = hub.Module(name="pyramidbox_lite_server")
        for i in range(len(files)):
            faces = []
            preds = []
            img = cv2.imread(self.in_path + '/%d.jpg' % i)
            result = face_detector.face_detection(images=[img])
            img = img_to_array(img)
            data = result[0]['data']
            bbox_upgrade = []
            index = []
            for j in range(len(data)):
                left, right = int(data[j]['left']), int(data[j]['right'])
                top, bottom = int(data[j]['top']), int(data[j]['bottom'])
                bbox = (left, top, right, bottom)
                if right > 1600 and bottom > 1600:
                    for k in range(len(bbox_buffer)):
                        if self.Iou(bbox, bbox_buffer[k]) > 0.1 and k not in index:
                            index.append(k)
                            break
                    bbox_upgrade.append((left, top, right, bottom))
                else:
                    preds.append([left, top, right, bottom])
                    faces.append(img[top:bottom, left:right])
            bbox_buffer = bbox_upgrade.copy()
            if len(faces) > 0:
                count = 0
                for face in faces:
                    face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
                    face = cv2.resize(face, (224, 224))
                    face = img_to_array(face)
                    face = preprocess_input(face)
                    face = np.expand_dims(face, axis=0)
                    (mask, withoutMask) = self.maskNet.predict(face)[0]
                    label = "Mask" if mask > withoutMask else "No Mask"
                    color = (0, 255, 0) if label == "Mask" else (0, 0, 255)
                    label = "{}:{:.2f}%".format(label, max(mask, withoutMask) * 100)
                    cv2.putText(img, label, (preds[count][0], preds[count][1] - 10),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
                    cv2.rectangle(img, (preds[count][0], preds[count][1]),
                                  (preds[count][2], preds[count][3]), color, 2)
                    count += 1
            cv2.imwrite(self.out_path + '/%d.jpg' % i, img)
            print('Processing image {}'.format(i))


mask = MaskPredict()
```
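Since the constructor exposes in_path and out_path, the input and output directories can also be overridden when the class is instantiated directly; the paths below are just placeholders:

```python
# Hypothetical example: read images from test_img/ and write annotated results to result/
mask = MaskPredict(in_path='test_img', out_path='result')
mask.GetFace()
```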
This completes the basic preparation for exporting the model. Of course, if you need other functionality, you will also need to write the corresponding parameters and methods.
1.3 Command Line Invocation and Service Deployment
If you want the Module to support command-line invocation (with the desired parameters specified dynamically), you need to provide a runnable interface (that is, a method decorated with @runnable) that parses the incoming arguments, runs the model prediction, and returns the results. If you do not need the command-line function, you do not have to implement this interface; when the Module is executed from the PaddleHub command line, PaddleHub automatically detects that it does not support command-line mode and gives a corresponding prompt.
```python
# The argument parser is defined inside the class (typically in __init__) so that
# the runnable method below can access it through self.parser
self.parser = argparse.ArgumentParser(
    description="Run the mnist_predict module.",
    prog='hub run mnist_predict',
    usage='%(prog)s',
    add_help=True)
self.parser.add_argument('--img', type=str, default=None, help="img to predict")
```
```python
@runnable
def runnable(self, argvs):
    # Parse the command-line arguments and run the prediction
    args = self.parser.parse_args(argvs)
    self.in_path = args.img
    return self.GetFace()
```
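With the parser and the @runnable interface in place, the module can be run from the PaddleHub command line once it has been installed (see section 2). A rough example, using the --img argument defined above with a placeholder path:

```bash
hub run mask --img img
```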
If you want the Module to support deployment as a prediction service with PaddleHub Serving, you need to provide a serving interface (that is, a method decorated with @serving) that parses the incoming data, runs the prediction, and returns the results. If you do not need to deploy a prediction service with PaddleHub Serving, you do not need to add the @serving decorator.
```python
@serving
def serving(self, img_b64):
    # Parse the incoming base64-encoded image, run the prediction,
    # and return a JSON-serialized result here (this is only a stub)
    print('serving')
    return json.dumps('serving')
```
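For reference, here is a minimal client sketch for querying the deployed service. It assumes the service was started with `hub serving start -m mask` on the default port 8866, that the keys of the JSON body are passed to the @serving method as parameters (so img_b64 carries a base64-encoded image), and that the test image path is just a placeholder:

```python
import base64
import json

import requests

# Read a local test image and encode it as base64 for the img_b64 parameter
with open('img/0.jpg', 'rb') as f:
    img_b64 = base64.b64encode(f.read()).decode('utf-8')

# PaddleHub Serving exposes custom modules at /predict/<module_name>
url = 'http://127.0.0.1:8866/predict/mask'
headers = {'Content-Type': 'application/json'}

r = requests.post(url, headers=headers, data=json.dumps({'img_b64': img_b64}))
print(r.json())
```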
2. Installation and use of models
You can install the module with the hub install command, passing the name of the folder that contains module.py. You need to cd to the parent directory of that folder first.
```bash
!hub install mask
```
It can then be loaded and used like any normal PaddleHub model.
```python
import paddlehub as hub

Mask = hub.Module(name="mask")
Mask.GetFace()
```