Contents

    • Technology introduction
      • Introduction
      • Technology stack
    • Implementation
      • Data
      • Reading the data
      • Create and train the model
      • Model prediction and evaluation
      • Model export

Technology introduction

Introduction

Automated machine learning is a set of methods for building machine learning models automatically. It mainly covers three areas: first, hyperparameter optimization; second, automated feature engineering and automatic selection of machine learning algorithms; third, neural network architecture search. This article focuses on the third area, neural network architecture search.

The first two areas of automated machine learning share one characteristic: they search over existing algorithms rather than creating new ones. Generally speaking, when machine learning experts develop applications or build models, they are unlikely to invent a new algorithm from scratch. With deep neural networks, however, the situation changes. Strictly speaking, the set of basic building blocks of a neural network is fixed and finite, but every time we assemble a model, a different combination of those basic units can be regarded as a new neural network. Building on this observation, the third automated machine learning technique, Neural Architecture Search (NAS), constructs new network architectures out of these basic neural network units. The end result can be a very powerful neural network.

A large number of researchers have studied this area in depth. However, because the underlying models are deep neural networks and the architecture search runs on top of them, the computational requirements are usually quite high. At present, the most capable algorithm that can run on a single GPU is ENAS, and AutoKeras, an open-source library built on ENAS, was released along with its paper; anyone can download and use it.

Under the hood, AutoKeras is based on the ENAS algorithm, and it provides several task-specific interfaces for different applications:

  • Image classification
  • Image regression
  • Text classification
  • Text regression
  • Structured data classification
  • Structured data regression

Here we go straight to the more complex case and use AutoKeras for automatic image classification; a quick sketch of these task interfaces follows below.
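
The list above corresponds to the task classes exposed by the autokeras package. A minimal sketch (the constructors accept further arguments; see the AutoKeras documentation for the full signatures):

import autokeras as ak

# Each task gets a dedicated high-level class; all of them follow the same
# fit / predict / evaluate / export_model workflow used in this article.
image_clf = ak.ImageClassifier(max_trials=1)           # image classification
image_reg = ak.ImageRegressor(max_trials=1)            # image regression
text_clf = ak.TextClassifier(max_trials=1)             # text classification
text_reg = ak.TextRegressor(max_trials=1)              # text regression
sd_clf = ak.StructuredDataClassifier(max_trials=1)     # structured data classification
sd_reg = ak.StructuredDataRegressor(max_trials=1)      # structured data regression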

Technology stack

  • tensorflow
  • pathlib
  • numpy
  • autokeras
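
All of these are available in a standard Python environment: pathlib ships with Python 3 itself, and the rest can be installed with pip (for example, pip install autokeras, which depends on tensorflow and numpy).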

Implementation

Data

Here we use the flower_photos dataset from Keras, which contains images of five categories of flowers, 3,670 images in total. You can download this dataset directly with Keras's built-in utility function tf.keras.utils.get_file. The specific commands are as follows:

import tensorflow as tf
AUTOTUNE = tf.data.experimental.AUTOTUNE
import pathlib
import numpy as np
import os

data_dir = tf.keras.utils.get_file(origin='https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
                                         fname='flower_photos', untar=True)
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*.jpg')))
print("This directory: ",data_dir," have ",image_count,"images")

CLASS_NAMES = np.array([item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"])
print("CLASS_NAMES: ", CLASS_NAMES, ", they are the names of the secondary directories")
This directory:  /home/fonttian/.keras/datasets/flower_photos  have  3670 images
CLASS_NAMES:  ['roses' 'dandelion' 'daisy' 'sunflowers' 'tulips'] , they are the names of the secondary directories

Reading the data

Next we convert the data loading into the tf.data Dataset format (referred to as "tfds" in the prints below), which is much more efficient. The specific implementation is as follows:

import warnings
warnings.filterwarnings("ignore")

print("-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- parameters -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --")

BATCH_SIZE = 256
IMG_HEIGHT = 224
IMG_WIDTH = 224
STEPS_PER_EPOCH = np.ceil(image_count/BATCH_SIZE)

print("----------------- start tfds -----------------")
list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*'))

def get_label(file_path):
  # convert the path to a list of path components
  parts = tf.strings.split(file_path, os.path.sep)
  # The second to last is the class-directory
  return parts[-2] == CLASS_NAMES

def decode_img(img):
  # convert the compressed string to a 3D uint8 tensor
  img = tf.image.decode_jpeg(img, channels=3)
  # Use `convert_image_dtype` to convert to floats in the [0,1] range.
  img = tf.image.convert_image_dtype(img, tf.float32)
  # resize the image to the desired size.
  return tf.image.resize(img, [IMG_HEIGHT, IMG_WIDTH])

def process_path(file_path):
  label = get_label(file_path)
  # load the raw data from the file as a string
  img = tf.io.read_file(file_path)
  img = decode_img(img)
  return img, label

# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
labeled_ds = list_ds.map(process_path, num_parallel_calls=AUTOTUNE)
print("type(labeled_ds): ".type(labeled_ds))
----------------- parameters -----------------
----------------- start tfds -----------------
type(labeled_ds):  <class 'tensorflow.python.data.ops.dataset_ops.ParallelMapDataset'>
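
Before training, you can optionally tune this input pipeline for throughput. A minimal sketch, assuming the labeled_ds built above; cache, shuffle, and prefetch are standard tf.data methods, and the dataset is deliberately left unbatched because (as the step counts in the logs below suggest) AutoKeras batches the data itself during fit:

# Optional tf.data pipeline tuning (a sketch; the rest of the article keeps
# using the plain labeled_ds).
tuned_ds = (labeled_ds
            .cache()                    # keep decoded images in memory after the first pass
            .shuffle(buffer_size=1000)  # reshuffle a 1000-example window each epoch
            .prefetch(AUTOTUNE))        # overlap preprocessing with training
# tuned_ds can then be passed to fit in place of labeled_ds.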

Create and train the model

We then create the model with a few simple commands and train it using the fit method.

print("----------------- autokeras.fit with tfds -----------------")

import autokeras as ak
clf = ak.ImageClassifier(
    overwrite=True,
    max_trials=1)

print("type(clf) :".type(clf))

# Feed the tensorflow Dataset to the classifier.
# model = clf.fit(train_ds, epochs=10)
clf.fit(labeled_ds, epochs=10)
print("End of training")
----------------- autokeras.fit with tfds -----------------
type(clf) : <class 'autokeras.tasks.image.ImageClassifier'>

Starting new trial

Epoch 1/10
92/92 [==============================] - ETA: 0s - loss: 1.6533 - accuracy: 0.18 ...
......

Trial complete

Trial summary
|-Trial ID: c908fe149791b23cd0f4595ec5bde856
|-Score: 1.596627950668335
|-Best step: 6
Hyperparameters:
|-classification_head_1/dropout_rate: 0.5
|-classification_head_1/spatial_reduction_1/reduction_type: flatten
|-image_block_1/augment: False
|-image_block_1/block_type: vanilla
|-image_block_1/conv_block_1/dropout_rate: 0.25
|-image_block_1/conv_block_1/filters_0_0: 32
|-image_block_1/conv_block_1/filters_0_1: 64
|-image_block_1/conv_block_1/kernel_size: 3
|-image_block_1/conv_block_1/max_pooling: True
|-image_block_1/conv_block_1/num_blocks: 1
|-image_block_1/conv_block_1/num_layers: 2
|-image_block_1/conv_block_1/separable: False
|-image_block_1/normalize: True
|-optimizer: adam

INFO:tensorflow:Oracle triggered exit
115/115 [==============================] - 15s 134ms/step - loss: 2.7511 - accuracy: 0.2343
End of training
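
The trial summary above lists the hyperparameters the search settled on: a vanilla convolutional block, normalization enabled, augmentation disabled, and so on. If you want to constrain the search space yourself rather than leaving everything to ImageClassifier, AutoKeras also offers a functional AutoModel API. A minimal sketch, assuming the same labeled_ds; the argument names mirror the hyperparameters in the summary:

import autokeras as ak

# Build the search space explicitly: image input -> searchable image block
# -> classification head.
input_node = ak.ImageInput()
output_node = ak.ImageBlock(
    block_type="vanilla",  # restrict the search to plain convolutional nets
    normalize=True,        # always normalize the images
    augment=False,         # do not search over augmentation
)(input_node)
output_node = ak.ClassificationHead()(output_node)

auto_model = ak.AutoModel(
    inputs=input_node, outputs=output_node, overwrite=True, max_trials=1)
# auto_model.fit(labeled_ds, epochs=10)  # same fit call as with ImageClassifier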

Model prediction and evaluation

The model's training process produces a lot of output, but using the trained model requires only a few lines of code, and the same is true for prediction and export. The best model found during the search is saved as training proceeds, but the export_model method is needed to output it.

print("----------------- Predict with the best model -----------------")
# Predict with the best model.
predicted_y = clf.predict(labeled_ds)
# predicted_y = clf.predict(train_ds)
# Evaluate the best model with testing data.
print(clf.evaluate(labeled_ds))
# print(clf.evaluate(train_ds))
----------------- Predict with the best model -----------------
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
......
115/115 [==============================] - 5s 42ms/step - loss: 1.6052 - accuracy: 0.2455
[1.6052242517471313, 0.24550408124923706]
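
Note that, for simplicity, the code above predicts and evaluates on the very data the model was trained on, so the reported accuracy should not be read as generalization performance. A minimal sketch of a held-out split, assuming the labeled_ds from earlier (the 600-example split size is an arbitrary choice; list_files shuffles the file paths by default, so the split is random):

# Hold out part of the data for evaluation instead of reusing the training set.
test_ds = labeled_ds.take(600)   # first 600 examples for testing
train_ds = labeled_ds.skip(600)  # remaining examples for training

# clf.fit(train_ds, epochs=10)
# print(clf.evaluate(test_ds))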

Model export

print("----------------- Export as a Keras Model -----------------" )
# Export as a Keras Model.
model = clf.export_model()

print(type(model))  # <class 'tensorflow.python.keras.engine.training.Model'>

try:
    model.save("model_autokeras", save_format="tf")
except Exception:
    # Fall back to the HDF5 format if the SavedModel export fails.
    model.save("model_autokeras.h5")

print("-----------------End of the program -----------------")
----------------- Export as a Keras Model -----------------
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
......
If using Keras pass *_constraint arguments to layers.
INFO:tensorflow:Assets written to: model_autokeras/assets
-----------------End of the program -----------------

The code above exports the model. Since the underlying library is still TensorFlow, saving the model actually calls TensorFlow's model-saving methods, so if the model needs to be loaded or deployed, TensorFlow's own methods can be used to save and load it. Inspecting the exported folder shows that the underlying files are essentially standard TensorFlow artifacts.
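
To load the exported model later, for example for deployment, you can use Keras's standard load_model together with ak.CUSTOM_OBJECTS, which resolves any AutoKeras-specific custom layers inside the exported model (a minimal sketch following the AutoKeras export tutorial; the path matches the save call above):

from tensorflow.keras.models import load_model
import autokeras as ak

# Load the SavedModel directory written by model.save above.
loaded_model = load_model("model_autokeras", custom_objects=ak.CUSTOM_OBJECTS)
# loaded_model is now a plain Keras model and can be used directly, e.g.:
# predictions = loaded_model.predict(some_images)  # some_images is a placeholder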