Classifying the Fashion-MNIST dataset with a DNN
This tutorial is based on the official TensorFlow Keras tutorial.
Fashion-MNIST is an image dataset designed as a drop-in replacement for the MNIST handwritten-digit dataset. It is provided by the research arm of Zalando, a German fashion technology company, and contains front-view images of 70,000 products in 10 categories. The size, format, and training/test split of Fashion-MNIST are exactly the same as the original MNIST: 60,000 training images and 10,000 test images, each a 28×28 grayscale picture. You can use it to benchmark your machine learning and deep learning algorithms without changing any code.
- Import the required packages
```python
# TensorFlow and tf.keras
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
import tensorflow as tf
from tensorflow import keras

# Helper libraries
import numpy as np
import matplotlib.pyplot as plt

print(tf.__version__)
```
```
1.12.0
```
- Import the dataset
```python
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
```
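Because Fashion-MNIST keeps exactly the same format as MNIST, the identical `load_data()` call works for both datasets. A minimal sketch of that drop-in property (it also downloads plain MNIST, which is otherwise not used in this tutorial):

```python
from tensorflow import keras

# Both datasets come through the same Keras API and return arrays of the same
# shape and dtype, which is why no other code has to change.
for name, dataset in [('MNIST', keras.datasets.mnist),
                      ('Fashion-MNIST', keras.datasets.fashion_mnist)]:
    (x_train, y_train), (x_test, y_test) = dataset.load_data()
    print(name, x_train.shape, x_test.shape, x_train.dtype)
# MNIST (60000, 28, 28) (10000, 28, 28) uint8
# Fashion-MNIST (60000, 28, 28) (10000, 28, 28) uint8
```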
- Define the class-name list (to map the numeric labels to readable names)
```python
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
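The labels themselves are just integers from 0 to 9; `class_names` is only used to turn them into readable text when plotting. A quick sketch:

```python
# train_labels holds integers 0-9; class_names maps each integer to a name.
print(train_labels[0])               # e.g. 9 for the first training image
print(class_names[train_labels[0]])  # 'Ankle boot'
```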
```python
train_images.shape
```
```
(60000, 28, 28)
```
```python
len(train_labels)
```
```
60000
```
- Normalize the image data (scale pixel values from 0–255 down to 0–1 so the inputs stay small and the network trains more easily)
```python
train_images = train_images / 255.0
test_images = test_images / 255.0
```
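Dividing the uint8 arrays by 255.0 promotes them to floating-point values in [0, 1]; a quick sanity check:

```python
# After scaling, pixel values lie in [0, 1] as floating-point numbers.
print(train_images.dtype, train_images.min(), train_images.max())
# float64 0.0 1.0
```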
- Show a single image
```python
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
```
- Display multiple images
```python
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i]])
```
- Define the network
```python
model = keras.Sequential([
    # shape: (-1, 28, 28) -> (-1, 28*28)
    keras.layers.Flatten(input_shape=(28, 28)),
    # shape: (-1, 28*28) -> (-1, 256)
    keras.layers.Dense(256, activation=tf.nn.relu),
    # shape: (-1, 256) -> (-1, 256); randomly drops 20% of the units during training
    keras.layers.Dropout(0.2, noise_shape=None, seed=None),
    # shape: (-1, 256) -> (-1, 64)
    keras.layers.Dense(64, activation=tf.nn.relu),
    # shape: (-1, 64) -> (-1, 10); one probability per class
    keras.layers.Dense(10, activation=tf.nn.softmax)
])
```
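`model.summary()` prints the same shape transitions annotated in the comments above, together with the parameter counts (weights plus biases for each Dense layer); roughly:

```python
model.summary()
# Flatten:          (None, 784)        0 parameters
# Dense (relu):     (None, 256)  200,960 parameters  = 784*256 + 256
# Dropout:          (None, 256)        0 parameters
# Dense (relu):     (None, 64)    16,448 parameters  = 256*64 + 64
# Dense (softmax):  (None, 10)       650 parameters  = 64*10 + 10
```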
- Compile the model (choose the optimizer and the loss function)
```python
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```
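`sparse_categorical_crossentropy` is used because the labels are plain integers. If the labels were one-hot encoded instead, the loss would be `categorical_crossentropy`; a short sketch of the difference (the conversion is not needed in this tutorial):

```python
# Integer label vs. its one-hot equivalent.
one_hot = keras.utils.to_categorical(train_labels, num_classes=10)
print(train_labels[0])  # 9
print(one_hot[0])       # [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
```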
- Train the model
```python
model.fit(train_images, train_labels, epochs=5)
```
```
Epoch 1/5
60000/60000 [==============================] - 8s 136us/step - loss: 0.4993 - acc: 0.8249
Epoch 2/5
60000/60000 [==============================] - 7s 121us/step - loss: 0.3819 - acc: 0.8627
Epoch 3/5
60000/60000 [==============================] - 8s 127us/step - loss: 0.3429 - acc: 0.8758
Epoch 4/5
60000/60000 [==============================] - 9s 142us/step - loss: 0.3142 - acc: 0.8857
Epoch 5/5
60000/60000 [==============================] - 6s 102us/step - loss: 0.2972 - acc: 0.8908

<tensorflow.python.keras.callbacks.History at 0xb59a50b70>
```
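`model.fit` also returns a History object whose `.history` dictionary stores the per-epoch metrics. A minimal sketch of plotting them (note that calling `fit` again continues training the already trained model):

```python
# Capture the History object and plot the per-epoch loss and accuracy.
history = model.fit(train_images, train_labels, epochs=5)
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['acc'], label='accuracy')  # key is 'accuracy' in newer TF versions
plt.xlabel('epoch')
plt.legend()
plt.show()
```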
- Test accuracy
```python
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
```
```
10000/10000 [==============================] - 0s 45us/step
Test accuracy: 0.867
```
- Apply the model (predict a single image)
```python
plt.figure()
plt.imshow(test_images[1])
plt.colorbar()
plt.grid(False)

# predict expects a batch, so slice a single-image batch with test_images[1:2]
predictions = model.predict(test_images[1:2])
```
```python
class_names[np.argmax(predictions)]
```
```
'Pullover'
```
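The same `predict` call also scales to the whole test set, which makes it possible to recover the `evaluate()` accuracy by hand; a short sketch:

```python
# Predict all 10,000 test images at once and recompute the accuracy manually.
all_predictions = model.predict(test_images)        # shape (10000, 10)
predicted_labels = np.argmax(all_predictions, axis=1)
print(np.mean(predicted_labels == test_labels))     # ~0.867, matching evaluate()
```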