Google Developer Days (GDD) is a global event that showcases Google's latest developer products and platforms. It is designed to help developers quickly build great apps, grow and retain an active user base, and make the most of the available tools to earn more. Google Developer Days 2018 was held in Shanghai on September 20 and 21.
On September 20, 2018, Laurence Moroney, a developer advocate at Google, and Yizhen Fu, a software engineer at Google Brain, gave the presentation "Introduction to TensorFlow", an introduction to using machine learning. This article reviews that presentation.
Machine learning and traditional programming
TensorFlow is an open source software library that uses data flow graphs for numerical computation. The nodes in the graph represent mathematical operations, and the edges represent the multidimensional data arrays (tensors) passed between them. Its flexible architecture lets you deploy computation across multiple platforms, such as one or more CPUs or GPUs in desktops, servers and mobile devices. TensorFlow was originally developed by researchers and engineers on the Google Brain team (part of Google's Machine Intelligence research organization) for machine learning and deep neural network research, but the system is general enough to be widely used in other fields of computing.
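To make the data flow graph idea concrete, here is a minimal sketch in the TensorFlow 1.x style that was current at the time of the talk (this example is mine, not from the presentation): the two constants and the matrix multiplication are nodes, and the tensors flowing between them are the edges.

import tensorflow as tf

# Two constant nodes that produce tensors...
a = tf.constant([[1.0, 2.0]])    # shape (1, 2)
b = tf.constant([[3.0], [4.0]])  # shape (2, 1)

# ...and an operation node that consumes them.
c = tf.matmul(a, b)

# Running the graph in a session evaluates the node: prints [[11.]]
with tf.Session() as sess:
    print(sess.run(c))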
Laurence Moroney talks about the changes he’s experienced:
- From a programmer's point of view, the web changed everything: through the web, programmers can write programs that reach hundreds of millions of people. This revolution brought new business models, such as Google, Baidu and Taobao.
- Smartphones brought another revolution, enabling businesses such as Didi and Uber.
- We’re on the verge of the next revolution, which is machine learning.
Motion detection app scenario
With the help of the phone's speed sensor, we can read the user's current speed and then classify the activity in code with a few simple rules (see the sketch after the list):
- speed < 4 is defined as walking,
- 4 <= speed < 12 is defined as running,
- speed >= 12 is defined as cycling.
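A minimal sketch of this rule-based approach (the function name and the assumption that speed is in km/h are mine, not from the talk):

# Hypothetical rule-based activity classifier.
def classify_activity(speed):
    if speed < 4:
        return "walking"
    elif speed < 12:
        return "running"
    else:
        return "cycling"

print(classify_activity(3))   # walking
print(classify_activity(10))  # running
print(classify_activity(20))  # cycling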
Simple activities can be detected with rules like these, but more complex ones, such as the user playing golf, cannot. Machine learning can help us solve this problem.
Traditional programming

In traditional programming, the programmer writes the rules, feeds in data, and the program produces answers.

Machine learning

In machine learning, the programmer instead provides the data and the answers (labels), and the machine works out the rules for itself.

Machine learning is about mimicking humans: using massive amounts of data and labels to derive rules and solve problems. Making machines learn like humans is the first step in machine learning.
The training stage

In the training stage, the model is fed data together with the answers (labels).

The inference stage

In the inference stage, the trained model is fed new data and produces predictions.
Code practice (presented by Yizhen Fu)
The relationship between numbers

Consider the following pairs of numbers:

X: -1, 0, 1, 2, 3, 4
Y: -3, -1, 1, 3, 5, 7

The relationship between them is y = 2x - 1.
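As a quick sanity check, this rule reproduces the table and also gives the value the model is asked to predict later (x = 10):

# y = 2x - 1 for the table values and for x = 10.
for x in [-1, 0, 1, 2, 3, 4, 10]:
    print(x, 2 * x - 1)   # -3, -1, 1, 3, 5, 7, 19

The code from the talk below lets a neural network learn this relationship from the data alone.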
from tensorflow import keras
import numpy as np

# Define a model with a single densely connected neuron.
model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')

# The training data: ys = 2 * xs - 1.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

# Train for 500 epochs, then predict y for x = 10.
model.fit(xs, ys, epochs=500)
print(model.predict([10.0]))
The result is approximately 18.976957, which is close to the expected value of 19 (2 × 10 - 1). The model has only seen six example pairs, so its prediction is a very close approximation rather than the exact answer.
Identify different clothes
Fashion-MNIST is an image dataset intended as a drop-in replacement for the MNIST handwritten digit dataset. It is provided by the research arm of Zalando, a German fashion technology company, and contains front-view images of 70,000 products in 10 categories. The image size, format and training/test split of Fashion-MNIST are exactly the same as the original MNIST: 60,000 training and 10,000 test images, each a 28×28 grayscale picture.
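A small sketch (not part of the talk's code) that loads the dataset through Keras and confirms the split and image size described above:

from tensorflow import keras

# Fashion-MNIST ships with Keras as a built-in dataset.
(train_images, train_labels), (test_images, test_labels) = \
    keras.datasets.fashion_mnist.load_data()

print(train_images.shape)                      # (60000, 28, 28) training images
print(test_images.shape)                       # (10000, 28, 28) test images
print(train_labels.min(), train_labels.max())  # labels run from 0 to 9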
By training on the Fashion-MNIST dataset, our model can be continuously optimized to improve its recognition accuracy.
import tensorflow as tf
from tensorflow import keras
import numpy as np
# Import the data
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Normalize the pixel values to the range 0-1
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define the model: flatten each 28x28 image, one hidden layer, 10-way softmax output
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(64, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax),
])
model.compile(optimizer=tf.train.AdadeltaOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(train_images, train_labels, epochs=5, verbose=2)

# Run inference on the test set and inspect one example
predictions = model.predict(test_images)
print(test_labels[4560])              # the true label of example 4560
print(np.argmax(predictions[4560]))   # the model's predicted label
The running results are as follows: with only 5 training epochs, the model reaches an accuracy of about 71%. Training the neural network for more epochs can improve the accuracy further, as in the sketch below.
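A minimal sketch of that, assuming the model, train_images, train_labels, test_images and test_labels from the listing above are still in scope (the epoch count of 20 is an arbitrary choice of mine, not from the talk):

# Continue training for more epochs...
model.fit(train_images, train_labels, epochs=20, verbose=2)

# ...then measure accuracy on the held-out test set.
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)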
That is all the content of this talk; I hope you find it helpful. Read more technical highlights from Google Developer Days 2018.