New Jiyuan reports
Source: TensorFlow
Editors: Jin Lei, Xiao Qin, Zhang Gan
【 New Jiyuan Guide 】TensorFlow 2.0 is finally out, and the Alpha version is available for early access. The new version features easy-to-use, extensible APIs and, at long last, a new logo.
TensorFlow 2.0 is finally here!
Earlier this morning, Google announced version 2.0 at the TensorFlow Dev Summit in California.
There were a few highlights:
- TensorFlow 2.0 Alpha is now available for early access;
- Version 2.0 features simplicity, clarity, and extensibility, greatly simplifying the APIs;
- Improved model-deployment capabilities for TensorFlow Lite and TensorFlow.js.
TensorFlow has been downloaded more than 41 million times worldwide and has more than 1,800 community contributors.
Although Google did not disclose specifics about the Chinese community, a world map displayed at the conference suggests, from the distribution of users, that China is TensorFlow's third-largest region after the United States and Europe.
Another notable change: as of 2.0, TensorFlow's logo has changed from a block-like shape to the two separate letters "T" and "F", perhaps signaling less redundancy and a simpler look.
Easy to use and extensible: TF 2.0 ushers in a new architecture
TensorFlow, launched in 2015, celebrated its third birthday last November and has grown into one of the most popular and widely adopted machine learning platforms in the world.
Previous feedback from developers was that they wanted TensorFlow to simplify its APIs, reduce redundancy, and improve documentation and examples. Based on these suggestions, the 2.0 release focuses on three things: simplicity, power, and extensibility.
Based on these three features, TensorFlow 2.0 also has a new architecture, as shown in the simplified concept diagram below:
TensorFlow 2.0 will focus on simplicity and ease of use with the following updates:
- Easy model building using Keras and Eager Execution;
- Robust model deployment for production environments on any platform;
- Powerful experimentation tools for research;
- Simplified APIs, achieved by cleaning up deprecated APIs and reducing duplication.
The following details the new features of TF 2.0.
This series of updates focuses on making TensorFlow easier to use.
With the release of TensorFlow 2.0, the training process will be very concise:
The main process is: data integration and transformation → model building → training → model saving.
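To make that concrete, here is a minimal sketch of those four steps using tf.keras (the dataset, layer sizes, and file name are illustrative choices, not from the announcement):

import tensorflow as tf

# 1. Data integration and transformation
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0

# 2. Model building
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# 3. Training
model.fit(x_train, y_train, epochs=5)

# 4. Model saving (HDF5 here; the file name is illustrative)
model.save('mnist_model.h5')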
TensorFlow also follows the principle of “deploy anywhere”, which makes it more flexible and convenient to use:
Here are a few highlights of TensorFlow 2.0 in this update:
TensorFlow 2.0 Alpha
- Easier to use: high-level APIs such as tf.keras will be easier to use, and Eager Execution will be the default. For example:
>>> tf.add(2, 3)
<tf.Tensor: id=2, shape=(), dtype=int32, numpy=5>
- Clearer: duplicate functionality has been removed; call syntax across different APIs is more consistent and intuitive; compatibility is more complete.
- More flexible: complete low-level APIs; internal operations can be accessed via tf.raw_ops; inheritable interfaces are provided for variables, checkpoints, and layers.
Of course, TensorFlow 2.0 Alpha is easy to install, with a single command:
pip install -U --pre tensorflow
Eager Execution and "@tf.function" are the focus of this update; how they work together is explained in more detail below.
One of the most obvious changes is that TensorFlow is now "eager first," meaning an op runs immediately when it is called. In TensorFlow 1.x, users would first compose a graph and then execute pieces of it via "tf.Session.run()".
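For contrast, here is a minimal sketch of that 1.x graph-and-session pattern (illustrative code, not from the talk; it runs only under TensorFlow 1.x):

import tensorflow as tf  # TensorFlow 1.x

# Build the graph first...
a = tf.constant([1, 2])
b = tf.constant([3, 4])
c = a + b

# ...then execute it inside a session.
with tf.Session() as sess:
    print(sess.run(c))  # [4 6]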
TensorFlow 2.0 radically simplifies the use of TensorFlow: the same great ops are now easier to understand and use.
a = tf.constant([1, 2])
b = tf.constant([3, 4])
print(a + b)
# returns: tf.Tensor([4 6], shape=(2,), dtype=int32)
TensorFlow 2.0 uses Keras as the core experience for developers. In 2.0 you can use Keras as before: build the model with the Sequential API, then call "compile" and "fit". All of the familiar "tf.keras" examples from tensorflow.org work "out of the box" in 2.0.
Keras’s “fit()” applies to many situations, but developers who need more flexibility now have more options. Take a look at the following example of a custom training loop written in TensorFlow 2.0 style:
def train_one_step(model, optimizer, x, y):
  with tf.GradientTape() as tape:
    logits = model(x)
    loss = compute_loss(y, logits)

  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))

  compute_accuracy(y, logits)
  return loss

def train(model, optimizer):
  # mnist_dataset(), compute_loss() and compute_accuracy() are helpers
  # defined elsewhere in the original example.
  train_ds = mnist_dataset()
  step = 0
  loss = 0.0
  for x, y in train_ds:
    step += 1
    loss = train_one_step(model, optimizer, x, y)
    if tf.equal(step % 10, 0):
      tf.print('Step', step, ': loss', loss,
               '; accuracy', compute_accuracy.result())
  accuracy = compute_accuracy.result()  # defined so the return below is valid
  return step, loss, accuracy
This example uses GradientTape in the style of autograd and applies the gradients manually via the optimizer. This is especially helpful when writing custom training loops with complex inner workings, as in reinforcement learning, or in research, where it makes it easy to implement new ideas such as improved optimizers.
Eager Execution also helps with debugging and monitoring running code: you can use the Python debugger to inspect objects such as variables, layers, and gradients, and use ordinary Python statements such as "if", "for", and "print()" inside the training loop.
Once the code runs correctly, you will want the optimization and efficiency of graph execution. To achieve this, you can wrap "train" with the decorator "@tf.function". Thanks to AutoGraph, "if" and "for" clauses inside a tf.function are converted automatically to run efficiently as graph code, with no special handling required.
@tf.function
def train(model, optimizer):
  train_ds = mnist_dataset()
  step = 0
  loss = 0
  accuracy = 0
  for x, y in train_ds:
    ...  # as described above, including "if" and "print()"
  return step
This code behaves the same without the annotation, but with it, it is compiled into a graph that can run efficiently on a GPU or TPU, or be saved as a "SavedModel" for later use.
What is particularly interesting about this code is that by wrapping "train()" in "@tf.function", "train_one_step()", "compute_loss()" and "compute_accuracy()" are also converted automatically. Alternatively, you can wrap only part of the computation in @tf.function to obtain the desired behavior.
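As a minimal sketch of that last point (the function and values here are illustrative), you might compile only an inner step into a graph and keep the outer loop eager for easy debugging:

import tensorflow as tf

@tf.function
def add_and_square(x, y):  # only this inner step is compiled into a graph
    return tf.square(x + y)

total = tf.constant(0)
for i in range(3):  # the outer loop stays in eager mode
    total += add_and_square(tf.constant(i), tf.constant(1))
print(total)  # tf.Tensor(14, shape=(), dtype=int32)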
In addition, TensorFlow 2.0 fully supports Estimator.
tensorflow.org/alpha link
https://www.tensorflow.org/alpha
High-level API changes
TensorFlow 2.0 puts a lot of effort into its APIs. In this release, the high-level APIs are both "easier to use" and "easier to extend":
For example, tf.keras.optimizers, tf.keras.layers, tf.keras.losses, and a number of other high-level APIs have been optimized for "ease of use".
It is worth noting that high-level neural-network APIs such as the RNN layers have been optimized, and users can also customize them, as sketched below.
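For instance, here is a minimal sketch of that kind of customization, following the standard Keras custom-cell pattern (the cell logic and sizes are illustrative): a hand-written cell plugged into the unified tf.keras.layers.RNN wrapper.

import tensorflow as tf

class MinimalRNNCell(tf.keras.layers.Layer):
    def __init__(self, units, **kwargs):
        super(MinimalRNNCell, self).__init__(**kwargs)
        self.units = units
        self.state_size = units  # tells the RNN wrapper the state shape

    def build(self, input_shape):
        self.kernel = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer='uniform', name='kernel')
        self.recurrent_kernel = self.add_weight(
            shape=(self.units, self.units),
            initializer='uniform', name='recurrent_kernel')

    def call(self, inputs, states):
        prev_output = states[0]
        output = (tf.matmul(inputs, self.kernel)
                  + tf.matmul(prev_output, self.recurrent_kernel))
        return output, [output]

# Wrap the custom cell in the generic RNN layer.
layer = tf.keras.layers.RNN(MinimalRNNCell(32))
y = layer(tf.zeros([1, 10, 8]))  # (batch, time, features) -> shape (1, 32)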
Good news for developers.
Upgrade
TensorFlow 2.0 will include many API changes, such as reordering parameters, renaming symbols, and changing the default values of parameters. Performing all of these changes manually is tedious and error-prone.
To simplify the change process and make the transition to TensorFlow 2.0 as smooth as possible, the TensorFlow engineering team created a utility, tf_upgrade_v2, that converts old code to a new API.
tf_upgrade_v2 link
https://github.com/tensorflow/docs/blob/master/site/en/r2/guide/upgrade.md
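Per that guide, usage looks roughly like this (the file and directory names below are illustrative):

# Upgrade a single file
tf_upgrade_v2 --infile my_model.py --outfile my_model_v2.py

# Upgrade a whole project tree and write a report of every change made
tf_upgrade_v2 --intree my_project/ --outtree my_project_v2/ --reportfile report.txt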
TensorFlow.js v1.0
TensorFlow for JavaScript has seen 300,000 downloads and 100 contributors. TensorFlow.js 1.0, released today, includes performance improvements.
For example, MobileNet V1 inference in the browser is 9 times faster. There are also new off-the-shelf models and broader platform support for web developers.
TensorFlow Lite is a lightweight, cross-platform solution for mobile and embedded devices.
Such a lightweight solution is necessary because machine learning is increasingly moving to end devices such as phones, cars, and wearables. ML on these devices faces many constraints: limited compute, limited memory, battery life, and so on. TensorFlow Lite can largely address these constraints.
Use case for TensorFlow Lite
An amazing fact: TensorFlow Lite is deployed on 2 billion mobile devices.
TensorFlow Lite has many customers in China and around the world…
Lin Huijie, technical director of NetEase Youdao, was invited on stage as a representative speaker for "Why Do You Choose TensorFlow Lite" to present what Youdao's translation app has achieved with TensorFlow Lite.
TensorFlow Lite has four main themes:
- Usability: ready to use out of the box
- Performance: faster model execution
- Optimization: make your models smaller and faster
- Documentation: plenty of resources
Usability: easy to deploy and ready to use
Save the model, convert it to TF Lite, and that's it.
The new TensorFlow Select feature makes it easier to convert models to TensorFlow Lite.
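A minimal sketch of that conversion, assuming a model already saved in the SavedModel format (the paths are illustrative):

import tensorflow as tf

# Convert a saved model to the TF Lite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)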
Performance: Make the model execute as fast as possible with the available hardware
How fast? MobileNet V1 inference speed on CPU, GPU, and Edge TPU increased 1.9x, 7.7x, and 62x, respectively!
Optimization: Make the model smaller and faster
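One such optimization is post-training quantization; here is a minimal sketch extending the conversion above (the flag shown is TF Lite's default optimization option; paths are illustrative):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
# Quantize weights during conversion to shrink and speed up the model.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()

with open('model_quant.tflite', 'wb') as f:
    f.write(tflite_quant_model)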
Optimization performance:
Documentation:
One More Thing:
Pete Warden, a TensorFlow Lite engineer, comes on stage and introduces a really cool gadget:
It’s a small development board called Coral, a piece of hardware for building and testing AI devices.
It is similar in spirit to the Raspberry Pi, but uses a custom Google processor designed for AI. That's right: on this little board, you can run TensorFlow.
Warden shows a small demo:
Say a specific word and the yellow lights on a Coral board will light up.
The model is only 20KB in size and runs with less than 100KB of RAM and 80KB of Flash.
“Coral offers a complete native AI toolkit that can easily take your idea from prototype to product,” Google said.
Like the Raspberry Pi, expect more interesting things to come out of Coral.
$149.99. Link:
https://coral.withgoogle.com/products/