By Wolff Dobson and Josh Gordon

TensorFlow 2.0 focuses on ease of use, providing APIs that let beginners and experts alike create machine learning models. In recent articles such as "What's coming in TensorFlow 2.0" and "Standardizing on Keras," we've covered new features and the direction of the platform.

We announced TensorFlow 2.0 Alpha at the TensorFlow Developer Summit, and users can now get a head start.

Note: TensorFlow Dev Summit link

https://www.tensorflow.org/dev-summit

Getting started guide

The best way to get started quickly with the TensorFlow 2.0 Alpha is to visit TensorFlow's new website. You can find the Alpha tutorials and guides at tensorflow.org/alpha. Each tutorial in the Alpha docs automatically downloads and installs TensorFlow 2.0 Alpha, and more are on the way!

Note: tensorflow.org/alpha link

https://www.tensorflow.org/alpha

We recommend that you check out the “Hello World” example for beginners and veterans, and then read guides such as Effective TensorFlow 2.0.

  • The beginner example uses the Keras Sequential API, which is the easiest way to get started with TensorFlow 2.0.

  • The advanced example shows how to write a forward pass imperatively, how to write a custom training loop with GradientTape, and how to compile code into a graph automatically with tf.function (just one line!)
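As a small taste of the imperative style used in the advanced example, here is a minimal GradientTape sketch; the function and values here are our own illustration, not taken from the tutorial:

```python
import tensorflow as tf

# Record operations on x so we can differentiate y = x^2 with respect to x.
x = tf.constant(3.0)
with tf.GradientTape() as tape:
    tape.watch(x)           # constants are not watched automatically
    y = x * x
grad = tape.gradient(y, x)  # dy/dx = 2x = 6.0
print(grad)
```

The tape records every op run inside the "with" block, and gradient() walks the recorded graph backwards, which is exactly what the custom training loop later in this post relies on.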

Note: Beginner sample links

https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/quickstart/beginner.ipynb

Advanced sample link

https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/quickstart/advanced.ipynb

We also provide a variety of new guides, including:

  • The important AutoGraph guide (lets you get the full performance and portability of graphs without writing graph-level code)

  • The code upgrade guide (easily convert TensorFlow 1.x code to 2.0 with the conversion script)

  • Other initial guides for Keras

Note: Guide links

https://github.com/tensorflow/docs/tree/master/site/en/r2/guide

AutoGraph link

https://github.com/tensorflow/docs/blob/master/site/en/r2/guide/autograph.ipynb

Keras link

https://github.com/tensorflow/docs/tree/master/site/en/r2/guide/keras

If you want to see what has changed, you can also refer to the revised API reference (the number of symbols has been greatly reduced). Note that while we are actively developing TensorFlow 2.0, the landing page of tensorflow.org still defaults to the 1.x documentation. If you plan to explore the API reference, be sure to select the appropriate TensorFlow version.

Note: API reference links

https://www.tensorflow.org/versions/r2.0/api_docs/python/tf

Installation

To install the Alpha, we recommend creating a new virtual environment and running "pip install --upgrade --pre tensorflow" or, for GPU support, "pip install --upgrade --pre tensorflow-gpu" (requires CUDA 10). We will be updating this release frequently with new features. You can also use TensorFlow in a Colab notebook by prefixing the command with "!": "!pip install --upgrade --pre tensorflow". (All of the tutorials and guides above automatically install the latest version.)

Note: Colab links

https://colab.research.google.com/notebooks/welcome.ipynb#recent=true

Functions, not sessions

Here's a closer look at how two major features of 2.0 work together: eager execution and "@tf.function".

One of the most obvious changes is that TensorFlow is "eager first," which means ops run immediately when they are called. In TensorFlow 1.x, you might first build a graph and then execute parts of it through "tf.Session.run()". TensorFlow 2.0 radically simplifies using TensorFlow: the same great ops are now easier to understand and use.

a = tf.constant([1, 2])
b = tf.constant([3, 4])
print(a + b)
# returns: tf.Tensor([4 6], shape=(2,), dtype=int32)

TensorFlow 2.0 uses Keras as the central developer experience. In 2.0 you can use Keras as you always have: build your model with the Sequential API, then use "compile" and "fit". All of the familiar "tf.keras" examples from tensorflow.org work "out of the box" in 2.0.
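For instance, a minimal Sequential model built and compiled this way might look as follows; the layer sizes and dataset names here are illustrative, not taken from a specific tutorial:

```python
import tensorflow as tf

# A small image classifier built with the Keras Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # flatten 28x28 images
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),  # 10-class output
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=5)  # then train as usual
```

With "compile" and "fit", Keras handles the training loop, metrics, and batching for you; the custom loop below is for when you want that control yourself.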

Keras’s “fit()” applies to many situations, but developers who need more flexibility now have more options. Let’s take a look at this example of a custom training loop written in TensorFlow 2.0 style:

def train_one_step(model, optimizer, x, y):
  with tf.GradientTape() as tape:
    logits = model(x)
    loss = compute_loss(y, logits)
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))
  compute_accuracy(y, logits)
  return loss

def train(model, optimizer):
  train_ds = mnist_dataset()
  step = 0
  loss = 0.0
  for x, y in train_ds:
    step += 1
    loss = train_one_step(model, optimizer, x, y)
    if tf.equal(step % 10, 0):
      tf.print('Step', step, ': loss', loss,
               '; accuracy', compute_accuracy.result())
  return step, loss, compute_accuracy.result()

Note: This sample link

https://github.com/tensorflow/docs/blob/master/site/en/r2/guide/autograph.ipynb

This example uses a GradientTape in the style of Autograd and applies the gradients manually via the optimizer. This is especially helpful when writing custom training loops with complex inner workings, such as in reinforcement learning, or when doing research where a new idea for improving optimizers is easy to try out.

Eager execution also helps you debug and monitor running code: you can use Python debuggers to inspect objects such as variables, layers, and gradients. In the training loop, we use Python statements like "if", "for", and "print()".

Once the code is running properly, you'll want the optimization and efficiency of graphs. To do that, you can wrap "train" with the decorator "@tf.function". AutoGraph is built into tf.function, so your "if" and "for" clauses run efficiently in a graph without any special changes.

@tf.function
def train(model, optimizer):
  train_ds = mnist_dataset()
  step = 0
  loss = 0
  accuracy = 0
  for x, y in train_ds:
    # as above, including the "if" and "print()"
  return step

This code behaves exactly the same with the annotation added, but it is compiled into a graph that can run easily on GPUs or TPUs, or be saved to a "SavedModel" for later use.
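To sketch the SavedModel step, here is a minimal, self-contained example; the module, method name, and path are our own illustration, not from the announcement:

```python
import os
import tempfile
import tensorflow as tf

class AddOne(tf.Module):
    # A fixed input signature lets this tf.function be serialized into the SavedModel.
    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def add_one(self, x):
        return x + 1.0

module = AddOne()
path = os.path.join(tempfile.mkdtemp(), 'add_one')
tf.saved_model.save(module, path)       # write the compiled graph to disk
restored = tf.saved_model.load(path)
print(restored.add_one(tf.constant(2.0)))  # runs from the saved graph
```

The restored object executes the saved graph directly; no Python source for AddOne is needed at load time.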

What is particularly interesting about this pair of functions is that by wrapping "train()" in "@tf.function", "train_one_step()", "compute_loss()", and "compute_accuracy()" are converted automatically as well. You can also choose to wrap only some operations in "@tf.function" to get the behavior you want.
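To see the conversion of Python control flow in action, here is a small standalone sketch; the function itself is our own example, not from the guide:

```python
import tensorflow as tf

@tf.function
def sum_even(n):
    # AutoGraph rewrites this Python "for"/"if" into graph ops
    # (a tf.while_loop and tf.cond) when the function is traced.
    total = tf.constant(0)
    for i in tf.range(n):
        if i % 2 == 0:
            total += i
    return total

print(sum_even(tf.constant(10)))  # 0 + 2 + 4 + 6 + 8 = 20
```

Because "n" is a tensor, the loop bound is part of the graph, so the same traced function handles any input without retracing.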

In addition, TensorFlow 2.0 fully supports Estimators. See the new tutorials on boosted trees and model understanding for details.

Note: Tutorial links

https://github.com/tensorflow/docs/tree/master/site/en/r2/tutorials/estimators

You are cordially invited to participate in the test and provide feedback!

We would appreciate your feedback after you try out the latest version and upgrade your models! Please join the testing@tensorflow user group and join us for the weekly TensorFlow 2.0 support sessions (Tuesdays 2:00 PM Pacific time / Wednesdays 6:00 AM Beijing time).

Note: Link to testing @tensorflow user group

https://groups.google.com/a/tensorflow.org/forum/?utm_medium=email&utm_source=footer#!forum/testing

TensorFlow 2.0 support site link

https://docs.google.com/document/d/1i9_Ey9rYtslS6fryZ5Wm0vWWbrpScW3oh9bTRNVQ87Q/edit#heading=h.w7a5riqlaj2b

You may find bugs, performance problems, and more; you are welcome to report them in the issue tracker under the 2.0 label. A minimal, complete example that exactly reproduces the error would be very helpful.

Note: 2.0 label link

https://github.com/tensorflow/tensorflow/issues?q=is%3Aissue+is%3Aopen+label%3A2.0

More features, stay tuned

To keep up with known issues and development work on TensorFlow 2.0, see the TensorFlow 2.0 project tracker on GitHub. We continue to develop and improve TensorFlow 2.0, and you should see regular updates to our nightly build packages. To be clear, this is a developer preview. We look forward to your feedback!

Note: TensorFlow 2.0 Project Tracker link

https://github.com/orgs/tensorflow/projects/4

In addition, if you have built something great with TensorFlow 2.0, such as a mobile application, a research project, or an art installation, please let us know; we would love to see your work. Tell us here.

Note: Link here

https://services.google.com/fb/forms/tensorflowcasestudy/

If you would like to share recently developed examples, consider submitting a PR to add them to the TensorFlow organization as part of tensorflow/examples/community (https://github.com/tensorflow/examples/tree/master/community).

Read more about AI:

  • Upgrade your code to TensorFlow 2.0

  • Symbolic and imperative APIs in TensorFlow 2.0

  • New features in TensorFlow 2.0