One of the major changes introduced by TensorFlow 2.0 is the adoption of the Keras API as the standard high-level API for TensorFlow. As I use Keras a lot in my own coding, I am pleased with this change. Original article: "Standardizing on Keras: Guidance on High-level APIs in TensorFlow 2.0", medium.com/tensorflow/… , slightly abridged.
Keras is a very popular high-level API for building and training deep learning models. It is used for rapid prototyping, cutting-edge research, and production. While TensorFlow has supported Keras for some time, in 2.0 Keras is integrated more closely into the TensorFlow platform.
Using Keras as a high-level API for TensorFlow makes it easier for new machine learning developers to start using TensorFlow. A single high-level API reduces clutter and allows us to focus on providing advanced functionality to researchers.
We hope you enjoy using it as much as we do!
Keras has several key advantages:
- User friendly: Keras has a simple, consistent interface optimized for common use cases. It provides clear, actionable feedback on user errors, with easy-to-understand error messages that often come with useful suggestions.
- Modular and composable: Keras models are made by connecting configurable building blocks together, with few restrictions. Keras components can be reused piecemeal, without having to adopt the whole framework or even understand everything it provides. For example, you can use Keras layers or optimizers without using a Keras Model or its training loops.
- Easy to extend: You can write custom building blocks to express new ideas for research, including new layers, metrics, and loss functions, and develop state-of-the-art models.
- For beginners and experts: Deep learning developers come from different backgrounds and experience levels, and Keras offers a useful API, whether you’re just starting out or have years of experience.
Together, these properties support a wide range of scenarios, from learning ML, to research, to application development, to deployment, enabling easier and more efficient workflows.
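As a small illustration of the modular, "use only the parts you need" design described above (a minimal sketch, not from the original article), a Keras layer and a Keras optimizer can each be used entirely on their own, without any Model or training loop:

```python
import tensorflow as tf

# A Keras layer used on its own, with no Model and no training loop.
layer = tf.keras.layers.Dense(4, activation='relu')
outputs = layer(tf.ones((2, 8)))  # weights are created on this first call
print(outputs.shape)  # (2, 4)

# A Keras optimizer used directly with a GradientTape.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
var = tf.Variable(2.0)
with tf.GradientTape() as tape:
    loss = var ** 2  # d(loss)/d(var) = 2 * var = 4.0
grads = tape.gradient(loss, [var])
opt.apply_gradients(zip(grads, [var]))
print(var.numpy())  # 2.0 - 0.1 * 4.0 = 1.6
```

Neither piece knows anything about the other; that independence is what "composable" means in practice here.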
First, we’ll answer a few common questions. Then we’ll take a closer look at what the version of Keras that ships with TensorFlow can do.
FAQ
I thought Keras was a separate library?
First, Keras is an API specification. The reference implementation of Keras is maintained as a separate open source project at www.keras.io. That project is independent of TensorFlow and has an active community of contributors and users. TensorFlow includes a full implementation of the Keras API (in the tf.keras module) with some TensorFlow-specific enhancements.
Is Keras just a wrapper around TensorFlow or some other library?
No, this is a common (but understandable) misconception. Keras is an API standard for defining and training machine learning models. Keras is not tied to a specific implementation: the Keras API has implementations for TensorFlow, MXNet, TypeScript, JavaScript, CNTK, Theano, PlaidML, Scala, CoreML, and other libraries.
What is the difference between the version of Keras built into TensorFlow and the version I can find on keras.io?
TensorFlow includes an implementation of the Keras API (in the tf.keras module) with several TensorFlow-specific enhancements, including support for intuitive debugging and rapid iteration via eager execution, support for the TensorFlow SavedModel model exchange format, and integrated support for distributed training, including training on TPUs.
Eager execution is especially useful with the tf.keras model subclassing API. Inspired by Chainer, this API lets you write your model’s forward pass imperatively. tf.keras is tightly integrated into the TensorFlow ecosystem and also includes support for:
- tf.data, which enables you to build high-performance input pipelines. If you prefer, you can train models on NumPy data held in memory, or use tf.data for scalability and performance.
- Distribution strategies, for distributing training across a wide variety of compute configurations, including GPUs and TPUs spread across many machines.
- Exporting models. Models created with the tf.keras API can be serialized to the TensorFlow SavedModel format and served with TensorFlow Serving or via other language bindings (Java, Go, Rust, C#, etc.).
- Exported models can be deployed on mobile and embedded devices with TensorFlow Lite, and can also be used with TensorFlow.js (note: you can also develop models directly in JavaScript using the same Keras API).
- Feature columns, for efficiently representing and classifying structured data.
- And more.
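To make the first point above concrete, here is a minimal sketch (the data is randomly generated purely for illustration) of feeding a tf.data pipeline straight into a Keras model:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in data; a real pipeline would read from files or TFRecords.
x = np.random.random((100, 32)).astype('float32')
y = np.random.randint(10, size=(100,)).astype('int64')

# Build an input pipeline with tf.data: shuffle and batch.
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(100).batch(16)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# model.fit accepts the Dataset directly, just as it accepts NumPy arrays.
history = model.fit(dataset, epochs=1, verbose=0)
```

Swapping NumPy arrays for a Dataset requires no other changes to the training code, which is what makes the scalability opt-in.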
How do I install tf.keras? Do I still need to install Keras via pip?
tf.keras is included in TensorFlow. You do not need to install Keras separately. For example, in a Colab notebook:
```python
!pip install tensorflow

import tensorflow as tf
Dense = tf.keras.layers.Dense
```
Then you are using tf.keras. If you’re not familiar with the imports, check out some recent tutorials for examples.
You mentioned that TensorFlow offers different styles of APIs for beginners and experts. What do they look like?
TensorFlow developers have a wide range of experience levels, from students learning ML for the first time to ML experts and researchers. One of TensorFlow’s strengths is that it provides multiple APIs to support different workflows and goals. Likewise, this is a main design goal of TensorFlow’s Keras integration: users can pick and choose the parts of Keras they need, rather than adopting the entire framework.
Sequential API
If you are new to ML, we recommend starting with the tf.keras Sequential API. It is intuitive, concise, and suitable for 95% of ML problems in practice. With this API, you can write your first neural network in about 10 lines of code.
The most common way to define a model is to build a graph of layers. The simplest type of model is a stack of layers. You can define such a model using the Sequential API, as follows:
```python
model = tf.keras.Sequential()
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
```

Such a model can then be compiled and trained in a few lines:

```python
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```
You can find more examples of using the Sequential API at tensorflow.org/tutorials in the “Learn and Use ML” section.
Click here for a tutorial that will take you through training your first neural network on a Fashion MNIST dataset using the Sequential API.
Functional API
Of course, a sequential model is a simple stack of layers and cannot represent arbitrary models. Using the Functional API, you can build more advanced models with complex topologies, including multi-input and multi-output models, models with shared layers, and models with residual connections.
When building a model using the Functional API, layers are callable (on tensors) and return tensors as output. You can then use these input and output tensors to define the model. Such as:
```python
inputs = tf.keras.Input(shape=(32,))

# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)

# Instantiate the model given inputs and outputs.
model = tf.keras.Model(inputs=inputs, outputs=predictions)
```
Such models can be compiled and trained using simple commands like the one above. You can learn more about the Functional API here.
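To make the multi-input case mentioned above concrete, here is a minimal sketch (the input shapes and layer sizes are invented for illustration) of a two-input Functional model:

```python
import tensorflow as tf

layers = tf.keras.layers

# Two inputs, say a 32-dim feature vector and an 8-dim metadata vector.
input_a = tf.keras.Input(shape=(32,))
input_b = tf.keras.Input(shape=(8,))

# Each input gets its own branch; the branches are then concatenated.
branch_a = layers.Dense(16, activation='relu')(input_a)
branch_b = layers.Dense(16, activation='relu')(input_b)
merged = layers.concatenate([branch_a, branch_b])
outputs = layers.Dense(1, activation='sigmoid')(merged)

model = tf.keras.Model(inputs=[input_a, input_b], outputs=outputs)
```

The Sequential API cannot express this topology, because the model is a graph with two roots rather than a single stack.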
Model Subclassing API
You can build fully customizable models with the Model Subclassing API, where you define your own forward pass imperatively in the body of a class method. For example:
```python
class MyModel(tf.keras.Model):

  def __init__(self):
    super(MyModel, self).__init__()
    # Define your layers here.
    self.dense_1 = layers.Dense(32, activation='relu')
    self.dense_2 = layers.Dense(num_classes, activation='sigmoid')

  def call(self, inputs):
    # Define your forward pass here,
    # using layers you previously defined in `__init__`.
    x = self.dense_1(inputs)
    return self.dense_2(x)
```
These models are more flexible, but can be harder to debug. You can compile and train all three types of models using the simple compile and fit commands shown earlier, or you can write your own custom training loops for full control.
For example:
```python
model = MyModel()

with tf.GradientTape() as tape:
  logits = model(images, training=True)
  loss_value = loss(logits, labels)

grads = tape.gradient(loss_value, model.variables)
optimizer.apply_gradients(zip(grads, model.variables))
```
For more examples of the Model Subclassing style, visit tensorflow.org/tutorials (see the Research and Experimentation section).
- Neural Machine Translation with Attention, implemented using the Model Subclassing API
- A GAN, implemented using the Model Subclassing API
What if my study doesn’t fit these styles?
If you find that tf.keras limits your application domain, you have several options. You can:

- Use tf.keras.layers independently of a Keras model definition, and write your own gradient and training code. Similarly, tf.keras.optimizers, tf.keras.initializers, tf.keras.losses, and tf.keras.metrics can each be used on their own.
- Ignore tf.keras entirely and use the low-level TensorFlow API, Python, and AutoGraph to achieve your goals.
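For the first option, here is a minimal sketch (the labels and probabilities are made up for illustration) of Keras losses and metrics used with no Model at all:

```python
import tensorflow as tf

# A Keras loss and a Keras metric used on their own, with no Model.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
metric = tf.keras.metrics.SparseCategoricalAccuracy()

labels = tf.constant([0, 1])
probs = tf.constant([[0.9, 0.1],   # predicted class 0, label 0: correct
                     [0.2, 0.8]])  # predicted class 1, label 1: correct

loss_value = loss_fn(labels, probs)
metric.update_state(labels, probs)
print(float(metric.result()))  # 1.0: both predictions are correct
```

These objects plug into a hand-written training loop exactly as they would into model.compile.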
It’s entirely up to you! Note that the non-object-oriented layers in tf.layers will be deprecated, and tf.contrib.* (including high-level APIs such as tf.contrib.slim and tf.contrib.learn) will not be available in TF 2.0.
What happens to Estimators?
Estimators are widely used within Google and the broader TensorFlow community. Several models have been packaged as Premade Estimators, including linear classifiers, DNN classifiers, combined DNN-linear classifiers (a.k.a. Wide and Deep models), and Gradient Boosted Trees. These models are used in production and are widely deployed, and for all these reasons the Estimator API (including Premade Estimators) will be included in TensorFlow 2.0.
For users of Premade Estimators, the new focus on Keras and eager execution will have little impact. We may change the implementation of Premade Estimators while keeping the API the same. We will also work on adding versions of Premade Estimators implemented with Keras, and we will extend Keras to better meet large-scale production requirements.
That said, if you are developing custom architectures, we recommend using tf.keras to build models instead of Estimator. If you are working with infrastructure that requires Estimators, you can use model_to_estimator() to convert your model while keeping the benefits of Keras within the TensorFlow ecosystem.
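A minimal sketch of that conversion (the model here is invented for illustration; note this API depends on the Estimator support shipped with TF 1.x/2.x-era releases):

```python
import tensorflow as tf

# A small compiled Keras model; shapes and sizes are arbitrary.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Convert the compiled Keras model into an Estimator, for use with
# infrastructure that expects the Estimator interface.
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)
```

The resulting object exposes the usual Estimator methods (train, evaluate, predict), so it can slot into existing Estimator-based pipelines.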
You can also read:
- Effective TensorFlow 2.0: Best Practices and What’s Changed
- Early adopters TensorFlow 2.0