We will cover the following tasks in 53 minutes:
Normally, when you use TensorFlow to create and train machine learning models, you first need to build the computational graphs required for training and then run those graphs later to actually perform the computations. This approach is neither easy nor intuitive to use. In a production setting this may not be a big problem, but if you are just researching and experimenting with potential models, the traditional approach can slow things down. This is where eager execution comes in.
TensorFlow’s eager execution facilitates an imperative programming environment that allows the programmer to evaluate operations immediately, instead of first creating computational graphs to run later. Let’s see how to enable this mode and also check if we are working with eager execution.
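As a minimal sketch, assuming TensorFlow 2.x (where eager execution is on by default), checking the mode looks like this:

```python
import tensorflow as tf

# In TensorFlow 2.x, eager execution is enabled by default.
print(tf.executing_eagerly())  # True

# Operations run immediately and return concrete values,
# with no graph-building or session step.
x = tf.constant([[2.0, 3.0]])
y = x * 2
print(y.numpy())  # [[4. 6.]]
```

In TensorFlow 1.x the mode had to be switched on explicitly with `tf.enable_eager_execution()` at program startup.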
Tensors are multi-dimensional arrays: each tensor object has a data type and a shape. TensorFlow offers a comprehensive library of tensor operations such as addition, matrix multiplication, and so on, and these operations automatically convert native Python types into tensors. Let’s look at a few examples in this chapter.
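A few representative operations, sketched with TensorFlow 2.x:

```python
import tensorflow as tf

# Native Python values are converted to tensors automatically.
a = tf.add(1, 2)                    # scalar addition
b = tf.constant([[1, 2], [3, 4]])
c = tf.matmul(b, b)                 # matrix multiplication

print(a.numpy())                    # 3
print(b.dtype, b.shape)             # int32, (2, 2)
print(c.numpy())                    # [[ 7 10] [15 22]]
```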
Because a tensor’s data type and values are fixed when it is created, tensors are immutable. NumPy arrays, by contrast, are mutable, but the two can be easily converted back and forth. Of course, when we convert a tensor to a NumPy array, we are only converting the resulting value of the tensor.
By default, NumPy arrays are automatically converted to tensors when passed to TensorFlow operations, and tensors are automatically converted to NumPy arrays when passed to NumPy operations. Tensors can also be explicitly converted to NumPy arrays by calling the numpy() method. Let’s look at some of these examples in this chapter.
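Both conversion directions can be sketched as follows, assuming TensorFlow 2.x:

```python
import numpy as np
import tensorflow as tf

arr = np.ones((2, 2))

# A NumPy array is converted to a tensor automatically...
t = tf.multiply(arr, 3)

# ...and a tensor is converted back when handed to a NumPy operation.
back = np.add(t, 1)

# Explicit conversion: copies the tensor's resulting value into an array.
explicit = t.numpy()
print(type(explicit))  # <class 'numpy.ndarray'>
```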
TensorFlow operations can run on either the GPU or the CPU, and TensorFlow automatically decides which device to use for each operation. Tensors produced by an operation are typically backed by the memory of the device on which that operation was executed.
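Device placement can be inspected (and, if needed, pinned) like this; a minimal sketch assuming TensorFlow 2.x:

```python
import tensorflow as tf

x = tf.random.uniform([3, 3])

# Each tensor reports the device whose memory backs it.
print(x.device)                               # e.g. ...CPU:0 or ...GPU:0
print(tf.config.list_physical_devices('GPU')) # [] if no GPU is available

# Placement can also be forced explicitly with a device context.
with tf.device('CPU:0'):
    y = tf.matmul(x, x)
print(y.device)
```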
Dataset from Tensors
We can use TensorFlow’s Dataset API to build pipelines that feed data to our models’ training and evaluation loops. In eager execution mode, we don’t need to construct a TensorFlow iterator; instead, we can simply use Python’s iteration over the Dataset objects. There are also several methods for creating dataset objects: you can create them from tensors or by reading text or CSV files.
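The tensor-based case can be sketched as follows, assuming TensorFlow 2.x:

```python
import tensorflow as tf

# Create a dataset directly from tensors...
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])

# ...apply pipeline transformations...
ds = ds.map(lambda v: v * 2).batch(2)

# ...and iterate with a plain Python loop (no TensorFlow iterator needed).
for batch in ds:
    print(batch.numpy())   # [2 4] then [6 8]
```

`tf.data.TextLineDataset` and `tf.data.experimental.make_csv_dataset` cover the text- and CSV-file cases mentioned above.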
Automatic differentiation is a way to programmatically compute the derivative of a function by repeatedly applying the chain rule to simpler operations. TensorFlow records all operations executed inside a gradient tape; to compute the gradient of a recorded value, the gradients of all operations leading up to that value are computed using automatic differentiation.
Python’s control flow is handled naturally by the gradient tapes. Let’s take a look at it in this chapter.
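A minimal sketch, assuming TensorFlow 2.x, that records a computation containing ordinary Python control flow and then differentiates it:

```python
import tensorflow as tf

x = tf.constant(3.0)

with tf.GradientTape() as tape:
    tape.watch(x)                  # constants must be watched explicitly
    # Ordinary Python control flow works inside the tape in eager mode.
    y = x * x if x > 0 else -x

grad = tape.gradient(y, x)         # dy/dx = 2x = 6.0 at x = 3
print(grad.numpy())
```

Trainable `tf.Variable`s are watched automatically, so `tape.watch` is only needed for plain tensors.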
Higher Order Gradients
With gradient tapes, higher-order gradients are actually quite easy to compute. If gradients are computed within a gradient tape context, those gradient computations are themselves recorded, so higher-order gradients can be computed. Let’s take a look!
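The nesting pattern can be sketched as follows, assuming TensorFlow 2.x:

```python
import tensorflow as tf

x = tf.Variable(2.0)

# The inner tape's gradient computation happens inside the outer tape's
# context, so the outer tape records it and can differentiate it again.
with tf.GradientTape() as outer:
    with tf.GradientTape() as inner:
        y = x ** 3
    dy_dx = inner.gradient(y, x)      # first derivative: 3x^2 = 12 at x = 2

d2y_dx2 = outer.gradient(dy_dx, x)    # second derivative: 6x = 12 at x = 2
print(dy_dx.numpy(), d2y_dx2.numpy())
```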
About the Host (Amit Yadav)
I am a machine learning engineer with a focus on computer vision and sequence modelling for automated signal processing using deep learning techniques. My previous experience includes leading chatbot development for a large corporation.