# TensorFlow (Intermediate): Eager Execution & Automatic Differentiation

Welcome to Eager Execution! TensorFlow’s eager execution provides an imperative programming environment that allows the programmer to evaluate operations immediately, instead of first creating computational graphs to run later. In this course, we will not only get familiar with eager execution but also look at how automatic differentiation works in TensorFlow.

Rating: 5.0 / 5

We will cover the following tasks in 53 minutes:

### Introduction

Normally, when you use TensorFlow to create and train machine learning models, you first need to build the computational graphs required for your model and then run those graphs later to actually perform the computations. However, this approach is not very easy or intuitive to use. In a production setting this may not be a big problem, but if you’re just researching and experimenting with potential models, this traditional approach can slow things down. This is where eager execution comes in.

### Eager Execution

TensorFlow’s eager execution provides an imperative programming environment: operations are evaluated immediately, instead of building computational graphs to run later. Let’s see how to enable this mode and also check whether we are working with eager execution.
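As a minimal sketch of the check described above (assuming TensorFlow 2.x, where eager execution is already on by default), `tf.executing_eagerly()` reports whether eager mode is active:

```python
import tensorflow as tf

# In TensorFlow 2.x, eager execution is enabled by default.
# (In TensorFlow 1.x you would call tf.enable_eager_execution() first.)
print(tf.executing_eagerly())  # True

# Operations run immediately and return concrete values,
# with no session or graph-building step.
x = tf.constant([[2.0, 3.0]])
y = x * 2
print(y)  # tf.Tensor([[4. 6.]], shape=(1, 2), dtype=float32)
```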

### Tensors

Tensors are multi-dimensional arrays, each with a data type (such as `int32` or `float32`) and a shape describing the data it holds. TensorFlow offers a comprehensive library of tensor operations like addition, matrix multiplication, and so on, and these operations automatically convert native Python types into tensors. Let’s look at a few examples in this chapter.
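A short sketch of the ideas above (assuming TensorFlow 2.x): native Python values passed to tensor operations are converted automatically, and every tensor carries a `dtype` and a `shape`:

```python
import tensorflow as tf

# Native Python ints are converted to tensors automatically.
a = tf.add(1, 2)
print(a)        # tf.Tensor(3, shape=(), dtype=int32)

b = tf.constant([[1, 2], [3, 4]])
print(b.dtype)  # <dtype: 'int32'>
print(b.shape)  # (2, 2)

# Matrix multiplication, one of many built-in tensor operations.
c = tf.matmul(b, b)
print(c)        # [[ 7 10], [15 22]]
```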

### NumPy Compatibility

Because a tensor’s values and data type are fixed when it is created, tensors are immutable. NumPy arrays, by contrast, are mutable, but the two can easily be converted back and forth. Of course, when we convert a tensor to a NumPy array, we are only converting the tensor’s resulting value. TensorFlow operations automatically convert NumPy arrays to tensors, and NumPy operations automatically convert tensors to NumPy arrays. A tensor can also be explicitly converted to a NumPy array by calling its `numpy()` method. Let’s look at some of these examples in this chapter.
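The conversions described above can be sketched as follows (a minimal example, assuming TensorFlow 2.x and NumPy):

```python
import numpy as np
import tensorflow as tf

arr = np.ones((2, 2))

# A TensorFlow operation converts the NumPy array to a tensor.
t = tf.multiply(arr, 3)

# A NumPy operation converts the tensor back to an ndarray.
doubled = np.add(t, 1)

# Explicit conversion via the numpy() method.
print(t.numpy())    # [[3. 3.], [3. 3.]]
print(doubled)      # [[4. 4.], [4. 4.]]
```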

### Device Placement

TensorFlow operations can run on either the GPU or the CPU, and TensorFlow can automatically decide which device to use for each operation. Tensors produced by these operations are typically backed by the memory of the device the operation was executed on.

### Dataset from Tensors

We can use TensorFlow’s Dataset API to build pipelines that feed data to our models’ training and evaluation loops. In eager execution mode, we don’t need to construct a TensorFlow iterator; instead, we can simply iterate over `Dataset` objects with ordinary Python iteration. There are also several ways to create dataset objects: from tensors, or by reading text or CSV files.
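A minimal sketch of such a pipeline (assuming TensorFlow 2.x), built from an in-memory tensor and consumed with a plain Python `for` loop:

```python
import tensorflow as tf

# Build a dataset directly from in-memory values.
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])

# Chain transformations to build up the input pipeline.
ds = ds.map(lambda x: x * 2).batch(2)

# In eager mode, ordinary Python iteration works; no explicit
# TensorFlow iterator is needed.
for batch in ds:
    print(batch.numpy())
# [2 4]
# [6 8]
```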

### Automatic Differentiation

Automatic differentiation is a way to programmatically compute the derivative of a function. The process works by repeatedly applying the chain rule to simple, elementary operations. TensorFlow records all operations executed inside a gradient tape context; to compute the gradient of a recorded value, it traverses that record and applies automatic differentiation to every operation leading up to the required value.
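The recording-and-differentiating process above can be sketched with `tf.GradientTape` (assuming TensorFlow 2.x):

```python
import tensorflow as tf

x = tf.Variable(3.0)

# Operations executed inside the tape context are recorded.
with tf.GradientTape() as tape:
    y = x * x  # y = x^2

# The tape replays the recorded operations to compute dy/dx = 2x,
# which is 6 at x = 3.
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())  # 6.0
```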

### Control Flow

Python’s control flow is handled naturally by the gradient tapes. Let’s take a look at it in this chapter.
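As a small sketch of this point (assuming TensorFlow 2.x), ordinary Python loops and conditionals can appear inside the taped computation, and the gradient still comes out correctly; the function `f` below is a hypothetical example, not from the course:

```python
import tensorflow as tf

def f(x):
    # Plain Python control flow inside the taped computation:
    # for positive x, this computes x^3 via a loop and an if.
    result = tf.constant(1.0)
    for _ in range(3):
        if x > 0:
            result = result * x
    return result

x = tf.Variable(2.0)
with tf.GradientTape() as tape:
    y = f(x)

# dy/dx = 3x^2 = 12 at x = 2
grad = tape.gradient(y, x)
print(grad.numpy())  # 12.0
```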

With gradient tapes, higher-order gradients are actually quite easy to compute. If a gradient is itself computed within an outer gradient tape context, that computation is recorded as well, so differentiating it again yields higher-order gradients. Let’s take a look!
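The nested-tape idea can be sketched like this (assuming TensorFlow 2.x):

```python
import tensorflow as tf

x = tf.Variable(1.0)

with tf.GradientTape() as outer_tape:
    with tf.GradientTape() as inner_tape:
        y = x ** 3
    # First derivative, recorded by the outer tape: dy/dx = 3x^2
    dy_dx = inner_tape.gradient(y, x)

# Second derivative: d2y/dx2 = 6x
d2y_dx2 = outer_tape.gradient(dy_dx, x)

print(dy_dx.numpy())    # 3.0
print(d2y_dx2.numpy())  # 6.0
```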

## Watch Preview

Preview the instructions that you will follow along in a hands-on session in your browser.

I am a Software Engineer with many years of experience in writing commercial software. My current areas of interest include computer vision and sequence modelling for automated signal processing using deep learning as well as developing chatbots.

##### How is this different from YouTube, PluralSight, Udemy, etc.?
In Rhyme, all projects are completely hands-on. You don't just passively watch someone else. You use the software directly while following the host's (Amit) instructions. Using the software is the only way to achieve mastery. With the "Live Guide" option, you can ask for help and get an immediate response.
##### Can I buy Rhyme sessions for my company or learning institution?
Absolutely. We offer Rhyme for workgroups as well as larger departments and companies. Universities, academies, and bootcamps can also buy Rhyme for their settings. You can select sessions and trainings that are mission critical for you and, as well, author your own that reflect your own needs and tech environments. Please email us at help@rhyme.com.
##### I have a different question
Please email us at help@rhyme.com and we'll respond to you within one business day.

## More Projects by Amit

###### Computer Vision with TensorFlow: Deploy Your Model
1 hour and 1 minute
###### TensorFlow (Advanced): Neural Style Transfer
1 hour and 5 minutes
###### TensorFlow (Intermediate): Neural Networks with TensorFlow Core
1 hour and 12 minutes