# TensorFlow (Intermediate): Linear Regression

In this course, we are going to solve a linear regression problem with the help of TensorFlow, using its eager execution mode. Let's say we have an equation like this: `y = Wx + b`. And let's say we have a lot of values for `y` and `x`. Given that, can we find the values for `W` and `b`? This is a basic machine learning problem, and it is pretty much the fundamental concept of how we make machines learn. Essentially, we make machines find the optimal values for parameters (like `W` and `b` in this case), given an assumed model and a bunch of data. So, we try to find the values for the parameters which best fit the data that we have.
## Task List

We will cover the following tasks in 36 minutes:

### Introduction

Normally, when you use TensorFlow to create and train machine learning models, you first need to build the computational graphs required for your model training and then run those graphs later to actually perform the computations. However, this approach is not very easy or intuitive to use. In a production setting this may not be a big problem, but if you're just researching and experimenting with potential models, then this traditional approach can slow things down.

This is where eager execution comes in. TensorFlow’s eager execution facilitates an imperative programming environment that allows the programmer to evaluate operations immediately, instead of first creating computational graphs to run later.
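As a minimal sketch of what this means in practice (assuming TensorFlow 2.x, where eager execution is the default), operations return concrete values the moment they run, with no session or graph-building step:

```python
import tensorflow as tf

# In eager mode, operations are evaluated immediately and return
# concrete values -- no graph construction or session.run() needed.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.square(x)                # computed right away

print(y.numpy())                # the result is immediately available as a NumPy array
print(tf.executing_eagerly())   # True by default in TensorFlow 2.x
```

Because values are concrete at every step, you can inspect intermediate results with ordinary `print` calls and debuggers while experimenting.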

### Gradient Descent

A good model will have values for `W` and `b` such that the cost on the entire data set is minimized. A good indication that one single prediction is a good prediction is that the squared difference between the prediction and the actual value of `y` for the same `x` is low. If this value is high, then the prediction is obviously not good.

An indication of a good model is that these **loss** values, put together for **all** values of `x`, are low. This total is called the cost of the model. But how do we actually find parameter values that make this cost low?
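The "squared differences put together" idea is commonly implemented as the mean squared error. A small sketch in plain NumPy (the function name `cost` and the toy data are illustrative assumptions, not code from the course):

```python
import numpy as np

def cost(W, b, xs, ys):
    """Mean squared error of the linear model y = W*x + b
    over all data points -- the cost J of the model."""
    predictions = W * xs + b
    return np.mean((predictions - ys) ** 2)

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs + 1.0              # data generated with W=2, b=1

print(cost(2.0, 1.0, xs, ys))    # exact parameters -> cost is 0.0
print(cost(0.0, 0.0, xs, ys))    # poor parameters -> cost is large
```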

We can use an algorithm called gradient descent, which starts off with some arbitrary values for `W` and `b`. We then calculate the gradients of our cost function with respect to these values and take a small step against these gradients (downhill on the cost surface) to, hopefully, move closer to a minimum.
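The loop described above can be sketched in NumPy with the analytic gradients of the mean squared error (the helper name `gradient_descent_step`, the learning rate, and the toy data are assumptions for illustration):

```python
import numpy as np

def gradient_descent_step(W, b, xs, ys, lr=0.1):
    # Analytic gradients of the mean squared error with respect to W and b.
    errors = W * xs + b - ys
    dW = np.mean(2.0 * errors * xs)
    db = np.mean(2.0 * errors)
    # Take a small step against the gradient (downhill on the cost surface).
    return W - lr * dW, b - lr * db

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs + 1.0            # data generated with W=2, b=1

W, b = 0.0, 0.0                # arbitrary starting values
for _ in range(1000):
    W, b = gradient_descent_step(W, b, xs, ys)
print(W, b)                    # should approach 2 and 1
```

Each step moves the parameters a little way downhill; repeated many times, they settle near the values that generated the data.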

### Linear Regression Model

Based on our understanding so far, we will need to define four functions in our linear regression model: an initializer where we set the initial values for the model's parameters, a cost function, a predict function, and a train function.

The predict function will take inputs and predict outputs based on the latest values of the parameters `W` and `b`. The cost function will take these predictions and the actual outputs and calculate the cost **J** from them.
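One way the four functions might fit together, assuming TensorFlow 2.x eager mode with `tf.GradientTape` for automatic differentiation (the class name, learning rate, and toy data are illustrative assumptions, not the course's exact code):

```python
import tensorflow as tf

class LinearRegressionModel:
    def __init__(self):
        # Initializer: arbitrary starting values for the parameters.
        self.W = tf.Variable(0.0)
        self.b = tf.Variable(0.0)

    def predict(self, xs):
        # Predictions using the latest values of W and b.
        return self.W * xs + self.b

    def cost(self, predictions, ys):
        # Cost J: mean squared error between predictions and actual outputs.
        return tf.reduce_mean(tf.square(predictions - ys))

    def train(self, xs, ys, lr=0.1):
        # One gradient descent step, with gradients computed automatically
        # in eager mode via GradientTape.
        with tf.GradientTape() as tape:
            J = self.cost(self.predict(xs), ys)
        dW, db = tape.gradient(J, [self.W, self.b])
        self.W.assign_sub(lr * dW)
        self.b.assign_sub(lr * db)
        return J

xs = tf.constant([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs + 1.0            # data generated with W=2, b=1
model = LinearRegressionModel()
for _ in range(1000):
    model.train(xs, ys)
print(model.W.numpy(), model.b.numpy())  # should approach 2 and 1
```

`GradientTape` records the operations inside the `with` block, so the gradients of **J** with respect to `W` and `b` fall out without any manual calculus.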

## About the Host (Amit)

I have been writing code since 1993, when I was 11, and my first passion project was a database management software package that I wrote for a local hospital. More recently, I wrote an award-winning education chatbot for a multi-billion-revenue company. I solved a recurrent problem for my client, who wanted to make basic cyber safety and privacy education accessible to their users. This bot enabled my client to reach their customers with personalised, real-time education. Over the last year, I've continued my interest in this field by constantly learning and growing in Machine Learning, NLP, and Deep Learning. I'm very excited to share my variety of experience and learnings with you with the help of Rhyme.com.