We will cover the following tasks in 49 minutes:
We will understand the Rhyme interface and our learning environment. You will get a virtual machine; Jupyter Notebook and TensorFlow are both required for this course, and both come pre-installed on the virtual machine. Jupyter Notebooks are very popular with Data Scientists and Machine Learning Engineers, as one can write code in some cells and use other cells for documentation.
Importing TensorFlow and the Dataset
We will import TensorFlow and its implementation of Keras, a high-level API for building and training deep learning models. We will also import NumPy, the fundamental package for scientific computing in Python. The training set has 404 examples and the test set has 102; each example has 13 features.
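The imports and dataset loading described above can be sketched as follows. The Boston Housing data ships with Keras, so no manual download step is needed:

```python
import tensorflow as tf
from tensorflow import keras
import numpy as np

# Keras splits the data into 404 training and 102 test examples by default
(train_features, train_labels), (test_features, test_labels) = \
    keras.datasets.boston_housing.load_data()

print(train_features.shape)  # (404, 13)
print(test_features.shape)   # (102, 13)
```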
We take a look at the features in our dataset, which was obtained from the StatLib archive and has been used extensively to benchmark different algorithms. We will use a Pandas DataFrame to tabulate our data nicely!
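A minimal sketch of tabulating the features with Pandas. The column names below follow the standard Boston Housing documentation; they are an assumption here, since Keras returns the data as a bare NumPy array:

```python
import pandas as pd
from tensorflow import keras

(train_features, _), _ = keras.datasets.boston_housing.load_data()

# Assumed column names from the standard Boston Housing description
column_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE',
                'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT']
df = pd.DataFrame(train_features, columns=column_names)
print(df.head())  # first five rows, nicely tabulated
```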
Features with very different scales aren’t very useful for machines to learn from. This may not be a problem in every situation, but standard practice is to bring the features to a similar scale using some form of normalisation. Since we want the learning algorithm to be *blind* to the test data, we will use only the training data to calculate the mean and standard deviation. Using that mean and standard deviation, we will normalise the features of both the training and test sets.
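The normalisation step can be sketched like this, with the statistics computed from the training set only:

```python
import numpy as np
from tensorflow import keras

(train_features, _), (test_features, _) = \
    keras.datasets.boston_housing.load_data()

# Statistics come from the training set only, so the algorithm
# stays "blind" to the test data
mean = train_features.mean(axis=0)
std = train_features.std(axis=0)

train_features = (train_features - mean) / std
test_features = (test_features - mean) / std  # same training-set stats
```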
Creating the Model
We will use Keras’ Sequential model with two densely connected hidden layers and an output layer that returns a single numeric value. We will wrap the model-building steps in a function called build_model, since we will create a second model with the same architecture later in the course. The model still needs to be compiled before it can be trained: we need to specify a loss function, an optimizer, and the set of metrics we want to evaluate the model with.
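A minimal sketch of such a build_model function. The hidden-layer width (64 units), the RMSprop optimizer, and the MSE/MAE choices are illustrative assumptions, not a transcript of the course code:

```python
from tensorflow import keras

def build_model():
    model = keras.Sequential([
        keras.Input(shape=(13,)),            # 13 features per example
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(1)                # single numeric output (price)
    ])
    # Compile with a loss, an optimizer, and evaluation metrics
    model.compile(loss='mse',
                  optimizer='rmsprop',
                  metrics=['mae'])
    return model

model = build_model()
```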
Training the Model
We will use the fit method to do the training. This method returns a History object that stores the relevant information from each iteration of training. Using the stats stored in the History object, we can visualise the model’s training process as a graph. Such a visualisation helps us determine how long to train before the model stops making significant progress. We will also use Keras’ EarlyStopping callback to stop training automatically.
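Training with an EarlyStopping callback can be sketched as below; the validation split, patience value, and epoch count are illustrative assumptions:

```python
from tensorflow import keras

(train_features, train_labels), _ = keras.datasets.boston_housing.load_data()
mean, std = train_features.mean(axis=0), train_features.std(axis=0)
train_features = (train_features - mean) / std

model = keras.Sequential([
    keras.Input(shape=(13,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1),
])
model.compile(loss='mse', optimizer='rmsprop', metrics=['mae'])

# Stop once the validation loss has not improved for 10 consecutive epochs
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)

history = model.fit(train_features, train_labels,
                    epochs=100, validation_split=0.2,
                    callbacks=[early_stop], verbose=0)

# history.history maps each tracked metric to a per-epoch list,
# e.g. history.history['loss'] and history.history['val_mae'],
# which is what we plot to visualise training progress
```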
We will use the predict method on our trained model to get predictions, then plot the predictions against the actual values. We will also plot an error histogram, which shows prediction-error values and the number of occurrences of each. Based on this, we conclude that extreme prediction errors are rare and the predictions are reasonably accurate most of the time.
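A sketch of the prediction and plotting steps, using matplotlib for the scatter plot and error histogram; the model architecture and epoch count are illustrative assumptions:

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen, e.g. in a headless environment
import matplotlib.pyplot as plt
from tensorflow import keras

(train_features, train_labels), (test_features, test_labels) = \
    keras.datasets.boston_housing.load_data()
mean, std = train_features.mean(axis=0), train_features.std(axis=0)
train_features = (train_features - mean) / std
test_features = (test_features - mean) / std

model = keras.Sequential([
    keras.Input(shape=(13,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1),
])
model.compile(loss='mse', optimizer='rmsprop', metrics=['mae'])
model.fit(train_features, train_labels, epochs=50, verbose=0)

predictions = model.predict(test_features).flatten()
errors = predictions - test_labels

# Predictions vs. actual values
plt.figure()
plt.scatter(test_labels, predictions)
plt.xlabel('True values [price]')
plt.ylabel('Predictions [price]')

# Histogram of prediction errors
plt.figure()
plt.hist(errors, bins=25)
plt.xlabel('Prediction error [price]')
plt.ylabel('Count')
```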
About the Host (Amit)
I have been writing code since 1993, when I was 11; my first passion project was a database management program that I wrote for a local hospital. More recently, I wrote an award-winning education chatbot for a multi-billion-revenue company. I solved a recurring problem for my client, who wanted to make basic cyber-safety and privacy education accessible to their users. This bot enabled my client to reach their customers with personalised, real-time education. Over the last year, I’ve continued my interest in this field by constantly learning and growing in Machine Learning, NLP and Deep Learning. I'm very excited to share my variety of experiences and learnings with you with the help of Rhyme.com.