Rating: 4.7 / 5
We will cover the following tasks in 52 minutes:
Introduction and Overview
You will be introduced to the Rhyme interface and the learning environment. You will be provided with a cloud desktop with Jupyter Notebooks and all the software you will need to complete the project. Jupyter Notebooks are very popular with data scientists and machine learning engineers, as one can write code in some cells and documentation in others.
We will also introduce the model we will be building, as well as the Advertising dataset for this project.
Load the Data
In this task, we will load the very popular Advertising dataset, which records the advertising spending across different media (TV, radio, and newspaper) and the resulting sales for a particular product. Next, we will briefly explore the data to get some basic information about what we are going to be working with.
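Loading and exploring the data might look like the sketch below. The actual project reads the dataset from a CSV file on the cloud desktop (the file name here is an assumption); to keep this sketch runnable anywhere, it builds a small stand-in DataFrame with the same columns and illustrative values.

```python
import pandas as pd

# In the project, the dataset would be read from a CSV file, e.g.:
#   df = pd.read_csv("Advertising.csv", index_col=0)   # file name assumed
# Stand-in frame with the Advertising columns (values illustrative only):
df = pd.DataFrame({
    "TV":        [230.1, 44.5, 17.2, 151.5, 180.8],
    "radio":     [37.8, 39.3, 45.9, 41.3, 10.8],
    "newspaper": [69.2, 45.1, 69.3, 58.5, 58.4],
    "sales":     [22.1, 10.4, 9.3, 18.5, 12.9],
})

# Basic exploration: dimensions, first rows, and summary statistics.
print(df.shape)
print(df.head())
print(df.describe())
```

`head()` and `describe()` give a quick sense of scale and spread before any modeling.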
Relationship between Features and Target
It is good practice to visualize the data before proceeding with analysis and model building. In this task, we will use seaborn to create scatter plots of each of the three features against the target. This will allow us to make qualitative observations about the linear or non-linear relationships between the features and the target.
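A minimal sketch of these scatter plots, again using stand-in data in place of the Advertising dataset: `sns.pairplot` with `x_vars` and `y_vars` draws one feature-vs-target panel per column.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; omit this inside a Jupyter Notebook
import pandas as pd
import seaborn as sns

# Stand-in data with the Advertising columns (values illustrative only).
df = pd.DataFrame({
    "TV":        [230.1, 44.5, 17.2, 151.5, 180.8],
    "radio":     [37.8, 39.3, 45.9, 41.3, 10.8],
    "newspaper": [69.2, 45.1, 69.3, 58.5, 58.4],
    "sales":     [22.1, 10.4, 9.3, 18.5, 12.9],
})

# One scatter plot per feature against the target, side by side.
grid = sns.pairplot(df, x_vars=["TV", "radio", "newspaper"],
                    y_vars="sales", height=3, kind="scatter")
grid.savefig("feature_vs_sales.png")
```

With the full dataset, the TV and radio panels typically show a visible upward trend while newspaper looks like noise, which foreshadows the feature-selection task below.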
Multiple Linear Regression Model
We will extend the simple linear regression model to include multiple features. Our approach will give each predictor a separate slope coefficient in a single model. This way, we can avoid the drawbacks of fitting a separate simple linear model to each predictor.
In this task, we use scikit-learn’s LinearRegression() estimator to calculate the multiple regression coefficient estimates when TV, radio, and newspaper advertising budgets are used to predict sales revenue. Lastly, we will compare and contrast the coefficient estimates from multiple regression with those from simple linear regression.
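The fitting step can be sketched as follows. To stay self-contained, this uses synthetic budgets standing in for the Advertising data, with a known generating equation (an assumption made for illustration) so the recovered coefficients can be checked against it.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for the Advertising data. True relationship (assumed):
#   sales = 3 + 0.045*TV + 0.19*radio + noise
# so the newspaper coefficient should come out near zero.
rng = np.random.default_rng(0)
n = 200
TV = rng.uniform(0, 300, n)
radio = rng.uniform(0, 50, n)
newspaper = rng.uniform(0, 100, n)
sales = 3.0 + 0.045 * TV + 0.19 * radio + rng.normal(0, 1.0, n)

X = np.column_stack([TV, radio, newspaper])
model = LinearRegression().fit(X, sales)

# One slope per predictor, estimated jointly in a single model.
for name, coef in zip(["TV", "radio", "newspaper"], model.coef_):
    print(f"{name:>9}: {coef:.4f}")
print(f"intercept: {model.intercept_:.4f}")
```

Note how the multiple-regression slope for each predictor is estimated while holding the other predictors fixed, which is why these coefficients can differ from the three simple-regression slopes.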
Feature Selection
Do all the predictors help to explain the target, or is only a subset of the predictors useful? We will address exactly this question in this task. We will use feature selection to determine which predictors are associated with the response, so as to fit a single model involving only those features.
We will use R², the most common numerical measure of model fit, and understand its limitations.
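One limitation of R² is easy to demonstrate: on the data a least-squares model was fit to, R² can never decrease when a predictor is added, even one that is pure noise. A small sketch with synthetic data (the setup is illustrative, not from the project):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 100
X = rng.normal(size=(n, 2))          # two genuinely informative predictors
y = 4 + 2 * X[:, 0] - X[:, 1] + rng.normal(0, 1, n)
noise_col = rng.normal(size=(n, 1))  # pure noise, unrelated to y

r2_base = LinearRegression().fit(X, y).score(X, y)
X_plus = np.hstack([X, noise_col])
r2_plus = LinearRegression().fit(X_plus, y).score(X_plus, y)

# Training R² can only go up (or stay equal) when a predictor is added,
# even a useless one — one reason R² alone can be misleading.
print(f"R² with 2 real predictors:  {r2_base:.4f}")
print(f"R² after adding pure noise: {r2_plus:.4f}")
```

This is why held-out metrics (next task) matter: on a test set, adding a useless predictor generally hurts rather than helps.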
Model Evaluation Using Train/Test Split and Model Metrics
Assessing model accuracy is very similar to that of simple linear regression. Our first step will be to split the data into a training set and a testing set using the train_test_split() helper function from sklearn.model_selection. Next, we will create two separate models, one of which uses all predictors, while the other excludes newspaper. We will fit each estimator to the training set and make predictions on the testing set. Model fit and the accuracy of the predictions will be evaluated using R² and RMSE.
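The evaluation workflow can be sketched like this, using synthetic stand-in data in which newspaper genuinely has no effect (an assumption for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Advertising data; newspaper has no effect.
rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.uniform(0, 300, n),   # TV
    rng.uniform(0, 50, n),    # radio
    rng.uniform(0, 100, n),   # newspaper
])
y = 3.0 + 0.045 * X[:, 0] + 0.19 * X[:, 1] + rng.normal(0, 1.5, n)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Model 1: all three predictors. Model 2: drop newspaper (column 2).
full = LinearRegression().fit(X_train, y_train)
reduced = LinearRegression().fit(X_train[:, :2], y_train)

for name, model, Xte in [("full", full, X_test),
                         ("no newspaper", reduced, X_test[:, :2])]:
    pred = model.predict(Xte)
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    print(f"{name:>13}: R² = {r2_score(y_test, pred):.3f}, RMSE = {rmse:.3f}")
```

Because the test set was never seen during fitting, these scores estimate how the models would behave on new advertising budgets, unlike the training R² discussed above.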
Visual assessment of our models will involve comparing the residual behaviors and the prediction errors using Yellowbrick. Yellowbrick is an open source, pure Python project that extends the scikit-learn API with visual analysis and diagnostic tools. It is commonly used inside of a Jupyter Notebook alongside pandas data frames.
Interaction Effect (Synergy) in Regression Analysis
From our previous analysis of the residuals, we concluded that we need to incorporate interaction terms due to the non-additive relationship between the features and the target. A simple way to extend our model to allow for interaction effects is to include a third feature formed by taking the product of two existing features. This feature will have its own slope coefficient, which can be interpreted as the increase in the effectiveness of radio advertising for a one-unit increase in TV advertising, or vice versa.
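The interaction model can be sketched as follows, on synthetic data with a genuine TV × radio synergy baked in (the generating equation is an assumption for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data with a true TV*radio interaction term of 0.001.
rng = np.random.default_rng(3)
n = 300
TV = rng.uniform(0, 300, n)
radio = rng.uniform(0, 50, n)
sales = (3 + 0.02 * TV + 0.03 * radio + 0.001 * TV * radio
         + rng.normal(0, 1, n))

# Additive model vs. model with the product (interaction) feature.
X_add = np.column_stack([TV, radio])
X_int = np.column_stack([TV, radio, TV * radio])

r2_add = LinearRegression().fit(X_add, sales).score(X_add, sales)
inter = LinearRegression().fit(X_int, sales)
r2_int = inter.score(X_int, sales)

print(f"R² additive:         {r2_add:.3f}")
print(f"R² with interaction: {r2_int:.3f}")
# The third coefficient estimates the synergy: the extra sales lift per
# unit of radio spend for each additional unit of TV spend.
print(f"interaction coef:    {inter.coef_[2]:.5f}")
```

The same product column could also be produced with scikit-learn's PolynomialFeatures using interaction_only=True; the hand-built column above keeps the coefficient's meaning explicit.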
My keys in the cloud desktop don't work in Chrome; I had to switch to Firefox. The cloud desktop is also too small to view.
The fact that the different functions and methods of the various libraries were shown in real time was essential. Otherwise it would barely be possible to figure out where one has to use a DOT, PARENTHESES, COLON, SQUARE BRACKETS, and what not, even if one understands the theoretical concepts such as linear regression or the R² score. Thus I couldn't follow the other machine learning and deep learning courses on Coursera (Michigan, Johns Hopkins, Andrew's courses), but this was doable and enabled me to learn more about the coding aspect.
About the Host (Snehan Kekre)
Snehan hosts Machine Learning and Data Sciences projects at Rhyme. He is in his senior year of university at the Minerva Schools at KGI, studying Computer Science and Artificial Intelligence. When not applying computational and quantitative methods to identify the structures shaping the world around him, he can sometimes be seen trekking in the mountains of Nepal.