Azure ML Studio: End-to-end Machine Learning Pipeline

In this project, we are going to build an end-to-end machine learning pipeline, all without writing a single line of code! The project uses the Adult Census Income dataset to train a model that predicts whether an individual's annual income is greater than or less than $50,000. The learner used in this project is a Two-Class Boosted Decision Tree, and the features used to train the model include Age, Education, and Occupation.

Once we have scored and evaluated the model on the test data, we will deploy the trained model as an Azure Machine Learning web service. We can then send new data to the web service API and receive the resulting predictions.


Task List


We will cover the following tasks in 58 minutes:


Introduction and Project Overview

In this task, we will create a new experiment from the Azure ML Studio dashboard. Next, we will import and take a look at the Adult Data Set before we move on to pre-processing.


Data Cleaning

Now that we have some idea of the properties of the data, we can start getting it ready for our model. First, we will clean the missing data by substituting all missing values with 0 using the Clean Missing Data module.
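Outside of ML Studio, the same substitution can be sketched in a few lines of pandas. This is not what the module runs internally, just an equivalent operation; the column names below are illustrative samples from the Adult dataset.

```python
import numpy as np
import pandas as pd

# Toy slice of the Adult dataset with some missing values.
df = pd.DataFrame({
    "age": [39, 50, np.nan],
    "hours-per-week": [40, np.nan, 35],
})

# Equivalent of Clean Missing Data with "Custom substitution value" = 0.
df_clean = df.fillna(0)
print(df_clean)
```

After the substitution, no missing values remain, at the cost of injecting a possibly misleading 0 into numeric columns, which is why inspecting the data first matters.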

Our next step is to use the Select Columns in Dataset module to exclude irrelevant and redundant columns from the data. We do this to reduce clutter during analysis.

Once we have our final features, we use the Edit Metadata module to convert the following columns from String types to Categorical Feature types: workclass, education, marital-status, occupation, race, sex, native-country.
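The column-selection and metadata steps above map naturally onto pandas as well. A minimal sketch, again with illustrative Adult columns (the `fnlwgt` column is a common example of one to drop):

```python
import pandas as pd

df = pd.DataFrame({
    "workclass": ["Private", "Self-emp-not-inc", "Private"],
    "education": ["Bachelors", "HS-grad", "Masters"],
    "fnlwgt": [77516, 83311, 215646],  # redundant for this model; drop it
})

# Select Columns in Dataset: keep only the features we need.
features = df[["workclass", "education"]].copy()

# Edit Metadata: treat string columns as categorical feature types.
for col in ["workclass", "education"]:
    features[col] = features[col].astype("category")

print(features.dtypes)
```

Marking columns as categorical tells downstream learners to treat the values as discrete levels rather than free-form strings.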


Accounting for Class Imbalance

Before we create our training and test sets, we have one last pre-processing step: dealing with class imbalance in our dataset. The number of people earning less than $50K/yr is more than twice the number earning more than $50K/yr. What we want to do is upsample the minority class. At this point, it’s very easy to fall into the trap of applying upsampling to your entire dataset. I strongly caution against doing that. The timing of upsampling can affect the generalization ability of a model. Since one of the primary goals of model validation is to estimate how the model will perform on unseen data, upsampling correctly is critical. The right way is to do it only on the training data.

By upsampling only the training data, none of the information in the validation data is used to create synthetic observations, so the results should generalize. Let’s see if that’s true.

What we’re going to do now is train two models: one on the upsampled data, and the other on just the original pre-processed data. We can then compare how both models perform and draw our conclusions about the efficacy of creating synthetic observations by upsampling the minority class.
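The split-first, upsample-second ordering is the crux of this task, so here is a minimal sketch of it using scikit-learn on synthetic data (the 2:1 imbalance mirrors the Adult dataset's income classes; everything else is made up for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

rng = np.random.default_rng(0)
# Imbalanced toy data: 2:1 majority (<=50K, label 0) to minority (>50K, label 1).
X = rng.normal(size=(300, 4))
y = np.array([0] * 200 + [1] * 100)

# Split FIRST, so the held-out set is untouched by resampling.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Upsample the minority class within the training set only.
majority = X_tr[y_tr == 0]
minority = X_tr[y_tr == 1]
minority_up = resample(minority, replace=True,
                       n_samples=len(majority), random_state=0)
X_bal = np.vstack([majority, minority_up])
y_bal = np.array([0] * len(majority) + [1] * len(minority_up))
```

Had we upsampled before splitting, copies of the same minority rows could land in both the training and test sets, inflating the evaluation scores.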


Training a Two-Class Boosted Decision Tree Model and Hyperparameter Tuning

For this experiment we will train a two-class boosted decision tree model to predict the Income class label. Hyperparameter tuning is done using the Tune Model Hyperparameters module. The parameter sweeps and training should take only about 5 minutes to complete.
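Outside the Studio, a rough analogue of pairing a boosted-tree learner with the Tune Model Hyperparameters sweep is a gradient-boosted classifier wrapped in a grid search. This is an approximation, not Azure's algorithm: the parameter names and grid below are scikit-learn's, chosen only to illustrate what a sweep optimizes.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Small synthetic binary problem standing in for the Adult data.
X, y = make_classification(n_samples=200, weights=[0.7, 0.3], random_state=0)

# The sweep tries every combination and keeps the one with the best AUC.
param_grid = {
    "n_estimators": [20, 50],
    "learning_rate": [0.1, 0.2],
    "max_depth": [2, 3],
}
sweep = GridSearchCV(GradientBoostingClassifier(random_state=0),
                     param_grid, cv=3, scoring="roc_auc")
sweep.fit(X, y)
print(sweep.best_params_)
```

Using `scoring="roc_auc"` mirrors the metric we evaluate the Studio models with in the next task.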


Scoring and Evaluating the Models

In this task, we will compare how our two models performed using the Score Model and Evaluate Model modules.

AUC stands for “area under the curve”: as its name implies, it is the amount of area under the ROC curve, which is theoretically a value between 0 and 1. In practice the worst possible curve is a diagonal line, so the AUC should never be lower than 0.5 (for large data sets). Using the AUC metric you can quickly compare multiple learning models. The ROC curves of two models usually don’t cross each other, so when comparing two models, the one with the higher AUC will be the better one regardless of the threshold setting. Compared to the statistical measures of accuracy, precision, recall, and F1 score, AUC’s independence from the threshold makes it uniquely suited to model selection. On the other hand, unlike accuracy, precision, recall, and F1 score, AUC does not tell us what performance to expect from the model at a given threshold, nor can it be used to determine the optimal threshold value. In that regard it doesn’t take away the need for the other statistical measures.

In short, the ROC plot and the AUC are very useful for comparing and selecting the best machine learning model for a given data set. A model with an AUC score near 1, whose ROC curve comes close to the upper-left corner, performs very well. A model with a score near 0.5 will have a curve near the diagonal, and its performance is hardly better than a random predictor’s.
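To make the metric concrete, here is a tiny worked example with four hand-picked labels and scores (any real model would produce many more):

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]            # true class labels
scores = [0.1, 0.4, 0.35, 0.8]   # model's predicted probability of class 1

auc = roc_auc_score(y_true, scores)
print(auc)  # 0.75
```

The value 0.75 means that a randomly chosen positive example outranks a randomly chosen negative one 75% of the time: better than a random predictor (0.5), short of a perfect one (1.0). Note that no threshold appears anywhere in the computation, which is exactly the threshold-independence discussed above.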


Publishing the Trained Model as a Web Service for Inference

We are now ready to create a web service from an Azure ML prediction model. When the experiment run completes successfully, you will be guided to create a Scoring or Prediction Experiment.

Preparation for deployment is a three-step process:

  1. Remove one of the models
  2. Convert the training experiment you’ve created into a predictive experiment
  3. Deploy the predictive experiment as a web service

The prediction experiment will automatically be created for you with a click. In the prediction experiment, the learner will be replaced with a trained model that has been automatically saved for you from your training experiment.

Once your scoring experiment runs successfully, you will be guided to publish your trained model as a web service.
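Once the web service is live, any HTTP client can send it rows to score. The sketch below builds a request body in the shape the classic Studio Request/Response API expects; the URL, API key, and column names are placeholders you would copy from your own service's dashboard, and the `score` helper is a hypothetical wrapper, not part of any Azure SDK.

```python
import json

# Placeholders: copy the real values from your web service's dashboard.
URL = "https://<region>.services.azureml.net/workspaces/<ws-id>/services/<svc-id>/execute?api-version=2.0"
API_KEY = "<your-api-key>"

def build_payload(rows):
    """Assemble the JSON body for the Request/Response scoring endpoint."""
    return {
        "Inputs": {
            "input1": {
                "ColumnNames": ["age", "workclass", "education"],
                "Values": rows,
            }
        },
        "GlobalParameters": {},
    }

def score(rows):
    # Needs the `requests` package and a deployed service; not run here.
    import requests
    headers = {"Content-Type": "application/json",
               "Authorization": "Bearer " + API_KEY}
    resp = requests.post(URL, headers=headers,
                         data=json.dumps(build_payload(rows)))
    return resp.json()

payload = build_payload([["39", "Private", "Bachelors"]])
print(json.dumps(payload, indent=2))
```

The service's response mirrors this structure, with an `Outputs` object containing the scored labels and probabilities for each input row.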

Watch Preview

Preview the instructions that you will follow along in a hands-on session in your browser.

Snehan Kekre

About the Host (Snehan Kekre)


Snehan hosts Machine Learning and Data Sciences projects at Rhyme. He is in his senior year of university at the Minerva Schools at KGI, studying Computer Science and Artificial Intelligence. When not applying computational and quantitative methods to identify the structures shaping the world around him, he can sometimes be seen trekking in the mountains of Nepal.



Frequently Asked Questions


In Rhyme, all projects are completely hands-on. You don't just passively watch someone else; you use the software directly while following the host's (Snehan Kekre) instructions. Using the software is the only way to achieve mastery. With the "Live Guide" option, you can ask for help and get an immediate response.
Nothing! Just join through your web browser. Your host (Snehan Kekre) has already installed all required software and configured all data.
You can go to https://rhyme.com/for-companies, sign up for free, and follow the visual guide "How to use Rhyme" to create your own projects. If you have custom needs or a company-specific environment, please email us at help@rhyme.com
Absolutely. We offer Rhyme for workgroups as well as larger departments and companies. Universities, academies, and bootcamps can also buy Rhyme for their settings. You can select projects and trainings that are mission-critical for you, and also author your own that reflect your needs and tech environments. Please email us at help@rhyme.com
Rhyme's visual instructions are somewhat helpful for reading impairments, and the Rhyme interface has features like resolution and zoom that are slightly helpful for visual impairments. We are also currently developing closed-caption functionality to help with hearing impairments. Most of the accessibility options of the cloud desktop's operating system or the specific application can also be used in Rhyme. However, we still have a lot of work to do. If you have suggestions for accessibility, please email us at accessibility@rhyme.com
We started with Windows and Linux cloud desktops because they offer the most flexibility for teaching any software (desktop or web). However, web applications like Salesforce can run directly through a virtual browser, and others like Jupyter and RStudio can run on containers and be accessed through virtual browsers. We are currently working on features where such web applications won't need to run through cloud desktops, while the rest of the Rhyme learning, authoring, and monitoring interfaces remain the same.
Please email us at help@rhyme.com and we'll respond to you within one business day.

Ready to join this 58-minute session?

More Projects by Snehan Kekre