Data Visualization with Plotly and Seaborn: Breast Cancer Diagnosis – Feature Selection and Classification

Producing visualizations is an important first step in exploring and analyzing real-world data sets. As such, visualization is an indispensable tool in any data scientist's toolbox. It is also powerful for identifying problems in an analysis and for illustrating results.

In this project, we will employ the statistical data visualization library, Seaborn, to discover and explore the relationships in the Breast Cancer Wisconsin (Diagnostic) Data Set.

We will use the results from our exploratory data analysis (EDA) in the previous project, Breast Cancer Diagnosis – Exploratory Data Analysis, to:

  • Drop correlated features
  • Implement feature selection and feature extraction methods, including feature selection with correlation, univariate feature selection, recursive feature elimination, recursive feature elimination with cross-validation, principal component analysis (PCA), and tree-based feature selection
  • Build a boosted decision tree classifier with XGBoost to classify tumors as either malignant or benign


Task List


We will cover the following tasks in 47 minutes:

  • Importing Libraries and Data
  • Dropping Correlated Columns from Feature List
  • Classification using XGBoost (minimal feature selection)
  • Univariate Feature Selection
  • Recursive Feature Elimination with Cross-Validation
  • Plot CV Scores vs Number of Features Selected
  • Feature Extraction using Principal Component Analysis


Importing Libraries and Data

In this task, we will briefly recap our exploratory data analysis of the Breast Cancer Wisconsin (Diagnostic) Data Set in the previous project. To summarize, we:

  • Imported the data set
  • Separated the target column from the features
  • Visualized the target distribution
  • Standardized the data
  • Explored the relationship between the features using violin plots, joint plots, pair grids, and swarm plots
  • Identified the columns to be dropped by calculating the pairwise correlation coefficients
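
The following sketch recaps those steps in code. It assumes the Kaggle release of the data set (a data.csv file with a diagnosis target column plus an id column and an empty trailing column); the file path is an assumption, so adjust it to your copy:

    import pandas as pd
    from sklearn.preprocessing import StandardScaler

    # Load the data set (the file name is an assumption)
    data = pd.read_csv("data.csv")

    # The Kaggle CSV ships with an 'id' column and an empty trailing column;
    # drop them if present, then separate the target from the features
    data = data.drop(columns=["id", "Unnamed: 32"], errors="ignore")
    y = data["diagnosis"]                  # 'M' = malignant, 'B' = benign
    X = data.drop(columns=["diagnosis"])

    # Standardize each feature to zero mean and unit variance
    X_std = pd.DataFrame(StandardScaler().fit_transform(X), columns=X.columns)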


Dropping Correlated Columns from Feature List

Using the heatmap of the correlation matrix, we were able to identify columns to be dropped. Out of each set of correlated features, we will preserve the one that best separates the malignant and benign classes. We will identify these salient features using the violin plots and swarm plots produced in the previous project.
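
A minimal sketch of the dropping step is shown below; the exact drop list comes from the previous project's EDA, so the column names here are only illustrative:

    import seaborn as sns
    import matplotlib.pyplot as plt

    # Columns flagged as highly correlated during the EDA; this list is
    # illustrative, not the exact set identified in the previous project
    drop_list = ["perimeter_mean", "radius_worst", "compactness_mean",
                 "concave points_mean", "radius_se", "perimeter_worst"]
    X_reduced = X_std.drop(columns=drop_list)

    # Re-check the remaining pairwise Pearson correlations with a heatmap
    plt.figure(figsize=(12, 10))
    sns.heatmap(X_reduced.corr(), annot=True, fmt=".1f", cmap="coolwarm")
    plt.show()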


Classification using XGBoost (minimal feature selection)

We drop 15 correlated columns out of a total of 33. To verify that no strongly correlated features remain, we plot another correlation matrix and inspect the Pearson correlation coefficients.

Next, we use a helper function from scikit-learn to split our data into training and test sets. Using the default parameters, we will fit the XGBClassifier estimator to the training set and use the model to predict values in the test set.

We can evaluate the performance of our classifier using the accuracy score, F1 score, and confusion matrix from sklearn.metrics.
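
A hedged sketch of this pipeline, reusing X_reduced and the labels from the earlier steps; the 70/30 split ratio, the random seed, and the 0/1 label encoding are assumptions:

    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, f1_score, confusion_matrix
    from xgboost import XGBClassifier

    # Encode the labels numerically (recent XGBoost versions expect 0/1 targets)
    y_num = (y == "M").astype(int)

    # Hold out 30% of the data for testing (ratio and seed are assumptions)
    X_train, X_test, y_train, y_test = train_test_split(
        X_reduced, y_num, test_size=0.3, random_state=42)

    # Fit the boosted-tree classifier with its default parameters
    clf = XGBClassifier()
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)

    print("Accuracy:", accuracy_score(y_test, y_pred))
    print("F1 score:", f1_score(y_test, y_pred))
    print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))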


Univariate Feature Selection

In univariate feature selection, we will use scikit-learn's SelectKBest transformer, which scores each feature with the chi-squared test statistic and keeps the k features with the highest scores.

Recall that the chi-squared test measures the dependence between stochastic variables, so this method weeds out the features that are most likely to be independent of the class label and therefore irrelevant for classification.
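
A minimal sketch, reusing the training split from above. One caveat: the chi-squared test requires non-negative inputs, so the features are rescaled to [0, 1] first rather than fed in standardized (signed) form; k=5 is an assumption:

    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.preprocessing import MinMaxScaler

    # chi2 requires non-negative inputs, so rescale the training features
    # to [0, 1] instead of using the standardized values
    X_train_pos = MinMaxScaler().fit_transform(X_train)

    # Keep the k features with the highest chi-squared scores (k is an assumption)
    selector = SelectKBest(score_func=chi2, k=5)
    X_train_k = selector.fit_transform(X_train_pos, y_train)

    # Which columns survived?
    print(list(X_reduced.columns[selector.get_support()]))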


Recursive Feature Elimination with Cross-Validation

In this task, we will not only find the best features but also the optimal number of features needed for the best classification accuracy.
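
A sketch using scikit-learn's RFECV, with a random forest as the underlying estimator; the choice of estimator, step size, and five-fold CV are assumptions:

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import RFECV

    # Recursively drop the weakest feature, scoring each subset with 5-fold CV
    rfecv = RFECV(estimator=RandomForestClassifier(random_state=42),
                  step=1, cv=5, scoring="accuracy")
    rfecv.fit(X_train, y_train)

    print("Optimal number of features:", rfecv.n_features_)
    print("Selected features:", list(X_reduced.columns[rfecv.support_]))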


Plot CV Scores vs Number of Features Selected

We will evaluate the optimal number of features needed for the highest classification accuracy by plotting the cross-validation (CV) scores of the selected features on the y-axis against the number of selected features on the x-axis.
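
A sketch of that plot, assuming the fitted rfecv object from the previous task; note that recent scikit-learn versions expose the scores through cv_results_, while older releases used grid_scores_:

    import numpy as np
    import matplotlib.pyplot as plt

    # Mean CV accuracy for each candidate number of features
    # (scikit-learn >= 1.0; older versions exposed rfecv.grid_scores_ instead)
    scores = rfecv.cv_results_["mean_test_score"]

    plt.plot(np.arange(1, len(scores) + 1), scores, marker="o")
    plt.xlabel("Number of features selected")
    plt.ylabel("Cross-validation accuracy")
    plt.show()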


Feature Extraction using Principal Component Analysis

We will use principal component analysis (PCA) for feature extraction. We will first need to normalize the data for better performance.

A plot of the cumulative explained variance against the number of components will give us the percentage of variance explained by each of the selected components. This curve quantifies how much of the total variance is contained within the first N components.
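
A sketch of this step, reusing the standardized feature matrix from the start of the project; fitting PCA with all components kept is an assumption made so the full cumulative curve can be plotted:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.decomposition import PCA

    # Fit PCA on the standardized feature matrix and accumulate the
    # fraction of total variance explained by the first N components
    pca = PCA().fit(X_std)
    cum_var = np.cumsum(pca.explained_variance_ratio_)

    plt.plot(np.arange(1, len(cum_var) + 1), cum_var, marker=".")
    plt.xlabel("Number of components")
    plt.ylabel("Cumulative explained variance")
    plt.show()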



About the Host (Snehan Kekre)


Snehan hosts Machine Learning and Data Science projects at Rhyme. He is in his senior year at the Minerva Schools at KGI, studying Computer Science and Artificial Intelligence. When not applying computational and quantitative methods to identify the structures shaping the world around him, he can sometimes be seen trekking in the mountains of Nepal.



Frequently Asked Questions


How do Rhyme projects work?
In Rhyme, all projects are completely hands-on. You don't just passively watch someone else. You use the software directly while following the instructions of your host (Snehan Kekre). Using the software is the only way to achieve mastery. With the "Live Guide" option, you can ask for help and get an immediate response.

What do I need to install before joining?
Nothing! Just join through your web browser. Your host (Snehan Kekre) has already installed all required software and configured all data.

How can I create my own projects?
You can go to https://rhyme.com/for-companies, sign up for free, and follow the visual guide "How to use Rhyme" to create your own projects. If you have custom needs or a company-specific environment, please email us at help@rhyme.com.

Can I buy Rhyme for my team or company?
Absolutely. We offer Rhyme for workgroups as well as larger departments and companies. Universities, academies, and bootcamps can also buy Rhyme for their settings. You can select projects and trainings that are mission-critical for you, and also author your own to reflect your needs and tech environments. Please email us at help@rhyme.com.

How accessible is Rhyme?
Rhyme's visual instructions are somewhat helpful for reading impairments. The Rhyme interface has features like resolution and zoom that are slightly helpful for visual impairment. And we are currently developing closed-caption functionality to help with hearing impairment. Most of the accessibility options of the cloud desktop's operating system or the specific application can also be used in Rhyme. However, we still have a lot of work to do. If you have suggestions for accessibility, please email us at accessibility@rhyme.com.

Which platforms and applications does Rhyme support?
We started with Windows and Linux cloud desktops because they have the most flexibility for teaching any software, desktop or web. However, web applications like Salesforce can run directly through a virtual browser, and others like Jupyter and RStudio can run on containers and be accessed through virtual browsers. We are currently working on features that will let such web applications run without a cloud desktop. The rest of the Rhyme learning, authoring, and monitoring interfaces will remain the same.

What if I have other questions?
Please email us at help@rhyme.com and we'll respond to you within one business day.

Ready to join this 47-minute session?
