Machine Learning Visualization: Poker Hand Classification using Random Forests

In this project, we’ll explore how to evaluate the performance of a random forest classifier from the scikit-learn library on the Poker Hand dataset using visual diagnostic tools from Yellowbrick. With an emphasis on visually steering our analysis, we will cover the following topics in our machine learning workflow:

  • Feature analysis
  • Feature importance
  • Algorithm selection
  • Model evaluation for classification
  • Cross-validation
  • Hyperparameter tuning

Start for Free
First 2 tasks free. Then, decide to pay $9.99 for the rest

Task List


We will cover the following tasks in 1 hour and 7 minutes:


Introduction to the Project and Dataset

We will get familiar with the Rhyme interface and our learning environment. You will be provided with a cloud desktop with Jupyter Notebooks and all the software you will need to complete the project. Jupyter Notebooks are very popular with data scientists and machine learning engineers, as you can write code in some cells and use other cells for documentation.

We’ll also learn about the Poker Hand dataset from the UCI Machine Learning Repository. The premise is that given some features of a hand of cards in a poker game, we should be able to predict the type of hand.

Next, we read the data from disk into a pandas DataFrame.
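As a minimal sketch of this step (assuming the training file has been downloaded from the UCI repository and saved locally as poker-hand-training-true.data; your path may differ):

    import pandas as pd

    # The UCI Poker Hand file has no header row: each record lists the suit and
    # rank of five cards, followed by the numeric hand class (0-9).
    data = pd.read_csv("poker-hand-training-true.data", header=None)

    print(data.shape)
    print(data.head())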


Separate the Data into Features and Targets

In this task, we will first manually label the columns and classes based on the dataset description from the UCI Repository.

Finally, we’ll separate the data into features X and targets y for further analysis.
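A sketch of how the labeling and separation might look, continuing from the DataFrame loaded above; the column and class names follow the UCI dataset description:

    # Suit (S) and rank (C) of each of the five cards, plus the hand class.
    data.columns = ['S1', 'C1', 'S2', 'C2', 'S3', 'C3', 'S4', 'C4', 'S5', 'C5', 'CLASS']

    # Human-readable labels for the class codes 0-9.
    classes = ['Nothing', 'One Pair', 'Two Pair', 'Three of a Kind', 'Straight',
               'Flush', 'Full House', 'Four of a Kind', 'Straight Flush', 'Royal Flush']

    # Features X are the ten card attributes; the target y is the hand class.
    X = data.drop(columns='CLASS')
    y = data['CLASS']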


Evaluating Class Balance

A very common question during model evaluation is, “Why isn’t the model I’ve picked predictive?” After completing this task, you will have a good answer to this question. The idea centers on the imbalance between classes within your data. We will also learn best practices for handling such imbalances so that they do not adversely affect model performance.
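One way to inspect this is with Yellowbrick’s ClassBalance visualizer; a sketch, assuming the y and classes defined earlier (older Yellowbrick releases use poof() instead of show()):

    from yellowbrick.target import ClassBalance

    # Bar chart of the support (number of instances) for each poker hand class.
    visualizer = ClassBalance(labels=classes)
    visualizer.fit(y)
    visualizer.show()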


Up-sampling from Minority Classes

Because of the severe class imbalance among the rarer hands, we’ll use pandas to merge these rare classes into a single class covering Flush or better.
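A possible pandas sketch of this merging step, assuming y is the target Series from earlier and that the rare hands are the class codes 5-9 (Flush or better):

    # Collapse the rare hands (codes 5-9) into a single class with code 5,
    # leaving the common hands (codes 0-4) unchanged.
    y = y.where(y < 5, other=5)

    # Updated label list: the five common hands plus the merged category.
    classes = ['Nothing', 'One Pair', 'Two Pair', 'Three of a Kind', 'Straight',
               'Flush or Better']

    print(y.value_counts().sort_index())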


Training a Random Forest Classifier

Now we’ll partition our poker hand data into training and test splits, so that we can evaluate our fitted model on data that it wasn’t trained on. This will allow us to see how well our random forest model is balancing the bias/variance trade-off.
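A sketch of the split-and-fit step with scikit-learn, assuming the X and y prepared above; the split ratio and hyperparameters shown are illustrative rather than the course’s exact settings:

    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier

    # Hold out 20% of the hands for testing; stratify so the class proportions
    # are preserved in both splits despite the imbalance.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    # Fit the random forest on the training split only.
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)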


Classification Accuracy

In this short task, we will compute the classification accuracy score of our random forest model on the test data.
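In code this is a one-liner with scikit-learn, assuming the fitted model and test split from the previous task:

    from sklearn.metrics import accuracy_score

    # Fraction of test hands whose class is predicted correctly.
    y_pred = model.predict(X_test)
    print("Accuracy:", accuracy_score(y_test, y_pred))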


ROC Curve and AUC

Now that our model is fitted, we can evaluate its performance using some of Yellowbrick’s visualizers for classification. With Yellowbrick’s implementation of ROCAUC, we can evaluate a multi-class classifier. Yellowbrick does this by plotting the ROC curve for each class as though it were its own binary classifier, all on one plot.
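A sketch using the ROCAUC visualizer, assuming the model, splits, and classes from the earlier tasks:

    from yellowbrick.classifier import ROCAUC

    # One ROC curve per class (one-vs-rest), plus micro- and macro-average curves.
    visualizer = ROCAUC(model, classes=classes)
    visualizer.fit(X_train, y_train)    # fit the wrapped estimator
    visualizer.score(X_test, y_test)    # compute ROC/AUC on the held-out data
    visualizer.show()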


Classification Report Heatmap

The classification report displays the precision, recall, and F1 scores for the model. In order to support easier interpretation and problem detection, Yellowbrick’s implementation of ClassificationReport integrates numerical scores with a color-coded heatmap.

The classification report shows a representation of the main classification metrics on a per-class basis. This gives deeper intuition about classifier behavior than global accuracy, which can mask functional weaknesses in one class of a multiclass problem.
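The ClassificationReport visualizer follows the same fit/score/show pattern; a sketch, assuming the same variables as above:

    from yellowbrick.classifier import ClassificationReport

    # Heatmap of per-class precision, recall, and F1; support=True also shows
    # the number of test instances in each class.
    visualizer = ClassificationReport(model, classes=classes, support=True)
    visualizer.fit(X_train, y_train)
    visualizer.score(X_test, y_test)
    visualizer.show()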


Class Prediction Error

The Yellowbrick Class Prediction Error chart shows the support for each class in the fitted classification model as a stacked bar chart. Each bar is segmented to show the distribution of predicted classes for that actual class. The visualizer is initialized with a fitted model and generates the class prediction error chart on draw. For my part, I find ClassPredictionError a convenient and easier-to-interpret alternative to the standard confusion matrix.
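A sketch of the ClassPredictionError visualizer, again assuming the fitted model and splits from the earlier tasks:

    from yellowbrick.classifier import ClassPredictionError

    # One stacked bar per actual class, segmented by the classes the model predicted.
    visualizer = ClassPredictionError(model, classes=classes)
    visualizer.fit(X_train, y_train)
    visualizer.score(X_test, y_test)
    visualizer.show()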

Watch Preview

Preview the instructions that you will follow in a hands-on session in your browser.


About the Host (Snehan Kekre)


Snehan hosts Machine Learning courses at Rhyme. He is in his senior year of university at the Minerva Schools at KGI, pursuing a double major in the Natural Sciences and Computational Sciences, with a focus on physics and machine learning. When not applying computational and quantitative methods to identify the structures shaping the world around him, he can sometimes be seen trekking in the mountains of Nepal.



Frequently Asked Questions


In Rhyme, all projects are completely hands-on. You don't just passively watch someone else; you use the software directly while following the host's (Snehan Kekre) instructions. Using the software is the only way to achieve mastery. With the "Live Guide" option, you can ask for help and get an immediate response.
Nothing! Just join through your web browser. Your host (Snehan Kekre) has already installed all required software and configured all data.
You can go to https://rhyme.com/for-companies, sign up for free, and follow the visual guide "How to use Rhyme" to create your own sessions. If you have custom needs or a company-specific environment, please email us at help@rhyme.com.
Absolutely. We offer Rhyme for workgroups as well as for larger departments and companies. Universities, academies, and bootcamps can also buy Rhyme for their settings. You can select sessions and trainings that are mission-critical for you, and also author your own to reflect your needs and tech environment. Please email us at help@rhyme.com.
Please email us at help@rhyme.com and we'll respond to you within one business day.
