Machine Learning Visualization: Visual Analysis of Document Similarity

Tasks such as assessing document similarity, topic modelling, and other text mining endeavors are predicated on the notion of “closeness” or “similarity” between documents. In this project, we define various distance metrics (e.g. Euclidean, Hamming, Cosine, Manhattan) and examine their merits and shortcomings as they relate to document similarity. We will apply these metrics to documents within a specific corpus and visualize our results.

First 2 tasks free. Then, decide whether to pay $9.99 for the rest.

Task List

We will cover the following tasks in 59 minutes:

Introduction and Loading the Corpus

We will get familiar with the Rhyme interface and our learning environment. You will be provided with a cloud desktop with Jupyter Notebooks and all the software you will need to complete the project. Jupyter Notebooks are very popular with Data Scientists and Machine Learning Engineers, as one can write code in some cells and use other cells for documentation.

Next, we will load the hobbies corpus from a local directory and explore its attributes.
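As a rough sketch, loading such a corpus can be as simple as walking a directory tree. The layout assumed here (one subfolder per category, plain-text files inside) is a guess at how the hobbies corpus is stored locally:

```python
from pathlib import Path

def load_corpus(root):
    """Read every .txt file under root; each subdirectory name becomes a label."""
    documents, labels = [], []
    for path in sorted(Path(root).rglob("*.txt")):
        documents.append(path.read_text(encoding="utf-8", errors="replace"))
        labels.append(path.parent.name)
    return documents, labels
```

With this layout, `labels` doubles as the class labels we will later compare against our clusters.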

Vectorizing the Documents

Text data requires special preparation before you can start using it for predictive modeling. The text must first be parsed into individual words or tokens, a step called tokenization. Then the tokens need to be encoded as integers or floating-point values for use as input to a machine learning algorithm, a step called feature extraction (or vectorization).

We will use scikit-learn’s TfidfVectorizer to vectorize our corpus and then extract the documents and labels.

Clustering Similar Documents with Squared Euclidean Distance And Euclidean Distance

To effectively cluster similar documents, we will use t-distributed Stochastic Neighbor Embedding (t-SNE). It decomposes high-dimensional document vectors into 2 dimensions using probability distributions over pairwise distances in both the original space and the decomposed space. By decomposing to 2 or 3 dimensions, the documents can be visualized with a scatter plot.

We will use Yellowbrick’s TSNEVisualizer with two metrics: squared Euclidean distance and Euclidean distance. We will find that Euclidean distance is not an ideal metric to use here. That’s because when we vectorize a corpus, we end up with huge, sparse vectors.
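Yellowbrick’s TSNEVisualizer wraps scikit-learn’s TSNE under the hood, so a bare-bones sketch with TSNE itself conveys the idea; the random matrix below is a stand-in for TF-IDF document vectors:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.random((20, 50))  # stand-in for 20 document vectors in 50 dimensions

# t-SNE builds probability distributions over pairwise distances in the
# original space and in the 2-D embedding, then matches the two.
tsne = TSNE(n_components=2, metric="sqeuclidean",
            perplexity=5, init="random", random_state=0)
embedding = tsne.fit_transform(X)
print(embedding.shape)  # (20, 2): one 2-D point per document, ready to scatter-plot
```

Swapping `metric="sqeuclidean"` for `"euclidean"` (or any other metric name) is how the comparisons in the following tasks are made.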

Manhattan (aka “Taxicab” or “City Block”) Distance

In general, choosing which distance metric to use is an important, but often overlooked, modelling decision. A lot of practitioners end up using the L2 distance just because it’s easy to remember. However, in many situations, the L1 or Manhattan distance makes more sense to use and is more robust.

In this task, we define what constitutes a good distance metric and apply the Manhattan distance to our text data.
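For two vectors, the Manhattan distance is just the sum of absolute coordinate differences (the formula behind the "cityblock"/"manhattan" metric name in SciPy and scikit-learn). A minimal sketch, contrasted with the L2 result on the same pair:

```python
def manhattan(u, v):
    """L1 / city-block distance: sum of absolute coordinate differences."""
    return sum(abs(a - b) for a, b in zip(u, v))

print(manhattan([0, 0], [3, 4]))  # 7, versus a Euclidean (L2) distance of 5
```

Because it grows linearly rather than quadratically in each coordinate, L1 is less dominated by a single large difference, which is part of why it can be more robust.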

Bray-Curtis Dissimilarity and Canberra Distance

Here we take a look at two other non-Euclidean metrics and see whether they are applicable to document similarity. Each metric makes a different set of assumptions, and both have historically been used for ecological purposes.
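Both metrics have short closed forms; the sketch below follows the definitions used by scipy.spatial.distance:

```python
def bray_curtis(u, v):
    """Sum of absolute differences over sum of absolute sums."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(abs(a + b) for a, b in zip(u, v))
    return num / den

def canberra(u, v):
    """Weighted L1: each coordinate's difference is scaled by its magnitude."""
    return sum(abs(a - b) / (abs(a) + abs(b))
               for a, b in zip(u, v)
               if a != 0 or b != 0)

print(bray_curtis([1, 0, 2], [2, 1, 0]))  # 4/6 ≈ 0.667
print(canberra([1, 0, 2], [2, 1, 0]))     # 1/3 + 1 + 1 ≈ 2.333
```

The per-coordinate scaling in Canberra makes it very sensitive to coordinates near zero, which matters for the mostly-zero TF-IDF vectors we are working with.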

Cosine Distance

We can also measure vector similarity with cosine distance, using the cosine of the angle between the two vectors to assess the degree to which they share the same orientation.

Cosine distance is often an excellent option for text data because it corrects for any variations in the length of the documents (since we’re measuring the angle between vectors rather than their magnitudes). Moreover, it can be a very efficient way to compute distance with sparse vectors because it considers only the non-zero elements.
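A minimal sketch of the computation, taking 1 minus the cosine of the angle between the vectors:

```python
import math

def cosine_distance(u, v):
    """1 minus the cosine of the angle between u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Parallel vectors of different magnitudes: angle 0, so distance ≈ 0
# (up to float rounding) -- this is the length-invariance mentioned above.
print(cosine_distance([1, 2, 0], [2, 4, 0]))
```

One document twice as long as another, with the same word proportions, gets a cosine distance of (nearly) zero, which is exactly the behavior we want for text.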

What Metrics Not to Use

There is also a whole bucket of distance metrics that make no sense at all for the sparse vectors produced from text data. Let’s take a look!

Omitting Class Labels - Using KMeans Clustering

In this task, we experiment with omitting the class labels entirely during visualization to see if any meaningful patterns are observed.

We can then use cluster membership from K-Means to label each document. This will allow us to look for clusters of related text by their contents.
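A sketch of this pseudo-labelling idea with scikit-learn’s KMeans; the four toy documents and the choice of two clusters are illustrative assumptions:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "simmer the soup and season to taste",
    "the chef seasoned the soup with herbs",
    "the striker scored a late goal",
    "a goal in the final minute won the match",
]
X = TfidfVectorizer().fit_transform(docs)

# n_clusters=2 is a guess here; with the hobbies corpus it would typically
# match the number of categories in the corpus.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # one cluster id per document, usable as pseudo-labels
```

The `labels_` array can then be fed to the visualizer in place of the true class labels to see whether the clusters line up with the document contents.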



About the Host (Snehan Kekre)

Snehan hosts Machine Learning courses at Rhyme. He is in his senior year of university at the Minerva Schools at KGI, pursuing a double major in the Natural Sciences and Computational Sciences, with a focus on physics and machine learning. When not applying computational and quantitative methods to identify the structures shaping the world around him, he can sometimes be seen trekking in the mountains of Nepal.

Frequently Asked Questions

In Rhyme, all projects are completely hands-on. You don't just passively watch someone else; you use the software directly while following the host's (Snehan Kekre) instructions. Using the software is the only way to achieve mastery. With the "Live Guide" option, you can ask for help and get an immediate response.
Nothing! Just join through your web browser. Your host (Snehan Kekre) has already installed all required software and configured all data.
You can go to, sign up for free, and follow the visual guide "How to use Rhyme" to create your own sessions. If you have custom needs or a company-specific environment, please email us at
Absolutely. We offer Rhyme for workgroups as well as larger departments and companies. Universities, academies, and bootcamps can also buy Rhyme for their settings. You can select sessions and trainings that are mission-critical for you, and also author your own that reflect your needs and tech environments. Please email us at
Please email us at and we'll respond to you within one business day.
