We will cover the following tasks in 59 minutes:
Introduction and Loading the Corpus
We will understand the Rhyme interface and our learning environment. You will be provided with a cloud desktop running Jupyter Notebooks and all the software you need to complete the project. Jupyter Notebooks are popular with data scientists and machine learning engineers because code can be written in cells, with other cells used for documentation.
Next, we will load the hobbies corpus from a local directory and explore its attributes.
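As a sketch of what loading a corpus from a local directory can look like, the snippet below builds a tiny stand-in corpus (the category names and documents are invented for illustration) and loads it with scikit-learn's `load_files`, which treats each subdirectory as a class label:

```python
import tempfile
from pathlib import Path

from sklearn.datasets import load_files

# Build a tiny stand-in corpus on disk: one subdirectory per category.
# The categories and texts here are placeholders, not the real hobbies data.
root = Path(tempfile.mkdtemp()) / "hobbies"
docs = {
    "books": ["I enjoyed the new fantasy novel.", "The plot of this book dragged."],
    "cooking": ["Simmer the sauce for ten minutes.", "This recipe needs more garlic."],
}
for category, texts in docs.items():
    (root / category).mkdir(parents=True)
    for i, text in enumerate(texts):
        (root / category / f"{i}.txt").write_text(text)

# load_files discovers the labels from the directory names.
corpus = load_files(str(root), encoding="utf-8")
print(len(corpus.data))      # number of documents loaded
print(corpus.target_names)   # category names, sorted alphabetically
```

The returned bunch exposes `data`, `target`, and `target_names` attributes, which is the kind of exploration this task walks through.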
Vectorizing the Documents
Text data requires special preparation before you can use it for predictive modeling. First, the text must be parsed to split it into words, a step called tokenization. Then the words need to be encoded as integers or floating-point values for use as input to a machine learning algorithm, a step called feature extraction (or vectorization).
We will use scikit-learn’s TfidfVectorizer to vectorize our corpus and then extract the documents and labels.
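A minimal example of the vectorization step, using a toy three-document corpus in place of the hobbies documents:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# A toy corpus standing in for the hobbies documents.
documents = [
    "gaming consoles and gaming keyboards",
    "baking bread and baking cakes",
    "hiking trails and hiking boots",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)  # sparse matrix: documents x vocabulary

print(X.shape)  # one row per document, one column per vocabulary term
print(X.nnz)    # only a handful of entries per row are non-zero
```

Even on this toy corpus most entries of each row are zero; on a real corpus the vocabulary runs to tens of thousands of terms, which is why the resulting vectors are huge and sparse.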
Clustering Similar Documents with Squared Euclidean Distance And Euclidean Distance
To effectively cluster similar documents, we will use t-distributed Stochastic Neighbor Embedding (t-SNE). It decomposes high-dimensional document vectors into 2 dimensions using probability distributions over pairs of points in both the original space and the decomposed space. By decomposing to 2 or 3 dimensions, the documents can be visualized with a scatter plot.
We will use Yellowbrick’s TSNEVisualizer with two metrics: squared Euclidean distance and Euclidean distance. We will find that Euclidean distance is not an ideal metric to use. That’s because when we vectorize a corpus, we end up with huge, sparse vectors.
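The course uses Yellowbrick's TSNEVisualizer for this; the sketch below uses scikit-learn's `TSNE` directly (which the visualizer wraps) to show how the `metric` argument selects the distance used in the original high-dimensional space. The documents here are invented placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE

documents = [
    "chess openings and chess endgames",
    "chess tactics for club players",
    "sourdough starter feeding schedule",
    "baking sourdough at home",
    "trail running shoes review",
    "marathon training for trail runners",
]
X = TfidfVectorizer().fit_transform(documents).toarray()

# Decompose the high-dimensional tf-idf vectors to 2D. The metric argument
# controls the distance computed in the original space; "sqeuclidean" is the
# squared Euclidean distance this task compares against plain "euclidean".
embedding = TSNE(
    n_components=2,
    metric="sqeuclidean",
    perplexity=2,      # must be smaller than the number of samples
    init="random",
    random_state=42,
).fit_transform(X)

print(embedding.shape)  # (6, 2): one 2D point per document, ready to scatter-plot
```

Swapping `metric="sqeuclidean"` for `metric="euclidean"` reproduces the comparison made in this task.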
Manhattan (aka “Taxicab” or “City Block”) Distance
In general, choosing which distance metric to use is an important, but often ignored, modeling decision. Many practitioners end up using the L2 (Euclidean) distance just because it’s easy to remember. However, in many situations the L1 or Manhattan distance makes more sense and is more robust.
In this task, we define what constitutes a good distance metric and apply the Manhattan distance to our text data.
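The contrast between the two metrics can be sketched with `pairwise_distances`, which works directly on the sparse tf-idf matrix (the documents below are placeholders): L1 sums absolute coordinate differences, while L2 squares them, down-weighting the many small differences typical of sparse text vectors.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances

documents = [
    "photography lenses and camera photography",
    "camera tripods for landscape photography",
    "fly fishing rods and reels",
]
X = TfidfVectorizer().fit_transform(documents)

# Both calls accept the sparse matrix as-is and return a symmetric
# (n_docs x n_docs) distance matrix with zeros on the diagonal.
l1 = pairwise_distances(X, metric="manhattan")
l2 = pairwise_distances(X, metric="euclidean")

print(np.round(l1, 2))
print(np.round(l2, 2))
```

For any pair of vectors the L1 distance is at least as large as the L2 distance, so the two matrices give genuinely different notions of "close" for the subsequent clustering.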
Bray-Curtis Dissimilarity and Canberra Distance
Here we take a look at two other non-Euclidean metrics and see if they are applicable to document similarity. Both metrics have a different set of assumptions and have been used historically for ecological purposes.
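Both metrics are available in SciPy; the sketch below evaluates them on two small, made-up term-count vectors to show their defining formulas:

```python
import numpy as np
from scipy.spatial.distance import braycurtis, canberra

# Two small term-frequency vectors standing in for documents.
u = np.array([2.0, 0.0, 1.0, 3.0])
v = np.array([1.0, 1.0, 0.0, 3.0])

# Bray-Curtis: sum(|u_i - v_i|) / sum(|u_i + v_i|), bounded in [0, 1]
# for non-negative vectors; originally used to compare species counts.
print(braycurtis(u, v))

# Canberra: sum(|u_i - v_i| / (|u_i| + |v_i|)), which weights differences
# in rare (small-count) terms more heavily.
print(canberra(u, v))
```

For these vectors Bray-Curtis gives 3/11 and Canberra gives 7/3, illustrating how Canberra amplifies disagreement on the low-count coordinates.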
We can also measure vector similarity with cosine distance, using the cosine of the angle between the two vectors to assess the degree to which they share the same orientation.
Cosine distance is often an excellent option for text data because it corrects for any variations in the length of the documents (since we’re measuring the angle between vectors rather than their magnitudes). Moreover, it can be a very efficient way to compute distance with sparse vectors because it considers only the non-zero elements.
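The length-correction property is easy to demonstrate: a document and a version of it repeated ten times have very different magnitudes but the same orientation, so their cosine distance is essentially zero. A minimal sketch with an invented document:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

short_doc = "gardening tips for tomatoes"
long_doc = " ".join([short_doc] * 10)  # same content, ten times the length

X = TfidfVectorizer().fit_transform([short_doc, long_doc])
d = cosine_distances(X)

# Repetition scales the vector's magnitude but not its direction,
# so the cosine distance between the two documents is ~0.
print(round(d[0, 1], 6))
```

A Euclidean distance between the raw count vectors of these two documents would be large, which is exactly the distortion cosine distance avoids.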
What Metrics Not to Use
There are also a whole bucket of distance metrics that make no sense at all for sparse, high-dimensional text vectors. Let’s take a look!
Omitting Class Labels - Using KMeans Clustering
In this task, we experiment with omitting the class labels entirely during visualization to see if any meaningful patterns are observed.
We can then use cluster membership from K-Means to label each document. This will allow us to look for clusters of related text by their contents.
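A minimal sketch of that labeling step, with placeholder documents: fit K-Means on the tf-idf matrix with no class labels, then read the cluster assignment off each document.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "chess openings and chess tactics",
    "chess endgames for beginners",
    "sourdough bread baking tips",
    "baking a sourdough loaf at home",
]
X = TfidfVectorizer().fit_transform(documents)  # KMeans accepts the sparse matrix

# Cluster without any class labels; each document's cluster id becomes its
# label. n_init is set explicitly to keep behavior stable across versions.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for doc, label in zip(documents, km.labels_):
    print(label, doc)
```

On this toy corpus the two chess documents and the two baking documents share vocabulary, so K-Means recovers the natural grouping even though it never saw a label.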
About the Host (Snehan Kekre)
Snehan hosts Machine Learning courses at Rhyme. He is in his senior year of university at the Minerva Schools at KGI, pursuing a double major in the Natural Sciences and Computational Sciences, with a focus on physics and machine learning. When not applying computational and quantitative methods to identify the structures shaping the world around him, he can sometimes be seen trekking in the mountains of Nepal.