We will cover the following tasks in 1 hour and 26 minutes:
Introduction and Importing the Data
In this task, we get an overview of this project and familiarize ourselves with the popular IMDB movie review dataset.
Transforming Documents into Feature Vectors
We will get a description of what logistic regression is and why we use it for sentiment analysis. Once we have a clear idea of the features and the model, we will encounter our first natural language processing concept: the bag-of-words model.
From scikit-learn, we will use the fit_transform method of CountVectorizer. This will construct the vocabulary of the bag-of-words model and transform the provided sample sentences into sparse feature vectors.
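A minimal sketch of this step, using a few stand-in sample sentences (the exact sentences used in the project may differ):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Sample sentences standing in for the movie reviews
docs = [
    "the sun is shining",
    "the weather is sweet",
    "the sun is shining, the weather is sweet",
]

count = CountVectorizer()
bag = count.fit_transform(docs)  # builds the vocabulary and vectorizes in one step

print(count.vocabulary_)  # token -> column index mapping
print(bag.toarray())      # dense view of the sparse count vectors
```

Each row of the resulting matrix is one document, and each column holds the raw count of one vocabulary term in that document.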
Term Frequency-Inverse Document Frequency
In information retrieval and text mining, we often observe words that crop up across our entire corpus of documents. These words can lead to bad performance during training and test time because they usually don’t contain useful information. In this task, we will understand and implement a useful statistical technique to mitigate this — term frequency-inverse document frequency (tf-idf) can be used to downweight this class of words in our feature vector representation. The tf-idf is the product of the term frequency and the inverse document frequency.
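The product can be sketched in a few lines of plain Python. The smoothed variant below matches scikit-learn's default (smooth_idf=True); the textbook form is idf(t) = log(n / df(t)):

```python
import math

def tf_idf(tf, n_docs, df, smooth=True):
    """tf-idf: term frequency times inverse document frequency.

    With smooth=True this follows scikit-learn's smoothed variant:
    idf = ln((1 + n_docs) / (1 + df)) + 1.
    """
    if smooth:
        idf = math.log((1 + n_docs) / (1 + df)) + 1
    else:
        idf = math.log(n_docs / df)
    return tf * idf

# A term appearing in every document gets idf = 1 under the smoothed
# formula, so its tf-idf is just its raw count -- while rarer terms
# are scaled up by a larger idf.
print(tf_idf(tf=2, n_docs=3, df=3))  # term in all 3 of 3 docs
print(tf_idf(tf=1, n_docs=3, df=1))  # term in only 1 of 3 docs
```

Note how a ubiquitous word ends up with a smaller weight per occurrence than a rare one, which is exactly the downweighting effect described above.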
Calculate TF-IDF of the Term 'Is'
We continue with the example from Task 3 and calculate the tf-idf of a term by hand. Next, we use scikit-learn’s TfidfTransformer to convert our sample text into a vector of tf-idf values and apply L2-normalization to it.
Cleaning and preprocessing text data is a vital process in data analysis and especially in natural language processing tasks. In this task, we will take a look at a few reviews from our dataset and learn how to strip them of irrelevant characters like HTML tags, punctuation, and emojis using regular expressions.
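A minimal sketch of such a regex-based cleaner; the exact patterns below (and the choice to keep emoticons) are an illustrative assumption, not necessarily the ones used in the project:

```python
import re

def preprocessor(text):
    """Strip HTML tags and punctuation, but keep emoticons like :-)."""
    text = re.sub(r"<[^>]*>", "", text)  # drop HTML tags such as <br />
    emoticons = re.findall(r"(?::|;|=)(?:-)?(?:\)|\(|D|P)", text)
    text = re.sub(r"[\W]+", " ", text.lower())  # remove non-word characters
    # re-append the emoticons (without the 'nose' character) at the end
    return text + " " + " ".join(emoticons).replace("-", "")

print(preprocessor("<br />This movie is :-) <b>GREAT</b>!"))
```

The cleaned string is lowercase, free of tags and punctuation, and still records the sentiment-bearing emoticon.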
Tokenization of Documents
In this task, we learn how to represent our data as a collection of words or tokens. We will also perform word-level preprocessing tasks such as stemming. To accomplish this, we use the Natural Language Toolkit (nltk) in Python.
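A sketch of the two tokenizers this step might produce: a plain whitespace split and a Porter-stemming variant built on nltk (the example sentence is illustrative):

```python
from nltk.stem.porter import PorterStemmer

porter = PorterStemmer()

def tokenizer(text):
    # the simplest word-level tokenization: split on whitespace
    return text.split()

def tokenizer_porter(text):
    # reduce each token to its stem, e.g. 'running' -> 'run'
    return [porter.stem(word) for word in text.split()]

print(tokenizer_porter("runners like running and thus they run"))
```

Stemming maps inflected forms onto a common root, shrinking the vocabulary the model has to learn, at the cost of occasionally producing non-words.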
Document Classification Using Logistic Regression
First, we split our data into training and test sets of equal size. We then create a pipeline to build a logistic regression model. To estimate the best parameters and model, we employ cross-validated grid search over a parameter grid.
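The pipeline-plus-grid-search pattern can be sketched as follows; the tiny stand-in corpus and the parameter grid are illustrative assumptions (the project tunes a larger grid on the 50,000-review IMDB dataset):

```python
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV

# Tiny toy corpus: 1 = positive review, 0 = negative review
docs = ["a wonderful film", "great acting and plot", "i loved it",
        "a terrible movie", "boring and dull", "i hated it"] * 5
labels = [1, 1, 1, 0, 0, 0] * 5

# Vectorization and classification chained into a single estimator
pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(solver="liblinear")),
])

# Cross-validated grid search over the regularization strength C
param_grid = {"clf__C": [0.1, 1.0, 10.0]}
gs = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
gs.fit(docs, labels)

print(gs.best_params_, gs.best_score_)
```

Because the vectorizer sits inside the pipeline, it is refit on each cross-validation split, so no information leaks from the held-out fold into the features.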
Load Saved Model from Disk
Although the time it takes to train logistic regression models is very little, estimating the best parameters for our model using GridSearchCV can take hours given the size of our training set. So in this task, we load a pre-trained model that we will later use to find the best parameter settings, cross-validation score, and test accuracy.
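Loading a saved estimator is a one-liner with pickle (joblib works similarly). The sketch below trains a throwaway model and round-trips it through disk; the filename and the tiny model are hypothetical stand-ins for the course's pre-trained object:

```python
import os
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for the expensive GridSearchCV fit done in the training session
model = LogisticRegression().fit(np.array([[0.0], [1.0]]), [0, 1])

with open("saved_model.pkl", "wb") as f:  # the training script writes this once
    pickle.dump(model, f)

with open("saved_model.pkl", "rb") as f:  # later sessions just reload it
    loaded = pickle.load(f)

print(loaded.predict(np.array([[0.0], [1.0]])))
os.remove("saved_model.pkl")
```

The reloaded object carries its fitted coefficients, so it can score and predict immediately without retraining.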
In this final task, we take a look at the best parameter settings, the cross-validation score, and how well our model classifies the sentiments of reviews it has never seen before from the test set.
About the Host (Snehan Kekre)
Snehan hosts Machine Learning courses at Rhyme. He is in his senior year of university at the Minerva Schools at KGI, pursuing a double major in the Natural Sciences and Computational Sciences, with a focus on physics and machine learning. When not applying computational and quantitative methods to identify the structures shaping the world around him, he can sometimes be seen trekking in the mountains of Nepal.