scikit-learn: Logistic Regression for Sentiment Analysis

In this project, we will learn the fundamentals of sentiment analysis and apply our knowledge to classify movie reviews as either positive or negative. We will use the popular IMDB dataset. Our goal is to use a simple logistic regression model from Scikit-Learn for document classification.

Start for Free
First 2 tasks free. Then decide whether to pay $9.99 for the rest.

Rating: 5.0 / 5
Task List


We will cover the following tasks in 1 hour and 26 minutes:


Introduction and Importing the Data

In this task, we get an overview of the project and familiarize ourselves with the popular IMDB movie review dataset.


Transforming Documents into Feature Vectors

We will get a description of what logistic regression is and why we use it for sentiment analysis. Once we have a clear idea of the features and model, we will encounter our first natural language processing concept: the bag-of-words model. From scikit-learn, we call the fit_transform method on CountVectorizer, which constructs the vocabulary of the bag-of-words model and transforms the provided sample sentences into sparse feature vectors.
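
As a rough sketch of what this step looks like in code (the sample sentences here are made up for illustration; the project uses its own examples):

    from sklearn.feature_extraction.text import CountVectorizer

    # Hypothetical sample sentences for illustration only.
    docs = ["The sun is shining",
            "The weather is sweet",
            "The sun is shining, the weather is sweet, and one and one is two"]

    count = CountVectorizer()
    bag = count.fit_transform(docs)   # learn the vocabulary and build sparse count vectors

    print(count.vocabulary_)          # mapping from term to column index
    print(bag.toarray())              # dense view of the sparse feature vectors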


Term Frequency-Inverse Document Frequency

In information retrieval and text mining, we often observe words that crop up frequently across our corpus of documents. These words can hurt performance at training and test time because they usually carry little useful information. In this task, we will understand and implement a useful statistical technique to mitigate this: term frequency-inverse document frequency (tf-idf) can be used to downweight this class of words in our feature vector representation. The tf-idf is the product of the term frequency and the inverse document frequency.
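
For reference, the weight assigned to a term t in a document d is

    tf-idf(t, d) = tf(t, d) × idf(t),   with   idf(t) = ln((1 + n) / (1 + df(t))) + 1

where tf(t, d) is how often t occurs in d, n is the total number of documents, and df(t) is the number of documents containing t. (The "+1" smoothing terms are scikit-learn's TfidfTransformer default, smooth_idf=True; the plain textbook definition is idf(t) = ln(n / df(t)).)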


Calculate TF-IDF of the Term 'Is'

We continue with the example from Task 3 and calculate the tf-idf of a term by hand. Next, we use scikit-learn’s TfidfTransformer to convert our sample text into a vector of tf-idf values and apply the L2-normalization to it.
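
A minimal sketch of this step, reusing the hypothetical sample sentences from the previous task (the project's exact sentences and printed values may differ):

    from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
    import numpy as np

    docs = ["The sun is shining",
            "The weather is sweet",
            "The sun is shining, the weather is sweet, and one and one is two"]

    count = CountVectorizer()
    tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)

    np.set_printoptions(precision=2)
    # fit_transform on the raw counts yields L2-normalized tf-idf vectors.
    print(tfidf.fit_transform(count.fit_transform(docs)).toarray())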


Data Preparation

Cleaning and preprocessing text data is a vital process in data analysis and especially in natural language processing tasks. In this task, we will take a look at a few reviews from our dataset and learn how to strip them of irrelevant characters like HTML tags, punctuation, and emojis using regular expressions.
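
As an illustration, a preprocessor along these lines could look like the sketch below (the exact regular expressions are an assumption; the project may use slightly different patterns):

    import re

    def preprocessor(text):
        # Remove HTML tags.
        text = re.sub(r'<[^>]*>', '', text)
        # Find emoticons such as :) ;-( =D so their sentiment signal is kept.
        emoticons = re.findall(r'(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
        # Drop remaining non-word characters, lowercase, and re-append emoticons without the nose '-'.
        text = (re.sub(r'[\W]+', ' ', text.lower()) +
                ' '.join(emoticons).replace('-', ''))
        return text

    print(preprocessor('</a>This :) is :( a test :-)!'))
    # -> 'this is a test :) :( :)'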


Tokenization of Documents

In this task, we learn how to represent our data as a collection of words, or tokens. We will also perform word-level preprocessing such as stemming. To accomplish this, we use the Natural Language Toolkit (nltk) in Python.
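
A minimal sketch of whitespace tokenization and Porter stemming with nltk (function names are illustrative):

    from nltk.stem.porter import PorterStemmer

    porter = PorterStemmer()

    def tokenizer(text):
        # Naive whitespace tokenization.
        return text.split()

    def tokenizer_porter(text):
        # Reduce each token to its stem, e.g. 'running' -> 'run'.
        return [porter.stem(word) for word in text.split()]

    print(tokenizer_porter('runners like running and thus they run'))
    # -> ['runner', 'like', 'run', 'and', 'thu', 'they', 'run']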


Document Classification Using Logistic Regression

First, we split our data into training and test sets of equal size. We then create a pipeline to build a logistic regression model. To estimate the best parameters and model, we employ a cross-validated grid search over a parameter grid.
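
A condensed sketch of this step, assuming the reviews live in a pandas DataFrame df with 'review' and 'sentiment' columns and reusing the tokenizer functions sketched above (the column names and parameter grid are illustrative, not the project's exact settings):

    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.linear_model import LogisticRegression
    from sklearn.feature_extraction.text import TfidfVectorizer

    # 50/50 split to mirror "training and test sets of equal size".
    X_train, X_test, y_train, y_test = train_test_split(
        df['review'], df['sentiment'], test_size=0.5, random_state=1)

    # Pipeline: raw text -> tf-idf features -> logistic regression.
    lr_tfidf = Pipeline([
        ('vect', TfidfVectorizer(strip_accents=None, lowercase=False)),
        ('clf', LogisticRegression(random_state=0, solver='liblinear')),
    ])

    # Illustrative parameter grid; the project's grid may differ.
    param_grid = [{
        'vect__ngram_range': [(1, 1)],
        'vect__tokenizer': [tokenizer, tokenizer_porter],
        'clf__C': [1.0, 10.0, 100.0],
    }]

    gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
                               scoring='accuracy', cv=5, n_jobs=-1)
    gs_lr_tfidf.fit(X_train, y_train)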


Load Saved Model from Disk

Although logistic regression models train quickly, estimating the best parameters for our model with GridSearchCV can take hours given the size of our training set. So in this task, we load a pre-trained model that we will later use to find the best parameter settings, cross-validation score, and test accuracy.
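
A minimal sketch of loading such a model with pickle, using a hypothetical filename (the project supplies its own pickled file):

    import pickle

    # Hypothetical path; replace with the file provided in the project environment.
    with open('saved_model.sav', 'rb') as f:
        saved_clf = pickle.load(f)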


Model Accuracy

In this final task, we take a look at the best parameter settings, cross-validation score, and how well our model classifies the sentiments of reviews it has never seen before from the test set.
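
In code, assuming the fitted GridSearchCV object from the previous task is named saved_clf (an assumed name) and X_test/y_test come from the earlier split, this boils down to something like:

    print('Best parameter set: %s' % saved_clf.best_params_)
    print('CV accuracy: %.3f' % saved_clf.best_score_)   # mean cross-validation accuracy
    print('Test accuracy: %.3f' % saved_clf.best_estimator_.score(X_test, y_test))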

Watch Preview

Preview the instructions that you will follow along in a hands-on session in your browser.


About the Host (Snehan Kekre)


Snehan hosts Machine Learning courses at Rhyme. He is in his senior year of university at the Minerva Schools at KGI, pursuing a double major in the Natural Sciences and Computational Sciences, with a focus on physics and machine learning. When not applying computational and quantitative methods to identify the structures shaping the world around him, he can sometimes be seen trekking in the mountains of Nepal.



Frequently Asked Questions


In Rhyme, all projects are completely hands-on. You don't just passively watch someone else; you use the software directly while following the host's (Snehan Kekre) instructions. Using the software is the only way to achieve mastery. With the "Live Guide" option, you can ask for help and get an immediate response.
Nothing! Just join through your web browser. Your host (Snehan Kekre) has already installed all required software and configured all data.
You can go to https://rhyme.com/for-companies, sign up for free, and follow the visual guide "How to use Rhyme" to create your own sessions. If you have custom needs or a company-specific environment, please email us at help@rhyme.com
Absolutely. We offer Rhyme for workgroups as well as larger departments and companies. Universities, academies, and bootcamps can also buy Rhyme for their settings. You can select the sessions and trainings that are mission-critical for you and also author your own that reflect your needs and tech environments. Please email us at help@rhyme.com
Please email us at help@rhyme.com and we'll respond to you within one business day.
