# scikit-learn: Logistic Regression for Sentiment Analysis

In this project, we will learn the fundamentals of sentiment analysis and apply our knowledge to classify movie reviews as either positive or negative. We will use the popular IMDB dataset. Our goal is to use a simple logistic regression model from scikit-learn for document classification.

Rating: 5.0 / 5

We will cover the following tasks in 1 hour and 26 minutes:

### Introduction and Importing the Data

In this task, we get an overview of this project and get ourselves familiar with the popular IMDB movie review dataset.

### Transforming Documents into Feature Vectors

We will get a description of what logistic regression is and why we use it for sentiment analysis. Once we have a clear idea of the features and the model, we will encounter our first natural language processing concept: the bag-of-words model. From scikit-learn, we call the `fit_transform` method on `CountVectorizer`, which constructs the vocabulary of the bag-of-words model and transforms the provided sample sentences into sparse feature vectors.

### Term Frequency-Inverse Document Frequency

In information retrieval and text mining, we often observe words that crop up across our corpus of documents. These words can lead to bad performance during training and test time because they usually don't contain useful information. In this task, we will understand and implement a useful statistical technique to mitigate this: term frequency-inverse document frequency (tf-idf) can be used to downweight this class of words in our feature vector representation. The tf-idf is the product of the term frequency and the inverse document frequency.

### Calculate TF-IDF of the Term 'Is'

We continue with the example from Task 3 and calculate the tf-idf of a term by hand. Next, we use scikit-learn's `TfidfTransformer` to convert our sample text into a vector of tf-idf values and apply L2 normalization to it.

### Data Preparation

Cleaning and preprocessing text data is a vital process in data analysis and especially in natural language processing tasks. In this task, we will take a look at a few reviews from our dataset and learn how to strip them of irrelevant characters like HTML tags, punctuation, and emojis using regular expressions.
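A minimal sketch of such a preprocessor: it strips HTML tags, preserves emoticons, and removes other non-word characters. The regular expressions here are one common choice, not the only one, and the sample review is made up:

```python
import re

def preprocessor(text):
    # Drop HTML tags like <br />
    text = re.sub(r'<[^>]*>', '', text)
    # Capture emoticons such as :) :( :-D before they get stripped
    emoticons = re.findall(r'(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
    # Lowercase and replace all remaining non-word characters with spaces
    text = re.sub(r'[\W]+', ' ', text.lower())
    # Re-append the emoticons (without their optional "nose")
    return text + ' ' + ' '.join(emoticons).replace('-', '')

print(preprocessor('<br />This movie is :) GREAT!!'))
```

Keeping emoticons is a deliberate choice for sentiment analysis, since symbols like `:)` carry obvious sentiment signal that would otherwise be discarded as punctuation.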

### Tokenization of Documents

In this task, we learn how to represent our data as a collection of words or tokens. We will also perform word-level preprocessing tasks such as stemming. To accomplish this, we use the Natural Language Toolkit (`nltk`) in Python.
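Tokenization and Porter stemming can be sketched as below; the sample sentence is illustrative:

```python
from nltk.stem.porter import PorterStemmer

porter = PorterStemmer()

def tokenizer(text):
    """Split text on whitespace into raw tokens."""
    return text.split()

def tokenizer_porter(text):
    """Split text and reduce each token to its stem, e.g. 'running' -> 'run'."""
    return [porter.stem(word) for word in text.split()]

print(tokenizer('runners like running'))
print(tokenizer_porter('runners like running'))
```

Stemming maps inflected forms onto a common root, so 'running' and 'run' end up as the same feature, shrinking the vocabulary the model has to learn.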

### Document Classification Using Logistic Regression

First, we split our data into `training` and `test` sets of equal size. We then create a pipeline to build a logistic regression model. To estimate the best parameters and model, we employ cross-validated grid search over a parameter grid.
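The split, pipeline, and grid search can be sketched as follows. The toy corpus and the small parameter grid are illustrative stand-ins for the IMDB reviews and the project's actual grid:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline

# Toy corpus standing in for the IMDB reviews (1 = positive, 0 = negative)
X = ["great film", "terrible film", "loved the acting", "hated the acting",
     "a great story", "a terrible story", "loved every minute", "hated every minute"]
y = [1, 0, 1, 0, 1, 0, 1, 0]

# Equal-sized training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

# Vectorizer + classifier in one pipeline
pipe = Pipeline([('tfidf', TfidfVectorizer()),
                 ('clf', LogisticRegression(solver='liblinear'))])

# Cross-validated grid search over the regularization strength
param_grid = {'clf__C': [0.1, 1.0, 10.0]}
gs = GridSearchCV(pipe, param_grid, cv=2, scoring='accuracy')
gs.fit(X_train, y_train)

print('Best parameters:', gs.best_params_)
print('CV accuracy: %.3f' % gs.best_score_)
print('Test accuracy: %.3f' % gs.best_estimator_.score(X_test, y_test))
```

Putting the vectorizer inside the pipeline matters: the grid search then refits the vocabulary on each cross-validation fold, so no information leaks from the validation split into the features.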

### Load Saved Model from Disk

Although logistic regression models take little time to train, estimating the best parameters for our model using `GridSearchCV` can take hours given the size of our `training` set. So in this task, we load a pre-trained `model` that we will later use to find the best parameter settings, cross-validation score, and test accuracy.
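Saving and reloading a fitted model can be sketched with `pickle`; the filename `saved_model.sav` and the tiny stand-in model are assumptions for illustration, not the project's actual files:

```python
import pickle
from sklearn.linear_model import LogisticRegression

# Train a small stand-in model (the real project loads a pre-trained one)
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]
model = LogisticRegression().fit(X, y)

# Persist the fitted model to disk
with open('saved_model.sav', 'wb') as f:
    pickle.dump(model, f)

# Load it back; the reloaded object behaves like the original
with open('saved_model.sav', 'rb') as f:
    loaded = pickle.load(f)

print(loaded.predict(X))
```

Because the pickle stores the fitted coefficients and hyperparameters, the reloaded model predicts identically to the one that was saved, with no retraining.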

### Model Accuracy

In this final task, we take a look at the best parameter settings, the cross-validation score, and how well our `model` classifies the sentiments of reviews from the `test` set that it has never seen before.


## About the Host (Snehan Kekre)

Snehan Kekre is a Machine Learning and Data Science Instructor at Coursera. He studied Computer Science and Artificial Intelligence at Minerva Schools at KGI, based in San Francisco. His interests include AI safety, EdTech, and instructional design. He recognizes that building a deep, technical understanding of machine learning and AI among students and engineers is necessary in order to grow the AI safety community. This passion drives him to design hands-on, project-based machine learning courses on Rhyme.

##### How is this different from YouTube, PluralSight, Udemy, etc.?
In Rhyme, all projects are completely hands-on. You don't just passively watch someone else. You use the software directly while following the host's (Snehan Kekre) instructions. Using the software is the only way to achieve mastery. With the "Live Guide" option, you can ask for help and get an immediate response.
##### Is this session really free?
Absolutely! Your host (Snehan Kekre) has provided this session completely free of cost!
##### Can I buy Rhyme sessions for my company or learning institution?
Absolutely. We offer Rhyme for workgroups as well as larger departments and companies. Universities, academies, and bootcamps can also buy Rhyme for their settings. You can select projects and trainings that are mission-critical for you, as well as author your own that reflect your needs and tech environments. Please email us at help@rhyme.com.
##### What kind of accessibility options does Rhyme provide?
Rhyme strives to ensure that visual instructions are helpful for learners with reading impairments. The Rhyme interface has features like resolution and zoom that will be helpful for visual impairments. And we are currently developing closed-caption functionality to help with hearing impairments. Most of the accessibility options of the cloud desktop's operating system or the specific application can also be used in Rhyme. If you have questions related to accessibility, please email us at accessibility@rhyme.com.
##### Why don't you just use containers or virtual browsers?
We started with Windows and Linux cloud desktops because they have the most flexibility in teaching any software (desktop or web). However, web applications like Salesforce can run directly through a virtual browser. And others like Jupyter and RStudio can run on containers and be accessed through virtual browsers. We are currently working on features where such web applications won't need to run through cloud desktops. But the rest of the Rhyme learning, authoring, and monitoring interfaces will remain the same.
##### I have a different question
Please email us at help@rhyme.com and we'll respond to you within one business day.

## More Projects by Snehan Kekre

- Classify Toxic Comments using Convolutions with Keras (1 hour and 11 minutes)
- Project: Facial Expression Recognition in Keras (1 hour and 24 minutes)
- Project: Real-Time Object Detection with YOLOv3 (1 hour and 5 minutes)
- Project: Image Super Resolution using Autoencoders in Keras (1 hour and 4 minutes)
- Predictive Analytics for Business with H2O and R (1 hour and 6 minutes)
- Create Interactive Dashboards with Streamlit and Python (1 hour and 33 minutes)
- Project: Regression Analysis with Yellowbrick (1 hour and 10 minutes)
- Project: Named Entity Recognition using LSTMs with Keras (1 hour and 10 minutes)
- Build a Machine Learning Web App with Streamlit and Python (1 hour and 21 minutes)
- Logistic Regression with NumPy and Python (1 hour and 3 minutes)
- Explainable Machine Learning with LIME and H2O in R (1 hour and 24 minutes)
- Automatic Machine Learning with H2O AutoML (1 hour and 1 minute)
- Build a Data Science Web App with Streamlit and Python (1 hour and 21 minutes)
- Project: Support Vector Machines with scikit-learn (1 hour and 40 minutes)
- Project: Predict Employee Turnover with scikit-learn (1 hour and 4 minutes)
- Project: Predictive Modelling with Azure Machine Learning Studio (1 hour and 6 minutes)
- Project: Anomaly Detection in Time Series Data with Keras (1 hour and 3 minutes)
- Project: Generate Synthetic Images with DCGANs in Keras (1 hour and 7 minutes)
- Project: Perform Feature Analysis with Yellowbrick (1 hour and 15 minutes)