We will cover the following tasks in 55 minutes:
In this project we will create and train a neural network model to classify movie reviews from IMDB as either positive or negative. In other words, given the text of a review, the model should predict whether its overall sentiment is positive or negative. We also take a quick look at the Rhyme interface.
The IMDB Reviews Dataset
We will import the data we’ll be working with, which is easily accessible in Keras. Once we load the data, we unpack it into a training set and a test set, each with 25,000 examples.
When we load this dataset, we will set the number of words to 10,000. This means that only the 10,000 most common words in the vocabulary will be kept, and the rest will be ignored.
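The loading step can be sketched as follows, using the IMDB dataset bundled with tf.keras:

```python
from tensorflow.keras.datasets import imdb

# num_words=10000 keeps only the 10,000 most frequent words;
# rarer words are dropped from the encoded reviews.
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=10000)

print(len(x_train), len(x_test))  # 25000 25000
```

Each example is a list of integers, where each integer encodes one word of the review.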
Decoding the Reviews
Let’s decode our numeric representation of the examples back into text. We will still use the encoded examples we already loaded when training our model; the decoding is just for our reference, so that we can read a couple of reviews and check whether their labels make sense to us.
For the decoding, we will need to create a dictionary with key-value pairs like the word index we imported in the previous task, except this new dictionary should have the indices as keys and the words as values.
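As a sketch, inverting the word index and decoding a review might look like this (the offset of 3 is the standard Keras convention, since indices 0–3 are reserved for padding, start, and unknown tokens):

```python
from tensorflow.keras.datasets import imdb

word_index = imdb.get_word_index()

# Invert the mapping: indices become keys, words become values.
reverse_word_index = {index: word for word, index in word_index.items()}

def decode(encoded_review):
    # Indices 0-3 are reserved tokens, so actual words are offset by 3;
    # unknown indices are shown as '?'.
    return ' '.join(reverse_word_index.get(i - 3, '?') for i in encoded_review)
```

For example, `decode(x_train[0])` would print the first training review as readable text.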
Padding the Examples
Let’s try to imagine how our neural network might look at this problem. We have a bunch of words as our input features and we want the network to predict, based on the features, if a particular set of features is a negative review or positive review. So, as it trains, it will start to assign some “meaning” to certain words which occur often in certain types of reviews. Maybe a word like “wonderful” will influence the network into thinking that the review is more positive, maybe a word like “terrible” will influence the network into thinking that the review is more negative. So, as it trains, it will assign how much influence and what influence various words in our vocabulary will have on the output.
Then, there’d be certain words like “a” or “the” which will seem pretty meaningless for this task: they are articles and carry little sentiment information on their own.
Since the reviews must all be the same length before they can be fed to the network, we can use one of these low-information tokens to pad the shorter reviews out to a fixed length.
Word Embeddings
An embedding layer will try to find relationships between words. We have to specify the number of words we want embeddings for, in this case 10,000, and the number of features we are trying to learn for each word. All the words are then represented as these feature vectors. Say we ask the model to learn 16 features from the training text examples we feed it: the embedding layer will learn a 10,000 by 16 dimensional word embedding, where each word has a feature representation of 16 values.
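To make the shapes concrete, here is a minimal sketch of an embedding layer on its own (the toy batch of token indices is made up for illustration):

```python
import numpy as np
from tensorflow.keras.layers import Embedding

# 10,000-word vocabulary, 16 learned features per word:
# the layer holds a 10,000 x 16 weight matrix.
emb = Embedding(input_dim=10000, output_dim=16)

# A batch of 2 reviews, each 5 tokens long, maps to shape (2, 5, 16):
out = emb(np.array([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]))
print(out.shape)
```

Each token index is looked up in the weight matrix and replaced by its 16-value feature vector.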
Creating and Training the Model
This is quite straightforward. We will use the Sequential class from Keras, and we will also import a few layers that we will need. We know from the previous task that we need an Embedding layer (we will use 16 dimensions for the feature representations). Next, a GlobalAveragePooling layer averages each review’s sequence of 16-dimensional word vectors into a single 16-dimensional vector per example, which can then be fed into a Dense layer with a rectified linear unit (ReLU) activation. Finally, we have another Dense layer with a sigmoid activation function - this activation gives us a binary classification output for the two classes that we have.
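Putting the layers together, the model described above can be sketched like this (the optimizer and loss are the usual choices for binary classification, though the course may configure them differently):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense

model = Sequential([
    Embedding(10000, 16),            # 10,000-word vocab, 16 features per word
    GlobalAveragePooling1D(),        # average over the sequence -> 16 values per review
    Dense(16, activation='relu'),
    Dense(1, activation='sigmoid'),  # probability that the review is positive
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# One padded dummy review of length 256 maps to a single probability:
print(model.predict(np.zeros((1, 256), dtype='int32'), verbose=0).shape)
```

The single sigmoid output is interpreted as the probability of the positive class, so a value above 0.5 means a positive review.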
Predictions and Evaluation
We will use matplotlib to display the accuracy of our model during training, for both the training set and the validation set. Remember that we split 20% of the training set off as a validation set when we trained the model.
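As a runnable sketch of the training and plotting step (using a tiny synthetic stand-in for the padded IMDB data so it runs quickly; in the project, the real padded reviews from the earlier steps are used, and the epoch count and batch size here are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense

# Synthetic stand-in for the padded reviews and labels.
x = np.random.randint(0, 10000, size=(200, 256))
y = np.random.randint(0, 2, size=(200,))

model = Sequential([
    Embedding(10000, 16),
    GlobalAveragePooling1D(),
    Dense(16, activation='relu'),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# validation_split=0.2 holds out 20% of the training data for validation.
history = model.fit(x, y, validation_split=0.2,
                    epochs=5, batch_size=32, verbose=0)

# Plot training vs. validation accuracy per epoch.
epochs = range(1, len(history.history['accuracy']) + 1)
plt.plot(epochs, history.history['accuracy'], label='Training accuracy')
plt.plot(epochs, history.history['val_accuracy'], label='Validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```

The `history` object returned by `fit` records the per-epoch metrics that the plot draws from.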
We will also take a look at a prediction and its corresponding review in text.
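A minimal sketch of the prediction step (using an untrained model with the same architecture and a random padded review, just to show the call; in the project, the trained model and a real test review are used, and the decoded text can be printed alongside with the `decode` helper from the earlier task):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense

model = Sequential([
    Embedding(10000, 16),
    GlobalAveragePooling1D(),
    Dense(16, activation='relu'),
    Dense(1, activation='sigmoid'),
])

review = np.random.randint(0, 10000, size=(1, 256))  # one padded review
p = float(model.predict(review, verbose=0)[0][0])    # sigmoid output in [0, 1]
print('Positive' if p > 0.5 else 'Negative')
```

The sigmoid output is thresholded at 0.5 to turn the probability into a class label.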
About the Host (Amit Yadav)
I have been writing code since 1993, when I was 11, and my first passion project was database management software that I wrote for a local hospital. More recently, I wrote an award-winning education chatbot for a multi-billion-revenue company. I solved a recurring problem for my client: they wanted to make basic cyber-safety and privacy education accessible to their users. This bot enabled my client to reach their customers with personalised, real-time education. Over the last year, I’ve continued my interest in this field by constantly learning and growing in Machine Learning, NLP and Deep Learning. I'm very excited to share my experience and learnings with you with the help of Rhyme.com.