TensorFlow (Advanced): Image Noise Reduction with Autoencoders

An autoencoder is a neural network that learns to reduce the dimensionality of data. It can be used for lossy data compression, where the compression is learned from, and therefore specific to, the given data. Because the network learns a compact representation of the data, it can also be used to reduce noise in data - and that's the example we are going to look at in this project.


Task List


We will cover the following tasks in 46 minutes:


Introduction and Importing Libraries

We will import the libraries and helper functions that we will need during the course of this project. We will also learn a little bit about the Rhyme interface and the prerequisites for this project.


Data Preprocessing

For this project, we are using the popular MNIST dataset. This dataset has 60,000 examples of handwritten digit images in the training set and 10,000 examples in the test set. The examples are grayscale 28x28 images - that is, 28 rows and 28 columns of pixels each. The labels are simply digits corresponding to the 10 classes from 0 to 9.

We will create two neural network models in this project: one will be trained to classify these handwritten digits, and the other will be used to de-noise input data. This second model is our Autoencoder. Eventually, we will connect the two models and have them work in conjunction as a single, composite model. In order to feed the examples to our two models, we will do a little bit of preprocessing on them.
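The preprocessing described above might look something like the sketch below: scale pixel values to [0, 1] and flatten each 28x28 image into a 784-dimensional vector so it can feed a dense network. The variable names are assumptions for illustration; the exact steps in the project may differ slightly.

```python
import tensorflow as tf

# Load MNIST: 60,000 training and 10,000 test examples of 28x28 images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Scale pixel values from [0, 255] to [0, 1].
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0

# Flatten each 28x28 image into a 784-dimensional vector.
x_train = x_train.reshape(-1, 784)
x_test = x_test.reshape(-1, 784)
```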


Adding Noise

We are artificially adding some noise to our training and test examples. You may wonder - why synthesize the noise to train the Autoencoder? This is because in real world applications, while we will often get noisy data, we will not have the corresponding clean labels. Instead, when we synthesize noise on already clean images, we can train an Autoencoder to focus on the important parts of the images and then when it’s applied to real world noisy data, it knows where to focus and which features to retain.
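A common way to synthesize such noise is to add zero-mean Gaussian noise and clip back to the valid pixel range, as in this sketch. The noise level of 0.5 is an assumed, tunable value, and the random array here is a small stand-in for the normalized MNIST images from the previous step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the normalized, flattened training images (values in [0, 1]);
# in the project these would come from the preprocessing step.
x_train = rng.random((1000, 784)).astype('float32')

noise_level = 0.5  # assumed noise strength - a tunable choice, not a fixed value

# Add zero-mean Gaussian noise, then clip back into the valid pixel range [0, 1].
x_train_noisy = np.clip(
    x_train + noise_level * rng.normal(size=x_train.shape), 0.0, 1.0
).astype('float32')
```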


Building and Training a Classifier

In this task, we will create a classifier and train it to classify handwritten digit images. We will use a very straightforward neural network with two hidden layers: fully connected, or dense, layers with 256 nodes each. The output layer has 10 nodes for the 10 classes and, of course, a softmax function to get the probability scores for the various classes.

One tricky part here is that we need to use the sparse categorical crossentropy loss instead of the categorical crossentropy loss we would have used if the labels were one-hot encoded. Since the labels are numerical values from 0 to 9, with a single integer for each example, sparse categorical crossentropy is the right choice. Of course, we could have one-hot encoded our labels beforehand and used the categorical crossentropy loss instead.
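The classifier described above can be sketched as follows. The ReLU activations and the Adam optimizer are assumptions for illustration; the layer sizes, the softmax output, and the sparse categorical crossentropy loss follow the description.

```python
import tensorflow as tf

# Two dense hidden layers with 256 nodes each; softmax output over 10 classes.
classifier = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Sparse categorical crossentropy, because the labels are integers 0-9
# rather than one-hot vectors.
classifier.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'],
)
```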


Building the Autoencoder

In order to reduce noise in our data, we want to create a model - the Autoencoder - which takes a noisy example as input and the original, corresponding example as the label. Now, if one or more hidden layers in this neural network have far fewer nodes than the input and output, then the training process will force the network to learn a function similar to principal component analysis, essentially reducing dimensionality.

Another thing to note is that the output layer has a sigmoid activation. Higher linear values in the last layer will be pushed towards the maximum normalised pixel value of 1, and lower linear values will converge towards the minimum normalised pixel value of 0. This choice of activation makes sense given that the input examples are essentially black and white images. There is some scope for intermediate pixel values, but with sigmoid most of the values will converge to either 0 or 1, and that works well for us.
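A minimal sketch of such an autoencoder is shown below. The 64-node bottleneck, the ReLU activation, and the binary crossentropy loss are all assumptions (the bottleneck size in particular is a tunable choice); the narrow hidden layer and the sigmoid output follow the description above.

```python
import tensorflow as tf

# A dense autoencoder: the 64-node bottleneck (an assumed size) has far
# fewer nodes than the 784-dimensional input and output, forcing the
# network to learn a compressed representation, similar in spirit to PCA.
autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(64, activation='relu'),      # encoder / bottleneck
    tf.keras.layers.Dense(784, activation='sigmoid'),  # decoder; sigmoid keeps
                                                       # outputs in [0, 1]
])

# Binary crossentropy is one common reconstruction loss for pixel values
# in [0, 1]; mean squared error would be a reasonable alternative.
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
```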


Training the Autoencoder

We will use the noisy training set examples as our inputs, and the original training set examples - the ones without any noise - as our labels, so that the Autoencoder learns de-noising. Let's set the epochs to a somewhat high number, 100 in this case, because we are going to use the early stopping callback.

Let's use a batch size slightly higher than usual; it will help speed up the training. We will also use a lambda callback to log just the validation loss for each epoch, and set verbose to False because that per-epoch validation loss is all we want to see.
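The training setup described above might look like this sketch. The small random arrays are stand-ins for the noisy and clean MNIST images, and the patience value, batch size of 512, and validation split are assumptions for illustration; the 100 epochs, early stopping, lambda callback, and verbose=False follow the description.

```python
import numpy as np
import tensorflow as tf

# Stand-in data; in the project these are the noisy and clean MNIST images.
rng = np.random.default_rng(0)
x_clean = rng.random((256, 784)).astype('float32')
x_noisy = np.clip(x_clean + 0.5 * rng.normal(size=x_clean.shape),
                  0.0, 1.0).astype('float32')

autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(784, activation='sigmoid'),
])
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

callbacks = [
    # Stop once the validation loss stops improving (patience is assumed).
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5),
    # Log only the validation loss for each epoch.
    tf.keras.callbacks.LambdaCallback(
        on_epoch_end=lambda epoch, logs: print(epoch, logs['val_loss'])),
]

# Noisy examples as inputs, clean examples as labels; verbose=False so the
# lambda callback's per-epoch validation loss is the only output.
history = autoencoder.fit(
    x_noisy, x_clean,
    epochs=100, batch_size=512,
    validation_split=0.2,
    callbacks=callbacks, verbose=False,
)
```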


Denoised Images

Now that the Autoencoder is trained, let’s put it to use. In order to get our de-noised images, say for our test data, all we have to do is pass the noisy data through the Autoencoder! Let’s use the predict method on our model to get the results.

We will also pass the de-noised images through our classifier and this time, it should perform significantly better.
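Getting the de-noised images is a single predict call, as sketched below. The random array is a stand-in for the noisy test images, and the autoencoder here is untrained for self-containedness; in the project it would be the trained model from the previous task.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Stand-in for the noisy test images.
x_test_noisy = rng.random((10, 784)).astype('float32')

# Untrained stand-in; in the project this is the trained Autoencoder.
autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(784, activation='sigmoid'),
])

# Passing noisy data through the Autoencoder yields the de-noised images;
# the sigmoid output keeps every pixel in [0, 1].
denoised = autoencoder.predict(x_test_noisy, verbose=0)
```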


Composite Model

Let's create a composite model to complete our entire prediction pipeline. That is, we want a model into which we can simply feed a noisy image; the model will first reduce the noise in that image and then run the de-noised output through the Classifier to get the class prediction. The idea is that even if our incoming data in a production setting is noisy, our classifier should still work well because of the noise reduction from the Autoencoder.
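One way to chain the two models is with the Keras functional API, as in this sketch. Both submodels are untrained stand-ins here for self-containedness; in the project they would be the trained Autoencoder and Classifier.

```python
import tensorflow as tf

# Untrained stand-ins for the trained Autoencoder and Classifier.
autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(784, activation='sigmoid'),
])
classifier = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Chain them: noisy input -> de-noised image -> class probabilities.
inp = tf.keras.Input(shape=(784,))
composite = tf.keras.Model(inp, classifier(autoencoder(inp)))
```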



About the Host (Amit Yadav)


I am a Software Engineer with many years of experience in writing commercial software. My current areas of interest include computer vision and sequence modelling for automated signal processing using deep learning as well as developing chatbots.



Frequently Asked Questions


In Rhyme, all projects are completely hands-on. You don't just passively watch someone else. You use the software directly while following the host's (Amit Yadav) instructions. Using the software is the only way to achieve mastery. With the "Live Guide" option, you can ask for help and get an immediate response.
Nothing! Just join through your web browser. Your host (Amit Yadav) has already installed all required software and configured all data.
You can go to https://rhyme.com/for-companies, sign up for free, and follow this visual guide How to use Rhyme to create your own projects. If you have custom needs or company-specific environment, please email us at help@rhyme.com
Absolutely. We offer Rhyme for workgroups as well as larger departments and companies. Universities, academies, and bootcamps can also buy Rhyme for their settings. You can select projects and trainings that are mission critical for you and, as well, author your own that reflect your own needs and tech environments. Please email us at help@rhyme.com
Rhyme's visual instructions are somewhat helpful for reading impairments. The Rhyme interface has features like resolution and zoom that are slightly helpful for visual impairment. And, we are currently developing closed-caption functionality to help with hearing impairment. Most of the accessibility options of the cloud desktop's operating system or the specific application can also be used in Rhyme. However, we still have a lot of work to do. If you have suggestions for accessibility, please email us at accessibility@rhyme.com
Please email us at help@rhyme.com and we'll respond to you within one business day.

