Project: Image Super Resolution using Autoencoders in Keras

Welcome to this hands-on project on Image Super Resolution using Autoencoders in Keras. In this project, you’re going to learn what autoencoders are, use Keras with TensorFlow as its backend to train your own autoencoder, and use this deep-learning-powered autoencoder to significantly enhance the quality of images. That is, our neural network will create high-resolution images from low-resolution source images.
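
To give a concrete taste of what you will build, here is a minimal sketch of a convolutional autoencoder for super resolution in Keras. The input shape, layer widths, and loss below are illustrative assumptions, not necessarily the exact architecture used in the project:

    # A minimal sketch of a convolutional autoencoder for super resolution,
    # assuming 256x256 RGB inputs; the project's actual architecture and
    # hyperparameters may differ.
    from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
    from tensorflow.keras.models import Model

    inputs = Input(shape=(256, 256, 3))

    # Encoder: compress the image into a smaller feature representation.
    x = Conv2D(64, (3, 3), activation='relu', padding='same')(inputs)
    x = MaxPooling2D((2, 2))(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
    encoded = MaxPooling2D((2, 2))(x)

    # Decoder: reconstruct (and sharpen) the image from the encoding.
    x = Conv2D(128, (3, 3), activation='relu', padding='same')(encoded)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)

    autoencoder = Model(inputs, decoded)
    autoencoder.compile(optimizer='adam', loss='mean_squared_error')
    autoencoder.summary()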

I’m almost certain that you have been introduced to the idea of super resolution in the past. Most commonly, you may have seen it in TV shows and movies about law enforcement going after criminals. It usually plays out like this: law enforcement has access to CCTV footage of the suspect or the suspect’s vehicle, but the video is too blurry, pixelated, or generally of low quality. So they run the footage through some fancy software that enhances the image quality, and suddenly you can clearly see the suspect’s face or read their license plate. When these shows were filmed, pre-2015, the technology was not yet mature and bordered on science fiction: images were upscaled with techniques like cubic and bicubic interpolation, which perform poorly. But now, thanks to deep learning, super resolution is very much a real technology, used by both private and government actors.

Beyond the hype of super resolution in sci-fi media, it has real applications: in many fields, such as astronomy and tomographic imaging, the acquired images contain artifacts and noise and are often of low resolution. These degradations often come from the limitations of the sensors.

Task List


We will cover the following tasks in 1 hour and 4 minutes:


Project Overview and Import Libraries


What are Autoencoders?


Build the Encoder


Build the Decoder to Complete the Network


Create Dataset and Specify Training Routine (see the sketch after this list)


Load the Dataset and Pre-trained Model


Model Predictions and Visualizing the Results
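
To make the "Create Dataset and Specify Training Routine" task concrete, here is a hedged sketch of how paired low- and high-resolution images might be prepared and fed to the autoencoder sketched above. The dataset path, image sizes, and training settings are placeholder assumptions, and OpenCV (cv2) is assumed to be available in the environment:

    # Hypothetical sketch of the training routine: pair down-sampled
    # (low-res) inputs with their original (high-res) targets.
    import glob
    import numpy as np
    import cv2  # OpenCV, assumed available in the project environment

    low_res, high_res = [], []
    for path in glob.glob('images/*.jpg'):  # placeholder dataset location
        img = cv2.imread(path)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        hi = cv2.resize(img, (256, 256)) / 255.0
        # Simulate a degraded source by shrinking, then re-enlarging.
        lo = cv2.resize(cv2.resize(img, (64, 64)), (256, 256)) / 255.0
        high_res.append(hi)
        low_res.append(lo)

    x = np.array(low_res, dtype='float32')
    y = np.array(high_res, dtype='float32')

    # Train the autoencoder sketched earlier to map low-res -> high-res.
    autoencoder.fit(x, y, batch_size=8, epochs=10, validation_split=0.1)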


About the Host (Snehan Kekre)


Snehan Kekre is a Machine Learning and Data Science Instructor at Coursera. He studied Computer Science and Artificial Intelligence at Minerva Schools at KGI, based in San Francisco. His interests include AI safety, EdTech, and instructional design. He recognizes that building a deep, technical understanding of machine learning and AI among students and engineers is necessary in order to grow the AI safety community. This passion drives him to design hands-on, project-based machine learning courses on Rhyme.



Frequently Asked Questions


How do Rhyme projects work?
In Rhyme, all projects are completely hands-on. You don't just passively watch someone else; you use the software directly while following the host's (Snehan Kekre) instructions. Using the software is the only way to achieve mastery. With the "Live Guide" option, you can ask for help and get an immediate response.

What do I need to install to join?
Nothing! Just join through your web browser. Your host (Snehan Kekre) has already installed all required software and configured all the data.

Is this session free?
Absolutely! Your host (Snehan Kekre) has provided this session completely free of cost!

How can I create my own projects on Rhyme?
You can go to https://rhyme.com, sign up for free, and follow the visual guide "How to use Rhyme" to create your own projects. If you have custom needs or a company-specific environment, please email us at help@rhyme.com.

Can I buy Rhyme for my team or organization?
Absolutely. We offer Rhyme for workgroups as well as larger departments and companies. Universities, academies, and bootcamps can also buy Rhyme for their settings. You can select projects and trainings that are mission-critical for you, as well as author your own that reflect your needs and tech environments. Please email us at help@rhyme.com.

What accessibility features does Rhyme offer?
Rhyme strives to ensure that its visual instructions are helpful for learners with reading impairments. The Rhyme interface has features like resolution and zoom that help with visual impairments, and we are currently developing closed-caption functionality to help with hearing impairments. Most of the accessibility options of the cloud desktop's operating system, or of the specific application, can also be used in Rhyme. If you have questions related to accessibility, please email us at accessibility@rhyme.com.

Why do projects run on cloud desktops?
We started with Windows and Linux cloud desktops because they offer the most flexibility for teaching any software, desktop or web. However, web applications like Salesforce can run directly through a virtual browser, and others like Jupyter and RStudio can run in containers accessed through virtual browsers. We are currently working on features so that such web applications won't need to run through cloud desktops; the rest of the Rhyme learning, authoring, and monitoring interfaces will remain the same.

What if I have other questions?
Please email us at help@rhyme.com and we'll respond within one business day.

Ready to join this 1-hour-and-4-minute session for free?
