Google Cloud AI: End to End Deep Learning Part 1

Welcome to the Google Cloud AI: End to End Deep Learning project! In this series, we will create a face verification system using deep learning. In this first project of a three-project series on end-to-end deep learning, we will create a pre-processed dataset that we will use to train our model in the next project. Eventually, we will create and train the model and deploy it to Google Cloud for inference.


Task List


We will cover the following tasks in 60 minutes:


Introduction and Setup

Welcome to this end-to-end deep learning project. We will create a face verification system using deep learning, and in this first project of a three-project series, we will create the dataset that we will use to train our model later. Before we get started, we will also install a couple of packages that we will use in this project.


Downloading the LFW Data

Labeled Faces in the Wild, or LFW, is a popular dataset containing thousands of face images of thousands of celebrities. We are going to use the deep-funneled version of this dataset for our project. The deep-funneled version is pre-processed to some degree: faces are aligned and positioned correctly, making our job a little easier. We will also need the names of all the classes in this dataset. The images in the dataset are already organized into per-class folders inside a tar file. In this task, we will extract this tar file once it's downloaded. We will also take a look at the classes in the lfw-names.txt file.
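The download-and-extract step can be sketched as below. The URLs are the publicly documented LFW download locations; treat the exact paths and the extracted folder name as assumptions in case they have moved.

```python
# Hedged sketch: download the deep-funneled LFW archive and extract it,
# yielding one folder per class (person). URLs are assumptions based on
# the dataset's public hosting.
import tarfile
import urllib.request
from pathlib import Path

LFW_URL = "http://vis-www.cs.umass.edu/lfw/lfw-deepfunneled.tgz"
NAMES_URL = "http://vis-www.cs.umass.edu/lfw/lfw-names.txt"

def download_and_extract(url: str, dest: Path) -> Path:
    """Download a tar archive into dest (if not already there) and extract it."""
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / Path(url).name
    if not archive.exists():
        urllib.request.urlretrieve(url, archive)
    with tarfile.open(archive) as tar:
        tar.extractall(dest)
    # The archive unpacks into a folder named after itself.
    return dest / "lfw-deepfunneled"
```

In the notebook you would call `download_and_extract(LFW_URL, Path("data"))` once and reuse the extracted folder afterwards.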


A Look at the LFW Data

Let’s import our text file into the notebook and create a list of all the classes that have more than one image in our dataset. This is because we want to use a loss function called triplet loss, which needs at least two image examples of every class. Once we have selected the classes that have more than one example in the dataset, we will also take a look at some of the images, both to familiarize ourselves with the image data and to see the effect of deep funneling.
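The filtering step above can be sketched as follows, assuming lfw-names.txt has one "name&lt;whitespace&gt;image_count" pair per line (its documented format):

```python
# Minimal sketch: keep only classes with more than one image, since
# triplet loss needs both an anchor and a positive example per class.
def classes_with_multiple_images(lines):
    """Return class names that have at least two images."""
    selected = []
    for line in lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip blank or malformed lines
        name, count = parts[0], int(parts[1])
        if count > 1:
            selected.append(name)
    return selected

sample = ["Aaron_Eckhart\t1", "Aaron_Peirsol\t4", "Abdullah_Gul\t19"]
print(classes_with_multiple_images(sample))  # ['Aaron_Peirsol', 'Abdullah_Gul']
```

In the notebook, `lines` would come from `open("lfw-names.txt")` rather than a hand-written sample.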


Setup GPU and CUDA Drivers

Before we can continue, we will need to add a GPU to our Notebook instance and install the CUDA drivers. This is because we are using a pre-existing package to detect faces in our images which, in turn, uses the dlib package, which runs on the GPU. Working with GCP actually makes it much simpler to set up the GPU and CUDA drivers, which are notoriously difficult to install and configure. So, we are in luck in that sense! Once everything is set up, we will move on to the next task.
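One generic way to confirm the GPU and driver became visible after setup is to query nvidia-smi from Python. This check is an illustration, not a step from the project itself:

```python
# Hedged sketch: report whether the NVIDIA driver sees a GPU by calling
# the nvidia-smi CLI, which ships with the CUDA driver installation.
import shutil
import subprocess

def gpu_visible() -> bool:
    """Return True if nvidia-smi is installed and exits successfully."""
    if shutil.which("nvidia-smi") is None:
        return False
    result = subprocess.run(["nvidia-smi"], capture_output=True)
    return result.returncode == 0

print("GPU ready:", gpu_visible())
```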


TensorFlow GPU and Helper Functions

Once the GPU is set up, we will need to install the tensorflow-gpu package, which is the GPU build of TensorFlow. We need this package for TensorFlow to take advantage of the GPU that we just added to our Notebook instance. Next, we will import the packages that we will need during this project, along with some helper functions from the Keras image pre-processing library. Let’s also create a few directories to store the dataset of NumPy arrays that we plan on creating.
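A setup cell in this spirit might look like the sketch below. The directory names are illustrative, not necessarily the exact ones used in the project, and the Keras import path shown in the comment reflects the TF 1.x era of this material:

```python
# Hedged sketch of the setup cell: NumPy plus output directories for the
# triplet arrays. Directory names are assumptions for illustration.
import os
import numpy as np

# Keras image helpers were typically imported like this at the time:
#   from keras.preprocessing.image import load_img, img_to_array
# With modern TensorFlow the equivalents live under tf.keras.

for folder in ("dataset/anchors", "dataset/positives", "dataset/negatives"):
    os.makedirs(folder, exist_ok=True)

print(sorted(os.listdir("dataset")))  # ['anchors', 'negatives', 'positives']
```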


Create Dataset Part 1

We will define a function called get_image which will extract just the face out of an image example, given the class name and an image number for that class. This function will return the extracted face from any given image. We will also resize the cropped face image. Additionally, we will create a function to plot a single example of an image triplet. We will later use this function to print out some of the examples as we convert our triplet image examples to triplets of NumPy arrays.
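The crop-and-resize half of get_image can be sketched as below. The face box itself would come from a detector such as the face_recognition package (which wraps dlib); the detector call is shown only in a comment so the sketch stays self-contained, and the 96x96 target size is an assumption for illustration:

```python
# Hedged sketch of the cropping step inside get_image: cut a detected
# face box out of an image array and resize it to a fixed size.
import numpy as np
from PIL import Image

def crop_and_resize(image: np.ndarray, box, size=(96, 96)) -> np.ndarray:
    """Crop a (top, right, bottom, left) face box and resize to `size`."""
    top, right, bottom, left = box
    face = Image.fromarray(image[top:bottom, left:right])
    return np.array(face.resize(size))

# In the real get_image, the box comes from the detector, e.g.:
#   boxes = face_recognition.face_locations(image)
fake = np.zeros((250, 250, 3), dtype=np.uint8)
print(crop_and_resize(fake, (60, 190, 190, 60)).shape)  # (96, 96, 3)
```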


Create Dataset Part 2

Let’s define the size for our cropped-out faces. In this task, we finally create our dataset of NumPy arrays. Inside a for loop, we will select an anchor, a positive example of the same class as the anchor, and a negative example from a different class. The loop runs for the total number of classes which have at least two image examples in the dataset, ultimately creating the same number of triplets. Of course, we could create more triplets, because many classes have more than two examples, but let’s keep things relatively simple in this project. We are creating a dataset of NumPy arrays after not only cropping the face images but also pre-processing them with the help of Keras’ image pre-processing helper functions. This will be a huge time saver when we train the model later.
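The triplet-selection loop described above can be sketched as follows; the function and variable names here are illustrative, not the project's exact ones:

```python
# Hedged sketch of triplet selection: for each class with at least two
# images, pick an anchor and positive from that class and a negative
# from a different class.
import random

def make_triplets(class_to_images):
    """class_to_images maps class name -> list of image ids (len >= 2)."""
    triplets = []
    classes = list(class_to_images)
    for name in classes:
        anchor, positive = random.sample(class_to_images[name], 2)
        other = random.choice([c for c in classes if c != name])
        negative = random.choice(class_to_images[other])
        triplets.append((anchor, positive, negative))
    return triplets

data = {"A": ["a1", "a2"], "B": ["b1", "b2", "b3"]}
print(len(make_triplets(data)))  # 2, one triplet per class
```

In the actual project, each id would be fed through the face-cropping and Keras pre-processing steps before the triplet is saved as NumPy arrays.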


Wrapping Up

Let’s take a look at the newly created dataset of NumPy arrays. Sometimes face detection may fail in the previous step, though it happens rarely. Let’s check how many triplet examples were actually created. We will also load one of the saved NumPy files to make sure our dataset creation worked correctly.
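A sanity check in this spirit might look like the sketch below. The file name pattern and the 96x96 shape are assumptions for illustration:

```python
# Hedged sketch of the wrap-up check: count the saved triplet files and
# load one back to confirm the arrays round-trip correctly.
import glob
import numpy as np

# Stand-in for one saved triplet: anchor, positive, negative stacked.
np.save("triplet_0000.npy", np.zeros((3, 96, 96, 3), dtype=np.float32))

files = sorted(glob.glob("triplet_*.npy"))
print("triplets on disk:", len(files))

loaded = np.load(files[0])
print("shape:", loaded.shape)  # (3, 96, 96, 3)
```

A count lower than the number of selected classes would indicate the triplets whose face detection failed.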

Watch Preview

Preview the instructions that you will follow along with in a hands-on session in your browser.

Amit Yadav

About the Host (Amit Yadav)


I am a machine learning engineer with a focus on computer vision and sequence modelling for automated signal processing using deep learning techniques. My previous experience includes leading chatbot development for a large corporation.



Frequently Asked Questions


In Rhyme, all projects are completely hands-on. You don't just passively watch someone else. You use the software directly while following the host's (Amit Yadav) instructions. Using the software is the only way to achieve mastery. With the "Live Guide" option, you can ask for help and get an immediate response.
Nothing! Just join through your web browser. Your host (Amit Yadav) has already installed all required software and configured all data.
You can go to https://rhyme.com/for-companies, sign up for free, and follow this visual guide How to use Rhyme to create your own projects. If you have custom needs or company-specific environment, please email us at help@rhyme.com
Absolutely. We offer Rhyme for workgroups as well as larger departments and companies. Universities, academies, and bootcamps can also buy Rhyme for their settings. You can select projects and trainings that are mission-critical for you and also author your own that reflect your needs and tech environments. Please email us at help@rhyme.com
Rhyme strives to ensure that visual instructions are helpful for reading impairments. The Rhyme interface has features like resolution and zoom that will be helpful for visual impairments. And, we are currently developing a closed-caption functionality to help with hearing impairments. Most of the accessibility options of the cloud desktop's operating system or the specific application can also be used in Rhyme. If you have questions related to accessibility, please email us at accessibility@rhyme.com
We started with Windows and Linux cloud desktops because they have the most flexibility in teaching any software (desktop or web). However, web applications like Salesforce can run directly through a virtual browser. And, others like Jupyter and RStudio can run on containers and be accessed by virtual browsers. We are currently working on features where such web applications won't need to run through cloud desktops. But, the rest of the Rhyme learning, authoring, and monitoring interfaces will remain the same.
Please email us at help@rhyme.com and we'll respond to you within one business day.

Ready to join this 60-minute session?

More Projects by Amit Yadav


Amazon SageMaker: Custom Scripts
1 hour and 14 minutes