Google Cloud AI: End to End Deep Learning Part 3

Welcome to Google Cloud AI: End to End Deep Learning Part 3! In this series, we are working towards creating a face verification system using deep learning. In this third project of a three-project series on end to end deep learning, we will take the model trained in the previous project and deploy it on the Google Cloud AI Platform, so that we can finally run inferences on a deployed model using a REST API call.


Task List


We will cover the following tasks in 52 minutes:


Introduction

In this task, we will launch the Notebook instance where we created and trained our model in the previous project. This instance has a GPU attached to it. We will also need to create a bucket on Google Cloud Storage, where we will save our trained model before running inference on it. Let's create a new notebook where we will write the deployment and inference code for this project. We will also use the googleapiclient library to call the deployed model.
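Since the project runs in a Jupyter notebook, the bucket can be created with a shell command in a notebook cell. This is just a sketch: the bucket name and region below are placeholders, not values from the project.

    # The "!" prefix runs a shell command from a notebook cell.
    # Bucket name and region are placeholders; pick your own.
    !gsutil mb -l us-central1 gs://your-model-bucket/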


Create a SavedModel

In this task, we will load the model that we trained in the previous project and note the name of its input. This matters because the input name will be used as a key in the object that we will later pass as input to the deployed model. We will also need to convert our model to what is called the SavedModel format, using the save_keras_model helper function.
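As a rough sketch, assuming TensorFlow 1.x (where save_keras_model lived under tf.contrib) and a placeholder file name for the trained model, the conversion can look like this:

    import tensorflow as tf

    # Load the Keras model trained in the previous project
    # ("model.h5" is a placeholder file name).
    model = tf.keras.models.load_model('model.h5')

    # Note the input name; it becomes the key in the JSON instances later.
    print(model.input.name)

    # Export to the SavedModel format expected by AI Platform.
    # save_keras_model is the TF 1.x helper; it returns the export path,
    # typically a timestamped subfolder of 'saved_model'.
    export_path = tf.contrib.saved_model.save_keras_model(model, 'saved_model')
    print(export_path)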


Saved Model to Cloud Storage

We will now transfer our model, which is now in the saved_model folder, to Google Cloud Storage. We will transfer the entire folder because, when running inference, GCP needs not only the .pb file but also the assets and variables folders created alongside it when the model is exported to this format. We already created a bucket in the first task to store this model folder.
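A recursive copy from a notebook cell does the job; the bucket name below is a placeholder:

    # Copy the whole SavedModel directory (saved_model.pb plus the
    # assets/ and variables/ folders) to the bucket.
    !gsutil -m cp -r saved_model gs://your-model-bucket/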


Prepare Input for Inference

In order to prepare the inputs for inference, we will create a create_input function. In our example, we just want to use the anchor, positive, and negative arrays that we already created in the first project in this series. We will use our model input's name as the key when creating a JSON object that we will need for inference. Note that we will need to convert the multi-dimensional arrays to a list format for the JSON object to work. Then, we save the JSON object on disk. We will use this object in one of the next tasks.
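A minimal sketch of such a function follows. The default input name 'input_1' and the anchors array are assumptions; use the input name printed from the model earlier and the arrays built in the first project.

    import json

    def create_input(image_array, file_name, input_name='input_1'):
        # JSON cannot store NumPy arrays, so convert to nested lists.
        instance = {input_name: image_array.tolist()}
        with open(file_name, 'w') as f:
            json.dump(instance, f)

    # Example: write the anchor image of the first triplet to disk.
    create_input(anchors[0], 'anchor.json')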


Deploy the Model

Now, we will deploy our model! The model is already saved in the right format in a Google Cloud Storage bucket, which is a requirement for Google AI Platform model deployment. We will create a new version for our Google AI Platform model and point that version to the location of our model folder on Cloud Storage. Once the deployment is complete, go back to the Notebook instance and use the create_input function to create some JSON objects that we can use as inputs when we run inference on the deployed model.
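With the gcloud ml-engine commands in use at the time, the deployment can be sketched as follows; the model name, version, bucket path, and runtime version are placeholders:

    # Create the AI Platform model resource (a container for versions).
    !gcloud ml-engine models create face_verification --regions us-central1

    # Create a version pointing at the folder that directly contains
    # saved_model.pb (the timestamped export folder in the bucket).
    !gcloud ml-engine versions create v1 \
        --model face_verification \
        --origin gs://your-model-bucket/saved_model \
        --runtime-version 1.13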


Run Inference

We will first run an inference from the terminal using gcloud ml-engine. Once this works and a list of numbers is returned, we know that everything is set up correctly. We can then start working on a method that makes a similar call programmatically, but is also able to use the information that is returned (the feature vectors) and compare the Euclidean distances between the various embedding vectors.
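The terminal smoke test can look like this; the model, version, and file names follow the placeholders used above:

    # --json-instances expects one JSON object per line, which is
    # exactly what create_input wrote to disk.
    !gcloud ml-engine predict \
        --model face_verification \
        --version v1 \
        --json-instances anchor.json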


Predict Function

Let's define a function called predict_json, which will take a JSON file path as an argument. We need a service object in order to make a query using the googleapiclient. We will need to create a JSON object instance by reading the JSON object we saved to disk earlier. Once we make a request to the deployed model via the Google Cloud AI Platform, a response will be sent back. This response holds the predicted embedding vector under the key predictions.
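A sketch of predict_json, with the project, model, and version names as placeholders:

    import json
    from googleapiclient import discovery

    def predict_json(json_path, project='your-project-id',
                     model='face_verification', version='v1'):
        # Build a client for the AI Platform ('ml') prediction API.
        service = discovery.build('ml', 'v1')
        name = 'projects/{}/models/{}/versions/{}'.format(
            project, model, version)
        # Read back the instance that create_input wrote to disk.
        with open(json_path) as f:
            instance = json.load(f)
        response = service.projects().predict(
            name=name, body={'instances': [instance]}).execute()
        if 'error' in response:
            raise RuntimeError(response['error'])
        return response['predictions']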


Check Euclidean Distances

Of course, for our application, we will need to calculate Euclidean distances between any two given embedding vectors. If this distance between two predicted embedding vectors is less than a certain threshold, then we know that the two images are of the same person's face. To implement this, we will create a new function called check_distances. We will pass a triplet's index to this function, and the function will make calls to the model to get embedding vectors for the anchor, positive, and negative examples. Then, the function will calculate the Euclidean distances between the predicted embedding vectors for anchor and positive, and between anchor and negative.
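A sketch of check_distances, assuming the create_input and predict_json helpers above and the anchors, positives, and negatives arrays from the first project (the exact shape of each returned prediction depends on the model's output name):

    import numpy as np

    def check_distances(index):
        # Get an embedding for each image in the triplet.
        create_input(anchors[index], 'anchor.json')
        create_input(positives[index], 'positive.json')
        create_input(negatives[index], 'negative.json')
        # Each call returns a list with one prediction; here we assume
        # each prediction is the bare embedding vector.
        a = np.array(predict_json('anchor.json')[0])
        p = np.array(predict_json('positive.json')[0])
        n = np.array(predict_json('negative.json')[0])
        print('anchor-positive distance:', np.linalg.norm(a - p))
        print('anchor-negative distance:', np.linalg.norm(a - n))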


Final Results

The distance between the predicted embedding vectors for anchor and positive should generally be much lower than the distance between the predicted embedding vectors for anchor and negative. We will check that out for a few triplets!



About the Host (Amit Yadav)


I am a machine learning engineer with a focus on computer vision and sequence modelling for automated signal processing using deep learning techniques. My previous experience includes leading chatbot development for a large corporation.



Frequently Asked Questions


How do Rhyme projects work?
In Rhyme, all projects are completely hands-on. You don't just passively watch someone else. You use the software directly while following the host's (Amit Yadav) instructions. Using the software is the only way to achieve mastery. With the "Live Guide" option, you can ask for help and get an immediate response.

What do I need to install?
Nothing! Just join through your web browser. Your host (Amit Yadav) has already installed all required software and configured all data.

Can I create my own projects?
You can go to https://rhyme.com/for-companies, sign up for free, and follow the visual guide "How to use Rhyme" to create your own projects. If you have custom needs or a company-specific environment, please email us at help@rhyme.com

Can teams and organizations use Rhyme?
Absolutely. We offer Rhyme for workgroups as well as larger departments and companies. Universities, academies, and bootcamps can also buy Rhyme for their settings. You can select projects and trainings that are mission critical for you and also author your own that reflect your needs and tech environments. Please email us at help@rhyme.com

What accessibility features does Rhyme offer?
Rhyme strives to ensure that visual instructions are helpful for reading impairments. The Rhyme interface has features like resolution and zoom that will be helpful for visual impairments. And we are currently developing closed-caption functionality to help with hearing impairments. Most of the accessibility options of the cloud desktop's operating system or the specific application can also be used in Rhyme. If you have questions related to accessibility, please email us at accessibility@rhyme.com

Why do projects run on cloud desktops?
We started with Windows and Linux cloud desktops because they have the most flexibility in teaching any software (desktop or web). However, web applications like Salesforce can run directly through a virtual browser. And others like Jupyter and RStudio can run on containers and be accessed by virtual browsers. We are currently working on features where such web applications won't need to run through cloud desktops. But the rest of the Rhyme learning, authoring, and monitoring interfaces will remain the same.

What if I have other questions?
Please email us at help@rhyme.com and we'll respond to you within one business day.

Ready to join this 52-minute session?

More Projects by Amit Yadav


Amazon SageMaker: Custom Scripts
1 hour and 14 minutes