TensorFlow (Advanced): Face Verification System

Welcome to this project, in which we will create a face verification system with the help of a deep neural network. We will use a CNN model to extract a deep facial feature descriptor from a face image and compare it with the descriptor from another image. If the difference between the two descriptors is large, our system concludes that the two faces belong to different people; if the difference is small, we conclude that the two faces are of the same person. This works even if you have two very different face images of the same person, because the CNN model is trained to recognise facial features.


Task List


We will cover the following tasks in 34 minutes:


Introduction and Importing Libraries

This project is based on the Deep Face Recognition paper presented at the British Machine Vision Conference, 2015, by a group of researchers led by O. M. Parkhi. They used the popular VGG16 architecture and trained it on a large set of human face images, covering a few thousand unique individuals, downloaded from the internet. We will get to how this model was trained in one of the tasks in this project. As a starting point, we will import the libraries and helper functions that we will need in this project.


Face Descriptor

The researchers who trained the VGG Face model used a large collection of face images, and one of their objectives was to train a model that could be applied to face verification tasks. First, they trained a face classifier on their data of 2622 unique individuals. This was set up as a classification problem with 2622 classes.

Because of this classification training, the model had to learn many nuances of facial features in order to achieve high accuracy across 2622 individuals. We can take the output of a deep layer, called the face descriptor, as a somewhat lower-dimensional representation of these learned features. Then, for face verification, we simply compare the descriptors of two face images: if the two representations are similar, we can conclude that they are of the same person, and if the representations are different, then the images are probably not of the same person.
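The idea of reusing a classifier's deep layer as a descriptor can be sketched in Keras. The model below is a tiny stand-in, not the real VGG Face network; the layer names and sizes are purely illustrative.

```python
import numpy as np
from tensorflow.keras import layers, Model

# Hypothetical stand-in for the 2622-class face classifier
# (layer sizes here are illustrative, much smaller than VGG Face).
inputs = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(8, 3, activation="relu")(inputs)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(16, activation="relu", name="descriptor")(x)
outputs = layers.Dense(2622, activation="softmax")(x)
classifier = Model(inputs, outputs)

# The descriptor model reuses the classifier up to a deep layer,
# discarding the final softmax over identities.
descriptor_model = Model(inputs, classifier.get_layer("descriptor").output)

face = np.random.rand(1, 224, 224, 3).astype("float32")
descriptor = descriptor_model.predict(face, verbose=0)
print(descriptor.shape)  # (1, 16): a compact feature vector for one face
```

In the actual project, the same slicing is applied to the pre-trained VGG Face weights rather than a freshly initialised toy network.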


Image Preprocessing

We want to preprocess our images before we feed them to our face descriptor. Fortunately, Keras comes with a bunch of helper functions that we can use. We will run inference on one image at a time and then compare the two resulting feature descriptors. So, we will first resize the images to make them suitable for our model and then convert them into NumPy arrays.
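A minimal sketch of this preprocessing step, using Keras image helpers. The 224x224 target size is an assumption based on VGG-style inputs; check the shape your descriptor model actually expects.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

def preprocess(path, target_size=(224, 224)):
    # Resize on load, convert the PIL image to a float32 NumPy array,
    # and add a batch dimension so the model sees shape (1, 224, 224, 3).
    img = load_img(path, target_size=target_size)
    return np.expand_dims(img_to_array(img), axis=0)
```

Depending on how the pre-trained weights were produced, a mean subtraction or scaling step may also be needed before inference.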


Euclidean Distance

We are going to implement a simple function that calculates the Euclidean distance between two vectors. These vectors are going to be the outputs of the face descriptor model given two images as inputs, one image at a time. Let me remind you that we are not going to train the face descriptor. We are using the pre-trained weights from the VGG Face model. So, there is no training involved; we are simply using the vector output of a deep layer from the pre-trained model as our facial feature descriptor. When we get this vector for an image, it represents a set of complex facial features of that image. To verify whether two images are of the same person's face, we will get the two face descriptor vectors for the two images and see if they are similar.
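The distance function itself is a few lines of NumPy, the square root of the sum of squared differences between the two vectors:

```python
import numpy as np

def euclidean_distance(a, b):
    # sqrt of the sum of squared element-wise differences
    a = np.asarray(a, dtype="float64")
    b = np.asarray(b, dtype="float64")
    return float(np.sqrt(np.sum((a - b) ** 2)))

print(euclidean_distance([0.0, 0.0], [3.0, 4.0]))  # 5.0
```

Identical descriptors give a distance of zero, and the distance grows as the two feature vectors diverge.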


Face Verification

Finally, we have everything we need to set up the face verification system. We have the face descriptor, a function that gives us preprocessed images as NumPy arrays in the shape we need, and a function to calculate the Euclidean distance between two vectors.

Let’s put it all to use and create a function that performs face verification. We’re going to set a threshold which seems to work well during testing, but this can be changed depending on how well it performs. Consider it a hyperparameter that, given more validation, can be adjusted.
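Putting the pieces together, the verification step can be sketched as below. The 0.5 threshold is an illustrative placeholder, not the value tuned in the project, and the descriptors are assumed to come from the descriptor model described earlier.

```python
import numpy as np

def euclidean_distance(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def verify(descriptor_a, descriptor_b, threshold=0.5):
    # Compare the two face descriptors; a small distance means
    # the faces are likely of the same person. The threshold is
    # a hyperparameter to be tuned on validation pairs.
    distance = euclidean_distance(descriptor_a, descriptor_b)
    return distance <= threshold

print(verify([0.10, 0.20], [0.12, 0.19]))  # True: descriptors are close
print(verify([0.0, 0.0], [1.0, 1.0]))      # False: distance sqrt(2) > 0.5
```

In the real pipeline, each argument would be the descriptor model's output for one preprocessed image.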


Final Results

Finally, we use our verify method written in the previous task to make comparisons between various photos of faces. It’s interesting to note that this method works even if the photos of a person differ in colour, angle, or lighting, and even if they were taken at different ages.

Watch Preview

Preview the instructions that you will follow along in a hands-on session in your browser.


About the Host (Amit Yadav)


I am a machine learning engineer with a focus on computer vision and sequence modelling for automated signal processing using deep learning techniques. My previous experience includes leading chatbot development for a large corporation.



Frequently Asked Questions


In Rhyme, all projects are completely hands-on. You don't just passively watch someone else. You use the software directly while following the host's (Amit Yadav) instructions. Using the software is the only way to achieve mastery. With the "Live Guide" option, you can ask for help and get immediate response.
Nothing! Just join through your web browser. Your host (Amit Yadav) has already installed all required software and configured all data.
You can go to https://rhyme.com/for-companies, sign up for free, and follow this visual guide How to use Rhyme to create your own projects. If you have custom needs or company-specific environment, please email us at help@rhyme.com
Absolutely. We offer Rhyme for workgroups as well as for larger departments and companies. Universities, academies, and bootcamps can also buy Rhyme for their settings. You can select projects and trainings that are mission critical for you and also author your own that reflect your needs and tech environments. Please email us at help@rhyme.com
Rhyme strives to ensure that visual instructions are helpful for reading impairments. The Rhyme interface has features like resolution and zoom that will be helpful for visual impairments. And we are currently developing closed-caption functionality to help with hearing impairments. Most of the accessibility options of the cloud desktop's operating system or the specific application can also be used in Rhyme. If you have questions related to accessibility, please email us at accessibility@rhyme.com
We started with Windows and Linux cloud desktops because they have the most flexibility in teaching any software (desktop or web). However, web applications like Salesforce can run directly through a virtual browser. And others, like Jupyter and RStudio, can run on containers and be accessed by virtual browsers. We are currently working on features so that such web applications won't need to run through cloud desktops. But the rest of the Rhyme learning, authoring, and monitoring interfaces will remain the same.
Please email us at help@rhyme.com and we'll respond to you within one business day.

Ready to join this 34-minute session?

More Projects by Amit Yadav


Amazon SageMaker: Custom Scripts
1 hour and 14 minutes