At the end of 2017 Facebook released the newest addition to its “tagging engine”, a feature that makes use of facial recognition technology. It notifies a user when their face appears in any picture posted by friends, strangers or other pages, even if the poster has not explicitly tagged them. To do so, Facebook combines machine learning techniques such as neural networks and deep learning within the field of computer vision, which focuses on processing and understanding digital images. Pretty awesome, right? Let’s have a closer look and unravel the mystery of Facebook’s tagging system.

Facebook’s new feature

Facebook has been using face identification technology since 2010 to make tagging easier. Every time the social media platform suggested that your grandma was probably in the picture you were about to post, that was because Facebook recognized her face in the image based on the identifier it associates with her. Building on this, Facebook’s newest feature now notifies users when the facial recognition engine spots their face in any picture posted by friends, strangers or other pages, even if the poster has not explicitly tagged them.

Not available in Europe?

However, it is possible that you have never received such a notification, or for that matter the “grandma is probably in your picture” suggestion. That is because Facebook’s facial recognition functionality is not currently available in Canada or the EU because of region-specific data privacy laws. The objection is that a company or a piece of software might violate users’ privacy by storing and using information about their faces, gathered from online pictures and processed without their knowledge and explicit consent.

97.35% accuracy

Facebook has made facial recognition a priority, acquiring a number of specialized startups. Its research project “DeepFace” currently reports facial recognition accuracy of 97.35%, comparable to human performance and cutting the error of the previous state of the art by about 27%. The engine combines machine learning techniques such as neural networks and deep learning.

So how does facial recognition work?

While the specifics used by each company can drastically change the performance of their technology, the general intuition behind facial recognition can be explained in simpler terms. The first factor coming into play is data. As with most machine learning techniques, it is impossible to achieve anything without proper data, in both quantity and quality. Each business can use its own user data, or, in cases such as China’s initiative, the data can come from national databases. With access to a dataset of enough faces, one approach is to find the “baseline” of our set: the most “average” face, the one that has the most in common with all the other instances. Every other face is then compared to this baseline through a series of mathematical operations based on features we choose: perhaps the size of the eyes, the distance between the nose and the mouth, or skin tone. Thus, every face is now identified by a matrix containing the additional features it needs on top of the baseline, similar to a recipe.
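To make the idea concrete, here is a minimal toy sketch in Python. It is not Facebook’s actual pipeline: the three measurements per face and all the numbers are invented for illustration, and real systems learn their features with deep neural networks rather than picking them by hand.

```python
import numpy as np

# Toy face "descriptions": three hand-picked, hypothetical measurements per
# person (e.g. eye size, nose-to-mouth distance, skin tone). All values invented.
faces = {
    "alice": np.array([0.32, 0.48, 0.71]),
    "bob":   np.array([0.29, 0.55, 0.64]),
    "carol": np.array([0.35, 0.47, 0.69]),
}

# The "baseline" is simply the most average face in our small dataset.
baseline = np.mean(list(faces.values()), axis=0)

# Every face is stored as its offset from the baseline: the "extra ingredients"
# it adds on top of the shared recipe.
offsets = {name: vec - baseline for name, vec in faces.items()}

def closest_match(new_face):
    """Return the known person whose offset is closest to the new face's offset."""
    new_offset = new_face - baseline
    return min(offsets, key=lambda name: np.linalg.norm(offsets[name] - new_offset))

# A new photo yields a new measurement vector; we match it to the closest known face.
print(closest_match(np.array([0.33, 0.47, 0.70])))  # prints "alice" on this toy data
```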

Your face as a pizza?

For a crude example, think of how all pizzas start with dough, cheese and tomato sauce, so we can say the Margherita is probably the baseline. You can then build upon it by adding different ingredients, each combination leading to a different, unique pizza. When you describe what you are looking for, you usually just mention the “extra” ingredients rather than explaining to the waiter that you want a dough made of water, salt and flour, which should be heated in the oven, and so on. Those extra ingredients are the matrix of features that define the pizza.

Sectors that benefit from facial recognition technology

As we’ve seen, Facebook uses facial recognition to make pictures easier to tag. However, there are many more sectors that benefit from this technology. Facial recognition can be used to identify people: looking at your device in order to log in, or as an extra check before making a payment. An employer can use it to vet temporary employees hired for a short-term job, or to enhance security cameras in high-risk locations such as airports or banks. And while these uses would be more heavily regulated, we can also use computer vision without needing to identify the face we are scanning.
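As a hedged illustration of the “log in with your face” case: verification only has to decide whether a live capture matches the single face enrolled on the device, which can be sketched as a simple distance check. The vectors and threshold below are made up; real systems derive such vectors from deep networks and tune the threshold carefully.

```python
import numpy as np

# Face vector stored when the user set up face unlock (values invented).
enrolled_face = np.array([0.31, 0.52, 0.68])

def verify(live_face, threshold=0.1):
    """Accept the login if the live capture is close enough to the enrolled face."""
    distance = np.linalg.norm(live_face - enrolled_face)
    return distance < threshold

print(verify(np.array([0.30, 0.53, 0.69])))  # True: plausibly the same person
print(verify(np.array([0.45, 0.40, 0.55])))  # False: someone else
```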

A dog face filter in Snapchat

In some cases we don’t need to generate information about the faces we scan, as is the case with enhancing features – face filters or simple photo corrections, as seen in social media apps like Snapchat or Instagram. However, we can also analyze faces for insights that do not relate to identity: general information such as age, gender or race can help businesses better understand the demographics attending an event or browsing a particular section of a store. Additionally, we can identify emotions: you can gauge the satisfaction of customers waiting in line at the cash register, or while they complete a feedback survey, and determine which steps of your processes need improvement.
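A small, hypothetical sketch of that last idea: assuming some upstream model has already labelled each detected face with an emotion (the labels below are made up), aggregating those labels gives the kind of satisfaction summary described above, without storing anyone’s identity.

```python
from collections import Counter

# Hypothetical per-face emotion labels from an upstream classifier, e.g. faces
# detected in a checkout-line camera feed. No identities are kept.
detected_emotions = ["happy", "neutral", "happy", "frustrated", "neutral", "happy"]

counts = Counter(detected_emotions)
total = len(detected_emotions)

# Print the share of each emotion, most common first.
for emotion, count in counts.most_common():
    print(f"{emotion}: {count / total:.0%}")
# happy: 50%
# neutral: 33%
# frustrated: 17%
```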

And this is only scratching the surface of what computer vision has to offer – definitely a field to keep an eye out for in the near future!