Detecting Faces With The Azure Face API Using Python

In this article, I will show you how you can build an application in Python that can detect and compare faces using the Microsoft Azure Face API. The Microsoft Face API is a part of the larger Azure Cognitive Services suite hosted on Microsoft Azure. As a result, you will need to have an Azure account if you would like to try the code provided.

[Image: Azure Cognitive Services]

The solution uses the Python programming language to call the Azure Face API. The call itself is rather simple; you just need to send an HTTP request to the service. In the code, the service I use is hosted in the SouthWest region, so the URI of my Face API service reflects that region – make sure to use the API URI that matches your own service.
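To illustrate, here is a small sketch of how the endpoint host relates to the region name. The host pattern and the build_face_endpoint helper below are my own illustration, not part of the original code; always copy the exact endpoint shown in the Azure portal for your resource:

```python
# Sketch: derive the Face API host and detect path from a region name.
# The "<region>.api.cognitive.microsoft.com" pattern below follows the
# classic Cognitive Services endpoints and is an assumption; confirm it
# against the endpoint listed in the Azure portal for your resource.
def build_face_endpoint(region):
    host = region + ".api.cognitive.microsoft.com"
    detect_path = "/face/v1.0/detect?returnFaceId=true"
    return host, detect_path

host, path = build_face_endpoint("westus")
print(host)  # westus.api.cognitive.microsoft.com
print(path)  # /face/v1.0/detect?returnFaceId=true
```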

Another important detail about the approach of this demo is that the code relies on an initial picture taken with the camera and saved to disk in a specific location. This was done to simplify the code. As a result, you will need to first take a picture of the face you want to recognize before running the code. If you run the code on the Raspbian operating system, use this command to take a picture:

raspistill -v -o test.jpg

The baseline picture will be saved in a file called test.jpg. By default the code expects this file to be located in the /home/pi/ directory; simply change the base_face_file variable if you are storing the file elsewhere.

Last, but not least, make sure the baseline picture you take has proper lighting and is right-side up. Most image recognition failures are due to improper alignment of the camera or poor lighting conditions.


Let’s do this. The first method of the code is called capturePicture(). The purpose of this method is simple: take a picture through the camera and save it to disk.

The next method, getFaceId(), calls the Azure Face API to retrieve a unique identifier for the last picture taken. This is a necessary preliminary step; the Azure Face API keeps a list of recent pictures and indexes them for up to 24 hours. If no face is detected, the method returns an empty string. To make the HTTP call, two headers are required: Ocp-Apim-Subscription-Key and Content-Type. The first one is your Cognitive Services API key, and the second should be "application/octet-stream". The body of the request is simply the binary content of the image.
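To make the shape of that exchange concrete, here is a minimal sketch of the two headers and of how the detect response is parsed. The sample JSON payload (including the faceId value) is illustrative only, not an actual service response:

```python
import json

subscription_key = "ENTER_THE_AZURE_COGNITIVE_KEY"

# The two headers required by the detect call
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/octet-stream",
}

# Illustrative detect response: the service returns a JSON array with one
# entry per detected face (an empty array when no face is found).
sample_response = '[{"faceId": "c5c24a82-6845-4031-9d5d-978df9175426"}]'

parsed = json.loads(sample_response)
face_id = parsed[0]["faceId"] if len(parsed) > 0 else ""
print(face_id)
```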

The third method, compareFaces(), sends two FaceId values to the same Face API service and returns whether or not they are identical. As the program starts, it automatically requests the FaceId of the picture initially taken, saved in the test.jpg file. The other FaceId is the one from the last picture taken. The HTTP headers are similar for this call, except that the Content-Type header should be set to "application/json". The call returns a JSON payload; the Python code simply reads the value of the isIdentical property.
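A quick sketch of that verify exchange follows. Here json.dumps builds the request body, which is an alternative to the string concatenation used in the listing since it escapes values for us; the faceId values and the response payload are illustrative placeholders:

```python
import json

# Hypothetical faceId values standing in for the two detect results
face_id1 = "11111111-1111-1111-1111-111111111111"
face_id2 = "22222222-2222-2222-2222-222222222222"

# Build the verify request body; json.dumps handles quoting and escaping
body = json.dumps({"faceId1": face_id1, "faceId2": face_id2})

# Illustrative verify response; the code only reads isIdentical
sample_response = '{"isIdentical": true, "confidence": 0.9}'
parsed = json.loads(sample_response)
print(parsed["isIdentical"])  # True
```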

Last but not least, the main routine retrieves the FaceId for the baseline picture, then enters an infinite loop that takes a picture and compares faces every 2 seconds. Any errors will be printed out. You should note that since a FaceId is only valid for 24 hours, this code will stop working after a day; that’s because the baseline FaceId will have become invalid. A minor enhancement to this code could be made to obtain a new FaceId for the baseline daily.
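One way to implement that enhancement is to track when the baseline FaceId was obtained and refresh it before the 24-hour window expires. The needs_refresh helper and the 23-hour margin below are my own hypothetical choices, not part of the original code:

```python
import time

# Refresh an hour before the 24-hour FaceId expiry, as a safety margin
FACE_ID_TTL = 23 * 60 * 60

def needs_refresh(obtained_at, now=None):
    # True once the baseline faceId is old enough to be re-detected
    if now is None:
        now = time.time()
    return (now - obtained_at) >= FACE_ID_TTL

# In the main loop, before comparing faces, one could write:
# if needs_refresh(baseline_time):
#     faceId1 = getFaceId(base_face_file, headers)
#     baseline_time = time.time()

print(needs_refresh(0, now=FACE_ID_TTL + 1))  # True
print(needs_refresh(0, now=FACE_ID_TTL - 1))  # False
```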

import http.client, json
from picamera import PiCamera
from time import sleep
import sys 
# Cognitive settings 
subscription_key = 'ENTER_THE_AZURE_COGNITIVE_KEY'
uri_base = ''  # set to the host name of your Face API endpoint
analyze_uri = "/vision/v1.0/analyze?%s"
face_detect_uri = "/face/v1.0/detect?returnFaceId=true"
face_verify_uri = "/face/v1.0/verify"
base_face_id = ""
base_face_file = '/home/pi/test.jpg'  # this file was created from the camera test
# other settings 
fileName = "/home/pi/enzo/image.jpg"  # this file is created every few seconds 
headers = dict() 
headers['Ocp-Apim-Subscription-Key'] = subscription_key 
headers['Content-Type'] = "application/octet-stream"
headers_appjson = dict() 
headers_appjson['Ocp-Apim-Subscription-Key'] = subscription_key 
headers_appjson['Content-Type'] = "application/json"
lastValue = False   
camera = PiCamera()   
def capturePicture(pathToFileInDisk):   
    # Take a picture with the camera and save it to disk
    camera.capture(pathToFileInDisk)   
def getFaceId(pathToFileInDisk, headers):   
    # Call the Face API detect operation; return the faceId of the first
    # face found, or an empty string if no face is detected
    with open( pathToFileInDisk, "rb" ) as f:   
        inputdata = f.read()   
    body = inputdata   
    faceId = ""   
    try:   
        conn = http.client.HTTPSConnection(uri_base)   
        conn.request("POST", face_detect_uri, body, headers)   
        response = conn.getresponse()   
        data = response.read().decode('utf-8')                  
        parsed = json.loads(data)   
        if (len(parsed) > 0):   
            print (parsed)   
            faceId = parsed[0]['faceId']   
        conn.close()   
    except Exception as e:   
        print("Error:", e)   
    return faceId   
def compareFaces(faceId1, faceId2):   
    # Call the Face API verify operation; return True when both faceIds
    # belong to the same person
    identical = False   
    try:   
        body = '{ "faceId1": "' + faceId1 + '", "faceId2": "' + faceId2 + '" }'   
        conn = http.client.HTTPSConnection(uri_base)   
        conn.request("POST", face_verify_uri, body, headers_appjson)   
        response = conn.getresponse()   
        data = response.read().decode('utf-8')   
        parsed = json.loads(data)   
        identical = parsed['isIdentical']   
        conn.close()   
    except Exception as e:   
        print("Error:", e)   
    return identical   
# Main code starts here 
# Get the face id of the base image - this faceId is valid for 24 hours 
faceId1 = getFaceId(base_face_file, headers)   
while True:   
    try:   
        print("calling camera...")   
        capturePicture(fileName)   
        print("calling Azure Cognitive service...")   
        faceId2 = getFaceId(fileName, headers)   
        if (len(faceId2) > 0):   
            isSame = compareFaces(faceId1, faceId2)   
            if isSame:   
                # Same face detected... send the message   
                print("SAME FACE DETECTED")   
    except:   
        print("Error:", sys.exc_info()[0])   
    sleep(2)   

Note: The sample code provided was built and tested on a Raspberry Pi; however, it should run on any platform that has a connected camera. The code is part of a larger lab, with full instructions, that was created and posted here.