Detecting Faces With The Azure Face API Using Python

In this article, I will show you how to build an application in Python that can detect and compare faces using the Microsoft Azure Face API. The Face API is part of the larger Azure Cognitive Services suite hosted on Microsoft Azure. As a result, you will need an Azure account if you would like to try the code provided.

Approach

 
The solution uses the Python programming language to make a call to the Azure Face API. The call itself is rather simple; you just need to send an HTTP request to the service. In the code, the service I use is hosted in the South Central US region, so the URI to my Face API service is southcentralus.api.cognitive.microsoft.com. Make sure to use the URI that matches the region of your own service.
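If you are unsure what your endpoint looks like, the host name simply follows the pattern <region>.api.cognitive.microsoft.com. The short sketch below (assuming the southcentralus region used in this article) shows how that host is built and how an HTTPS connection is opened to it; the actual request paths and headers are covered with the full listing later.

import http.client

region = 'southcentralus'  # change this to the region where your Face API resource is deployed
uri_base = region + '.api.cognitive.microsoft.com'

# Every call made in this article is an HTTPS request to this host
conn = http.client.HTTPSConnection(uri_base)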
 
Another important detail about the approach of this demo is that the code relies on an initial picture taken with the camera and saved to disk in a specific location. This was done to simplify the code. As a result, you will need to take a picture of the face you want to recognize before running the code. If you run the code on the Raspbian operating system, use the following command to take a picture:
 
raspistill -v -o test.jpg
 
The baseline picture will be saved in a file called test.jpg. By default, the code expects this file to be located in the /home/pi/ directory; simply change the base_face_file variable if you store the file elsewhere.
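If you prefer to stay in Python, the baseline picture can also be captured with the same picamera library the code uses later. The snippet below is a small optional sketch that writes to the same /home/pi/test.jpg location expected by base_face_file.

from picamera import PiCamera
from time import sleep

camera = PiCamera()
sleep(2)                             # give the sensor a moment to adjust to the light
camera.capture('/home/pi/test.jpg')  # same file the code reads as the baseline picture
camera.close()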
 
Last but not least, make sure the baseline picture you take has proper lighting and is right-side up. Most image recognition failures are due to improper alignment of the camera or poor lighting conditions.
 

Code

 
Let’s do this. The first method in the code is called capturePicture(). Its purpose is simple: take a picture with the camera and save it to disk.
 
The next method, getFaceId(), calls the Azure Face API to retrieve a unique identifier for the last picture taken. This is a necessary preliminary step; the Azure Face API keeps a list of recent pictures and indexes them for up to 24 hours. If no face is detected, the method returns an empty string. To make the HTTP call, two headers are required: Ocp-Apim-Subscription-Key and Content-Type. The first is your Cognitive Services API key, and the second should be set to "application/octet-stream". The body of the request is simply the binary content of the image.
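To make the shape of this call easier to see on its own, here is a minimal sketch of the same detect request using the requests package (assumed to be installed) instead of the http.client module used in the full listing below; the function name get_face_id is only for this sketch, and uri_base and subscription_key are assumed to be set as described above.

import requests

def get_face_id(path_to_file, uri_base, subscription_key):
    headers = {
        'Ocp-Apim-Subscription-Key': subscription_key,
        'Content-Type': 'application/octet-stream'
    }
    with open(path_to_file, 'rb') as f:
        body = f.read()
    url = 'https://' + uri_base + '/face/v1.0/detect?returnFaceId=true'
    response = requests.post(url, headers=headers, data=body)
    faces = response.json()          # a JSON array with one entry per detected face
    return faces[0]['faceId'] if len(faces) > 0 else ''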
 
The third method, compareFaces(), sends two FaceId values to the same Face API service and returns whether or not they are identical. When the program starts, it first requests the FaceId of the baseline picture saved in the test.jpg file; the other FaceId is the one from the last picture taken. The HTTP headers are similar for this call, except that the Content-Type header should be set to "application/json". The call returns a JSON payload, and the Python code simply reads the value of its isIdentical property.
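The verify call follows the same pattern. Below is a minimal sketch of it, again using the requests package; the full listing below builds the same JSON body with string concatenation instead of json.dumps, and the function name are_same_face is only for this sketch.

import json, requests

def are_same_face(face_id_1, face_id_2, uri_base, subscription_key):
    headers = {
        'Ocp-Apim-Subscription-Key': subscription_key,
        'Content-Type': 'application/json'
    }
    body = json.dumps({'faceId1': face_id_1, 'faceId2': face_id_2})
    url = 'https://' + uri_base + '/face/v1.0/verify'
    response = requests.post(url, headers=headers, data=body)
    return response.json()['isIdentical']    # True when both FaceIds belong to the same person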
 
Last but not least, the main routine retrieves the FaceId for the baseline picture and then enters an infinite loop that takes a new picture and compares faces every 2 seconds. Any errors are printed out. Note that because a FaceId is only valid for 24 hours, this code will stop working after a day, when the baseline FaceId becomes invalid. A minor enhancement would be to obtain a new FaceId for the baseline daily, as sketched just before the full listing below.
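Here is a rough sketch of that enhancement; it is not part of the original code, and it relies on the getFaceId() function, base_face_file, and headers defined in the listing that follows.

from time import time

FACE_ID_LIFETIME = 23 * 60 * 60      # refresh a little before the 24-hour limit

faceId1 = getFaceId(base_face_file, headers)
faceId1_taken_at = time()

def baselineFaceId():
    # Return the baseline FaceId, re-requesting it from the Face API
    # whenever the current one is close to expiring
    global faceId1, faceId1_taken_at
    if time() - faceId1_taken_at > FACE_ID_LIFETIME:
        faceId1 = getFaceId(base_face_file, headers)
        faceId1_taken_at = time()
    return faceId1

# In the main loop, call compareFaces(baselineFaceId(), faceId2)
# instead of compareFaces(faceId1, faceId2).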
import http.client, json
from picamera import PiCamera
from time import sleep
import sys

# Cognitive Services settings
subscription_key = 'ENTER_THE_AZURE_COGNITIVE_KEY'
uri_base = 'southcentralus.api.cognitive.microsoft.com'
analyze_uri = "/vision/v1.0/analyze?%s"  # not used in this sample

face_detect_uri = "/face/v1.0/detect?returnFaceId=true"
face_verify_uri = "/face/v1.0/verify"
base_face_id = ""
base_face_file = '/home/pi/test.jpg'  # this file was created from the camera test

# Other settings
fileName = "/home/pi/enzo/image.jpg"  # this file is created every few seconds
headers = dict()
headers['Ocp-Apim-Subscription-Key'] = subscription_key
headers['Content-Type'] = "application/octet-stream"

headers_appjson = dict()
headers_appjson['Ocp-Apim-Subscription-Key'] = subscription_key
headers_appjson['Content-Type'] = "application/json"

lastValue = False
camera = PiCamera()

# Take a picture with the camera and save it to disk
def capturePicture(pathToFileInDisk):
    camera.capture(pathToFileInDisk)

# Return a FaceId for an image stored on disk (empty string if no face is detected)
def getFaceId(pathToFileInDisk, headers):

    with open(pathToFileInDisk, "rb") as f:
        inputdata = f.read()
    body = inputdata
    faceId = ""

    try:
        conn = http.client.HTTPSConnection(uri_base)
        conn.request("POST", face_detect_uri, body, headers)
        response = conn.getresponse()
        data = response.read().decode('utf-8')
        #print(data)
        parsed = json.loads(data)

        if (len(parsed) > 0):
            print(parsed)
            faceId = parsed[0]['faceId']
            print(faceId)
        conn.close()

    except Exception as e:
        print('Error:')
        print(e)

    return faceId

# Ask the Face API whether two FaceIds belong to the same person
def compareFaces(faceId1, faceId2):

    identical = False

    try:
        body = '{ "faceId1": "' + faceId1 + '", "faceId2": "' + faceId2 + '" }'
        print(body)

        conn = http.client.HTTPSConnection(uri_base)
        conn.request("POST", face_verify_uri, body, headers_appjson)
        response = conn.getresponse()
        data = response.read().decode('utf-8')
        print(data)
        parsed = json.loads(data)
        identical = parsed['isIdentical']

        conn.close()

    except Exception as e:
        print('Error:')
        print(e)

    return identical

# Main code starts here
print('starting...')

# Get the face id of the base image - this faceId is valid for 24 hours
faceId1 = getFaceId(base_face_file, headers)

while True:
    try:
        print("calling camera...")
        capturePicture(fileName)
        print("calling Azure Cognitive service...")
        faceId2 = getFaceId(fileName, headers)
        if (len(faceId2) > 0):
            isSame = compareFaces(faceId1, faceId2)
            if isSame:
                # Same face detected... send the message
                print("SAME FACE DETECTED")
    except:
        print("Error:", sys.exc_info()[0])

    sleep(2)
NOTE
The sample code provided was built and tested on a Raspberry Pi; however, this code should run on any platform that has a connected camera. The code is part of a larger lab, with full instructions, that was created and posted here.