How To Detect Human Faces Using Cognitive Services

In this article, we will develop a Xamarin.Forms application that detects human faces using Cognitive Services. Here is how it will work.
 
First, the camera will open and take a snapshot. Cognitive Services will then detect the face and return its age, gender, and hair attributes. We'll also use a couple of NuGet packages, one for opening the camera and one for face detection.
 

What are Cognitive Services?

 
In today's smart world, we want our applications to be more intelligent and engaging so that they attract users. For this purpose, we'll use Cognitive Services.
 
Microsoft provides a set of really useful services, called "Azure Cognitive Services", that make our applications more intelligent and capable of making smart decisions. They are game changers: with just a few lines of code, they let us add this intelligence to an application.
 

What are NuGet Packages?

 
NuGet (formerly known as NuPack) is a free and open-source package manager designed for the Microsoft development platform. Since its introduction in 2010, NuGet has grown into a large ecosystem of tools and services. NuGet is distributed as a Visual Studio extension.
 
For face detection, we'll use two NuGet packages.
  1. Xam.Plugin.Media.
  2. Microsoft.ProjectOxford.Face.
"Xam.Plugin.Media" will be used for opening the camera, while "Microsoft.ProjectOxford.Face" will be used for face detection.
 

Steps for building a face detection application 

First of all, we need a subscription key, which is shown below and is free during the trial phase.

(Screenshot: the subscription key and its endpoint path in the Azure portal.)

We have to consume the REST API in Xamarin.Forms by using the Face NuGet package (Microsoft.ProjectOxford.Face).
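To keep these values in one place, here is a minimal sketch of creating the client. The helper class name FaceClientFactory is an assumption for illustration; the key and the West Central US endpoint are the same ones used in the code-behind later in this article, so substitute your own.

using Microsoft.ProjectOxford.Face;

public static class FaceClientFactory
{
    // Subscription key and endpoint copied from the Azure portal.
    // These match the values used in the full listing below; replace them with your own.
    const string SubscriptionKey = "f805d74cc5e846c9b5b0b1e904ea3946";
    const string ApiRoot = "https://westcentralus.api.cognitive.microsoft.com/face/v1.0";

    // FaceServiceClient (from Microsoft.ProjectOxford.Face) wraps the Face REST API for us.
    public static FaceServiceClient Create() => new FaceServiceClient(SubscriptionKey, ApiRoot);
}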
Now, we'll install the following packages (sample Package Manager Console commands are sketched after the list).
  • Xam.Plugin.Media.
  • Microsoft.ProjectOxford.Face.
  • Newtonsoft.Json.
  • System.Net.Http.
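As a sketch, assuming you install from the NuGet Package Manager Console (versions omitted; pick the latest ones compatible with your Xamarin.Forms project), the commands look like this:

Install-Package Xam.Plugin.Media
Install-Package Microsoft.ProjectOxford.Face
Install-Package Newtonsoft.Json
Install-Package System.Net.Http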
After installing the packages, it's time to write a few lines of code.
 
Add the following lines in your XAML file.
<StackLayout>
    <Label Text="Upload file to Server" HorizontalOptions="Center" TextColor="Black" FontSize="36"></Label>
    <Button Text="Take Photo" BackgroundColor="Navy" TextColor="White" FontSize="40" x:Name="TakePhoto" Clicked="TakePhoto_Clicked"></Button>
    <Image x:Name="FileImage1" WidthRequest="400" HeightRequest="220"></Image>
    <Label x:Name="Ageget" FontSize="18" TextColor="Black"></Label>
    <Label x:Name="Hairget" FontSize="18" TextColor="Black"></Label>
    <Label x:Name="Genderget" FontSize="18" TextColor="Black"></Label>
</StackLayout>
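For context, here is a minimal code-behind sketch showing how the Clicked="TakePhoto_Clicked" attribute in the XAML connects to the handler we write next. The page name MainPage is an assumption; use your own ContentPage class.

using System;
using Xamarin.Forms;

public partial class MainPage : ContentPage
{
    public MainPage()
    {
        InitializeComponent();
    }

    // Stub only; the full async implementation is shown in the next listing.
    private void TakePhoto_Clicked(object sender, EventArgs e)
    {
    }
}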
After that, add the following code to the click event handler of the "Take Photo" button.
private async void TakePhoto_Clicked(object sender, EventArgs e)
{
    // Requires: using System; using System.Linq; using System.Threading.Tasks;
    // using Plugin.Media.Abstractions; using Microsoft.ProjectOxford.Face;
    // using Microsoft.ProjectOxford.Face.Contract; using Xamarin.Forms;

    // Initialize the media plugin and give it a moment before opening the camera.
    var media = Plugin.Media.CrossMedia.Current;
    await media.Initialize();
    await Task.Delay(1000);

    // Open the camera and take a snapshot without saving it to the album.
    var file = await media.TakePhotoAsync(new StoreCameraMediaOptions
    {
        SaveToAlbum = false
    });

    if (file != null)
    {
        // Create the Face API client with the subscription key and endpoint.
        var faceServiceClient = new FaceServiceClient("f805d74cc5e846c9b5b0b1e904ea3946", "https://westcentralus.api.cognitive.microsoft.com/face/v1.0");

        // Attributes we want the service to return.
        var faceAttributes = new FaceAttributeType[] { FaceAttributeType.Age, FaceAttributeType.Gender, FaceAttributeType.Hair };

        // Show the captured photo in the Image control.
        FileImage1.Source = ImageSource.FromStream(() => file.GetStream());

        // Detect faces (return face IDs, no landmarks) along with the requested attributes.
        Face[] faces = await faceServiceClient.DetectAsync(file.GetStream(), true, false, faceAttributes);
        if (faces.Any())
        {
            // Display the attributes of the first detected face.
            Ageget.Text = faces.FirstOrDefault().FaceAttributes.Age.ToString();
            Genderget.Text = faces.FirstOrDefault().FaceAttributes.Gender.ToString();
            Hairget.Text = faces.FirstOrDefault().FaceAttributes.Hair.Bald.ToString();
        }
    }
}
This code initializes and accesses the device's camera. When we tap "Take Photo", the camera opens; the captured photo appears in the <Image> element, and once the face has been detected, the age, gender, and hair information appears in the <Label> elements.
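Optionally, you can also check whether the device actually has a usable camera before calling TakePhotoAsync. This guard is not part of the original sample, but it uses the IsCameraAvailable and IsTakePhotoSupported properties that Plugin.Media exposes and assumes the handler lives inside a ContentPage, so DisplayAlert is available:

// Optional guard near the top of TakePhoto_Clicked, after media.Initialize().
if (!media.IsCameraAvailable || !media.IsTakePhotoSupported)
{
    // DisplayAlert is available because the handler is in a ContentPage.
    await DisplayAlert("No Camera", "No camera is available on this device.", "OK");
    return;
}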
 
Output
 

