Xamarin.Android - Face Detection Using Computer Vision Face API

In this post, we will focus on the Microsoft Cognitive Services Face API. Specifically, we'll look at what you can do with the Face API and then run through an example of it.

Introduction

In the last couple of Cognitive Services posts, we focused on the Microsoft Computer Vision API and its uses. In this post, we will turn to the Microsoft Cognitive Services Face API, see what it can do, and work through an example.

What’s possible with the Face API?

Initially released in 2016, the Face API belongs to the Microsoft Cognitive Services set of AI cloud services and gives you access to some of the most advanced face recognition algorithms currently on the market. Its main features center around face detection with attributes, face recognition, and emotion recognition, which allow you to:

  • Identify the underlying emotion for each face in an image (anger, contempt, fear, happiness, etc.)
  • Identify previously tagged people using their faces
  • Group faces based on similarity
  • Detect human faces
  • Find similar faces

Face Detection

This feature of the API allows you to detect “face rectangles”, each accompanied by a rich set of attributes that includes, but is not limited to: age, emotion, gender, pose, smile, and facial hair.
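To make that attribute set concrete, here is an abbreviated sketch of the JSON array the detect endpoint returns, one element per detected face. The values (and the faceId) are illustrative, and the response is parsed in Python purely for demonstration; the actual fields are documented in the Face API v1.0 reference.

```python
import json

# Abbreviated, illustrative example of a Face API v1.0 detect response.
sample_response = """
[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceRectangle": { "top": 131, "left": 177, "width": 162, "height": 162 },
    "faceAttributes": {
      "smile": 0.0,
      "gender": "male",
      "age": 27.0,
      "glasses": "ReadingGlasses",
      "emotion": { "anger": 0.0, "contempt": 0.0, "disgust": 0.0, "fear": 0.0,
                   "happiness": 0.98, "neutral": 0.02, "sadness": 0.0, "surprise": 0.0 }
    }
  }
]
"""

# The service returns an array: one entry per detected face.
faces = json.loads(sample_response)
for face in faces:
    attrs = face["faceAttributes"]
    print(face["faceId"], attrs["age"], attrs["gender"], attrs["glasses"])
```

The model classes we add in Step 4 mirror this shape one-for-one, which is what lets a JSON deserializer map the response automatically.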

In the image below, we can see that the API has identified the face and highlighted it with a rectangle.

Prerequisites

  • Cognitive Services Face API key
  • Microsoft.Net.Http
  • Newtonsoft.Json
Step 1 - Create an Android Project
 
Create your Android solution in Visual Studio or Xamarin Studio. Select Android and, from the template list, choose Android Blank App. Give it a name, like FaceDetection.

(ProjectName: FaceDetection)

Step 2 - Add References of NuGet Packages

First of all, in References, add references to Microsoft.Net.Http and Newtonsoft.Json using the NuGet Package Manager.
 
Step 3 - User Interface
 
Open Solution Explorer > Project Name > Resources > Layout > activity_main.axml and add the following code.
(FileName: activity_main.axml)
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:padding="10dp"
    android:orientation="vertical">
    <LinearLayout
        android:orientation="vertical"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:padding="8dp">
        <ImageView
            android:id="@+id/imgView"
            android:layout_width="350dp"
            android:layout_height="350dp" />
        <ProgressBar
            android:id="@+id/progressBar"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:visibility="invisible" />
        <Button
            android:id="@+id/btnAnalyze"
            android:text="Analyze"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />
        <TextView
            android:id="@+id/txtGender"
            android:text="Gender:"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />
        <TextView
            android:id="@+id/txtAge"
            android:text="Age:"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />
        <TextView
            android:id="@+id/txtFaces"
            android:text="Glasses:"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />
    </LinearLayout>
</LinearLayout>
Step 4 - Analysis Model Class
 
Add a new class to your project with the name AnalysisModel.cs. Add the following properties, under an appropriate namespace, to hold the result set deserialized from the JSON response.
 
(FileName: AnalysisModel.cs)
using System.Collections.Generic;

namespace FaceDetection
{
    public class AnalysisModel
    {
        public string faceId { get; set; }
        public FaceRectangle faceRectangle { get; set; }
        public FaceAttributes faceAttributes { get; set; }
    }

    public class FaceRectangle
    {
        public int top { get; set; }
        public int left { get; set; }
        public int width { get; set; }
        public int height { get; set; }
    }

    public class Emotion
    {
        public double anger { get; set; }
        public double contempt { get; set; }
        public double disgust { get; set; }
        public double fear { get; set; }
        public double happiness { get; set; }
        public double neutral { get; set; }
        public double sadness { get; set; }
        public double surprise { get; set; }
    }

    public class FaceAttributes
    {
        public double smile { get; set; }
        public string gender { get; set; }
        public double age { get; set; }
        public string glasses { get; set; }
        public Emotion emotion { get; set; }
    }
}
Step 5 - Backend Code
 
Let's go to Solution Explorer > Project Name > MainActivity.cs and add the following code with the appropriate namespaces.
 
Note
Please replace the subscription key and the region-specific endpoint address in the MainActivity class with your own.
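The detect request itself is just the region-specific endpoint plus query parameters that select which fields the service should return, with the image bytes as the POST body. As a quick illustration of the URI assembly alone (a Python sketch, using westcentralus as a stand-in region):

```python
from urllib.parse import urlencode

# Region-specific Face API endpoint (replace "westcentralus" with your region).
uri_base = "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect"

# Query parameters choosing which fields the service should return.
params = {
    "returnFaceId": "true",
    "returnFaceAttributes": "age,gender,smile,glasses",
}

# Keep commas unescaped so the attribute list stays readable.
uri = uri_base + "?" + urlencode(params, safe=",")
print(uri)
```

The C# code below builds the same kind of URI by simple string concatenation, which works just as well since the parameter values here need no escaping.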
 
(FileName: MainActivity)
using Android.App;
using Android.Graphics;
using Android.OS;
using Android.Support.V7.App;
using Android.Views;
using Android.Widget;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

namespace FaceDetection
{
    [Activity(Label = "@string/app_name", Theme = "@style/AppTheme", MainLauncher = true)]
    public class MainActivity : AppCompatActivity
    {
        const string subscriptionKey = "202631d0172a48d08f21b52bc22ee860";
        const string uriBase = "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect";
        Bitmap mBitmap;
        private ImageView imageView;
        private ProgressBar progressBar;
        ByteArrayContent content;
        private TextView txtAge, txtGender, txtFaces;
        Button btnAnalyze;

        protected override void OnCreate(Bundle savedInstanceState)
        {
            base.OnCreate(savedInstanceState);
            // Set our view from the "main" layout resource
            SetContentView(Resource.Layout.activity_main);
            mBitmap = BitmapFactory.DecodeResource(Resources, Resource.Drawable.newAhsan);
            imageView = FindViewById<ImageView>(Resource.Id.imgView);
            imageView.SetImageBitmap(mBitmap);
            txtGender = FindViewById<TextView>(Resource.Id.txtGender);
            txtAge = FindViewById<TextView>(Resource.Id.txtAge);
            txtFaces = FindViewById<TextView>(Resource.Id.txtFaces);
            progressBar = FindViewById<ProgressBar>(Resource.Id.progressBar);
            btnAnalyze = FindViewById<Button>(Resource.Id.btnAnalyze);

            // Compress the bitmap to a JPEG byte array for the request body.
            byte[] bitmapData;
            using (var stream = new MemoryStream())
            {
                mBitmap.Compress(Bitmap.CompressFormat.Jpeg, 100, stream);
                bitmapData = stream.ToArray();
            }
            content = new ByteArrayContent(bitmapData);

            btnAnalyze.Click += async delegate
            {
                busy();
                await MakeAnalysisRequest(content);
            };
        }

        public async Task MakeAnalysisRequest(ByteArrayContent content)
        {
            try
            {
                HttpClient client = new HttpClient();
                // Request headers.
                client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

                string requestParameters = "returnFaceId=true" +
                                           "&returnFaceAttributes=age,gender,smile,glasses";
                // Assemble the URI for the REST API method.
                string uri = uriBase + "?" + requestParameters;

                content.Headers.ContentType =
                    new MediaTypeHeaderValue("application/octet-stream");
                // Asynchronously call the REST API method.
                var response = await client.PostAsync(uri, content);
                // Asynchronously get the JSON response.
                string contentString = await response.Content.ReadAsStringAsync();

                var faces = JsonConvert.DeserializeObject<List<AnalysisModel>>(contentString);
                Toast.MakeText(this, faces.Count + " Face(s) Detected", ToastLength.Short).Show();
                NotBusy();
                // Guard against an empty result before reading faces[0].
                if (faces.Count > 0)
                {
                    var newbitmap = DrawRectanglesOnBitmap(mBitmap, faces);
                    imageView.SetImageBitmap(newbitmap);
                    txtGender.Text = "Gender:  " + faces[0].faceAttributes.gender;
                    txtAge.Text = "Age:  " + faces[0].faceAttributes.age;
                    txtFaces.Text = "Glasses:  " + faces[0].faceAttributes.glasses;
                }
            }
            catch (Exception e)
            {
                NotBusy();
                Toast.MakeText(this, e.ToString(), ToastLength.Short).Show();
            }
        }

        // Draw a rectangle around each detected face.
        private Bitmap DrawRectanglesOnBitmap(Bitmap mybitmap, List<AnalysisModel> faces)
        {
            Bitmap bitmap = mybitmap.Copy(Bitmap.Config.Argb8888, true);
            Canvas canvas = new Canvas(bitmap);
            Paint paint = new Paint();
            paint.AntiAlias = true;
            paint.SetStyle(Paint.Style.Stroke);
            paint.Color = Color.DodgerBlue;
            paint.StrokeWidth = 12;
            foreach (var face in faces)
            {
                var faceRectangle = face.faceRectangle;
                canvas.DrawRect(faceRectangle.left,
                    faceRectangle.top,
                    faceRectangle.left + faceRectangle.width,
                    faceRectangle.top + faceRectangle.height, paint);
            }
            return bitmap;
        }

        void busy()
        {
            progressBar.Visibility = ViewStates.Visible;
            btnAnalyze.Enabled = false;
        }

        void NotBusy()
        {
            progressBar.Visibility = ViewStates.Invisible;
            btnAnalyze.Enabled = true;
        }
    }
}
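The DrawRectanglesOnBitmap helper converts each faceRectangle into the (left, top, right, bottom) corner coordinates that Canvas.DrawRect expects. The arithmetic is simple enough to sketch on its own; this Python illustration (with a hypothetical helper name) mirrors the conversion:

```python
def face_rect_to_corners(rect):
    """Convert a Face API faceRectangle (top/left/width/height)
    into (left, top, right, bottom) corners as used by Canvas.DrawRect."""
    left = rect["left"]
    top = rect["top"]
    return (left, top, left + rect["width"], top + rect["height"])

# Example: a face rectangle at (left=177, top=131) of size 162x162.
corners = face_rect_to_corners({"top": 131, "left": 177, "width": 162, "height": 162})
print(corners)  # (177, 131, 339, 293)
```

Note that the coordinates are in the pixel space of the image you uploaded, which is why the C# code draws on a copy of the same bitmap rather than on the (possibly scaled) ImageView.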
Group Image Face Detection

Single Face Detection

Summary

In this post, we looked into the details of the Face API, which belongs to the Microsoft Cognitive Services family of products. We explored some of its key features and saw how easy it is to consume the API endpoint. Have you used the Face API in any of your applications?

Catch up on previous Cognitive Services posts here.