Face API Using ASP.NET MVC And AngularJS

Contents Focused On

  • What is Cognitive Services?
  • What is Face API?
  • Sign Up for Face API
  • Create ASP.Net MVC Sample Application

    • Add AngularJS
    • Install & Configure the Face API

  • Upload images to detect faces
  • Mark faces in the image
  • List detected faces with face information
  • Summary

What is Cognitive Services (Project Oxford)?

Microsoft Cognitive Services, formerly known as Project Oxford, is a set of machine-learning application programming interfaces (REST APIs), SDKs, and services that helps developers make smarter applications by adding intelligent features – such as emotion and video detection; facial, speech, and vision recognition; and speech and language understanding.

Get more details from the official website.

What is Face API?

In Microsoft Cognitive Services, there are four main components.

  1. Face recognition recognizes faces in photos, groups faces that look alike, and verifies whether two faces are the same.
  2. Speech processing recognizes speech, translates it into text, and vice versa.
  3. Visual tools analyze visual content for things like inappropriate material or a dominant color scheme.
  4. Language Understanding Intelligent Service (LUIS) understands what users mean when they say or type something using natural, everyday language.

Get more details from the Microsoft blog. We will implement Face recognition API in our sample application. So what is Face API?

Face API is a cloud-based service that provides advanced face algorithms to detect and recognize human faces in images.

Face API provides:

  1. Face Detection
  2. Face Verification
  3. Similar Face Searching
  4. Face Grouping
  5. Face Identification

Get a detailed overview here.

Face Detection

In this post, we are focusing on detecting faces, so before we build the sample application, let’s take a closer look at the API reference (Face API - V1.0). To enable the service, we need to get an authorization key (API Key) by signing up for the service for free. Go to the signup link.
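Under the hood, the client SDK we install later wraps this Detect REST endpoint. As a rough sketch of what that request looks like (the westus region host and the query parameters shown here are illustrative assumptions; check the API reference for your subscription's region):

```javascript
// Sketch: building the raw Face API detect request that the client SDK wraps.
// The region host below is an assumption; substitute the one from your subscription.
function buildDetectRequest(subscriptionKey, imageBytes) {
  return {
    url: 'https://westus.api.cognitive.microsoft.com/face/v1.0/detect' +
         '?returnFaceId=true&returnFaceAttributes=age,gender,smile,glasses',
    method: 'POST',
    headers: {
      // The subscription key from the portal goes in this header on every call
      'Ocp-Apim-Subscription-Key': subscriptionKey,
      // Binary upload; a JSON body with {"url": ...} is used for remote images instead
      'Content-Type': 'application/octet-stream'
    },
    body: imageBytes
  };
}
```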

Sign Up for Face API

Sign up using any one of the following accounts:

  • Microsoft account
  • GitHub
  • LinkedIn

After successfully joining, you will be redirected to the subscriptions page. Request new trials for any of the products by selecting the checkbox.

Process: Click Request new trials > Face - Preview > agree to the terms > Subscribe.

Here, I have attached a screenshot of my subscription. In the Keys column, under Key 1, click “Show” to reveal the API Key, then click “Copy” to copy the key for later use.

The key can be regenerated by clicking “Regenerate”.


So far, we are done with the subscription process. Now, let’s get started with the ASP.NET MVC sample application.

Create Sample Application

Before starting the experiment, let’s make sure that Visual Studio 2015 is installed on the development machine.

Now, let’s open Visual Studio 2015. From the File menu, click on New > Project.


Select ASP.NET Web Application and name it as you like; I named it “FaceAPI_MVC”. Click the OK button to proceed to the next step.

Choose the Empty template for the sample application, select the “MVC” checkbox, then click OK.


In our empty template, let’s now create an MVC controller and generate views by scaffolding.



Add AngularJS

We need to add packages to our sample application. To do that, go to Solution Explorer and right-click on Project > Manage NuGet Packages.


In the Package Manager, search for “angularjs”, select the package, then click "Install".


After installing the “angularjs” package, we need to reference it in our layout page. We also need to define the app root using the “ng-app” directive.


If you are new to AngularJS, please get a basic overview of using AngularJS with an MVC application from here.

Install & Configure the Face API

We need to add the “Microsoft.ProjectOxford.Face” library to our sample application. Search for it in the NuGet Package Manager, then select it and click Install.


Web.Config

In the application’s Web.Config, add a new setting to the appSettings section with the previously generated API Key.

<add key="FaceServiceKey" value="XXXXXXXXXXXXXXXXXXXXXXXXXXX" />

Finally, the appSettings section looks like this:

<appSettings>
  <add key="webpages:Version" value="3.0.0.0" />
  <add key="webpages:Enabled" value="false" />
  <add key="PreserveLoginUrl" value="true" />
  <add key="ClientValidationEnabled" value="true" />
  <add key="UnobtrusiveJavaScriptEnabled" value="true" />
  <add key="FaceServiceKey" value="xxxxxxxxxxxxxxxxxxxxxxxxxxx" /> <!--replace with API Key-->
</appSettings>

MVC Controller

This is where we perform our main operation. First, get the FaceServiceKey value from Web.Config via ConfigurationManager.AppSettings.

private static string ServiceKey = ConfigurationManager.AppSettings["FaceServiceKey"];

Here, in the MVC controller, we have two main methods to perform the face detection operation: an HttpPost method, used for uploading the image file to a folder, and an HttpGet method, used to fetch the uploaded image and detect faces by calling the API service.

Both methods are called from the client script while uploading an image to detect faces. Let’s walk through the steps.
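The calling order matters: the image must be saved on the server before detection can run against it. A minimal sketch of that two-step sequence, independent of Angular (fetchFn is injectable so the flow can be exercised without a running server; the route names match the MVC controller in this article):

```javascript
// Sketch: upload the picked file, then ask the server to run detection.
async function uploadAndDetect(file, fetchFn) {
  const form = new FormData();
  form.append('file', file);

  // Step 1: HttpPost - save the image into the server folder
  const uploadRes = await fetchFn('/FaceDetection/SaveCandidateFiles', {
    method: 'POST',
    body: form
  });
  const upload = await uploadRes.json();
  if (!upload.Status) throw new Error(upload.Message || 'Upload failed');

  // Step 2: HttpGet - detect faces in the image just uploaded
  const detectRes = await fetchFn('/FaceDetection/GetDetectedFaces');
  return detectRes.json();
}
```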

Image Upload

This method is responsible for uploading images. 

[HttpPost]
public JsonResult SaveCandidateFiles()
{
    //Create directory if it does not exist
    //Requested file collection
    //Clear folder
    //Save file in folder
}

Image Detect

This method is responsible for detecting faces in the uploaded images.

[HttpGet]
public async Task<dynamic> GetDetectedFaces()
{
    // Open an existing file for reading
    // Create instance of service client by passing ServiceKey as parameter in constructor
    // Call detection REST API
    // Create & save cropped detected face images
    // Convert detection result into UI binding object
}

This is the code snippet where the Face API is called to detect faces in the uploaded image.

// Open an existing file for reading
var fStream = System.IO.File.OpenRead(FullImgPath);

// Create instance of service client by passing ServiceKey as parameter in constructor
var faceServiceClient = new FaceServiceClient(ServiceKey);

// Call detection REST API
Face[] faces = await faceServiceClient.DetectAsync(fStream, true, true, new FaceAttributeType[] { FaceAttributeType.Gender, FaceAttributeType.Age, FaceAttributeType.Smile, FaceAttributeType.Glasses });
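Each Face object DetectAsync returns corresponds to one JSON object in the underlying REST response, carrying a faceRectangle plus the requested faceAttributes. A hedged sketch of reading the same fields from the raw JSON (the sample payload is illustrative, not real service output):

```javascript
// Sketch: extracting the fields this article uses from a raw detect response.
function summarizeFaces(faces) {
  return faces.map(f => ({
    faceId: f.faceId,
    left: f.faceRectangle.left,
    top: f.faceRectangle.top,
    age: f.faceAttributes.age,
    gender: f.faceAttributes.gender,
    // Mirror of the controller's smile check: any positive score counts as a smile
    smiling: f.faceAttributes.smile > 0.0 ? 'Smile' : 'Not Smile',
    glasses: f.faceAttributes.glasses
  }));
}

// Illustrative payload shaped like one detect-response element
const sample = [{
  faceId: 'c5c24a82-6845-4031-9d5d-978df9175426',
  faceRectangle: { left: 67, top: 83, width: 142, height: 142 },
  faceAttributes: { age: 23.8, gender: 'female', smile: 0.88, glasses: 'NoGlasses' }
}];
```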

Create and save cropped detected face images.

var croppedImg = Convert.ToString(Guid.NewGuid()) + ".jpeg";
var croppedImgPath = directory + '/' + croppedImg;
var croppedImgFullPath = Server.MapPath(directory) + '/' + croppedImg;
CroppedFace = CropBitmap(
              (Bitmap)Image.FromFile(FullImgPath),
              face.FaceRectangle.Left,
              face.FaceRectangle.Top,
              face.FaceRectangle.Width,
              face.FaceRectangle.Height);
CroppedFace.Save(croppedImgFullPath, ImageFormat.Jpeg);
if (CroppedFace != null)
    ((IDisposable)CroppedFace).Dispose();

This method crops images according to the face rectangle values.

public Bitmap CropBitmap(Bitmap bitmap, int cropX, int cropY, int cropWidth, int cropHeight)
{
    // Crop images
}

Finally, the full MVC controller:

public class FaceDetectionController : Controller
{
    private static string ServiceKey = ConfigurationManager.AppSettings["FaceServiceKey"];
    private static string directory = "../UploadedFiles";
    private static string UplImageName = string.Empty;
    private ObservableCollection<vmFace> _detectedFaces = new ObservableCollection<vmFace>();
    private ObservableCollection<vmFace> _resultCollection = new ObservableCollection<vmFace>();

    public ObservableCollection<vmFace> DetectedFaces
    {
        get { return _detectedFaces; }
    }

    public ObservableCollection<vmFace> ResultCollection
    {
        get { return _resultCollection; }
    }

    public int MaxImageSize
    {
        get { return 450; }
    }

    // GET: FaceDetection
    public ActionResult Index()
    {
        return View();
    }

    [HttpPost]
    public JsonResult SaveCandidateFiles()
    {
        string message = string.Empty, fileName = string.Empty, actualFileName = string.Empty;
        bool flag = false;

        //Requested file collection
        HttpFileCollection fileRequested = System.Web.HttpContext.Current.Request.Files;
        if (fileRequested != null)
        {
            //Create new folder
            CreateDirectory();

            //Clear existing files in folder
            ClearDirectory();

            for (int i = 0; i < fileRequested.Count; i++)
            {
                var file = Request.Files[i];
                actualFileName = file.FileName;
                fileName = Guid.NewGuid() + Path.GetExtension(file.FileName);
                int size = file.ContentLength;

                try
                {
                    file.SaveAs(Path.Combine(Server.MapPath(directory), fileName));
                    message = "File uploaded successfully";
                    UplImageName = fileName;
                    flag = true;
                }
                catch (Exception)
                {
                    message = "File upload failed! Please try again";
                }
            }
        }
        return new JsonResult
        {
            Data = new
            {
                Message = message,
                UplImageName = fileName,
                Status = flag
            }
        };
    }

    [HttpGet]
    public async Task<dynamic> GetDetectedFaces()
    {
        ResultCollection.Clear();
        DetectedFaces.Clear();

        var DetectedResultsInText = string.Format("Detecting...");
        var FullImgPath = Server.MapPath(directory) + '/' + UplImageName;
        var QueryFaceImageUrl = directory + '/' + UplImageName;

        if (UplImageName != "")
        {
            //Create new folder
            CreateDirectory();

            try
            {
                // Call detection REST API
                using (var fStream = System.IO.File.OpenRead(FullImgPath))
                {
                    // User picked one image
                    var imageInfo = UIHelper.GetImageInfoForRendering(FullImgPath);

                    // Create instance of service client by passing ServiceKey as parameter in constructor
                    var faceServiceClient = new FaceServiceClient(ServiceKey);
                    Face[] faces = await faceServiceClient.DetectAsync(fStream, true, true, new FaceAttributeType[] { FaceAttributeType.Gender, FaceAttributeType.Age, FaceAttributeType.Smile, FaceAttributeType.Glasses });
                    DetectedResultsInText = string.Format("{0} face(s) has been detected!!", faces.Length);
                    Bitmap CroppedFace = null;

                    foreach (var face in faces)
                    {
                        //Create & save cropped images
                        var croppedImg = Convert.ToString(Guid.NewGuid()) + ".jpeg";
                        var croppedImgPath = directory + '/' + croppedImg;
                        var croppedImgFullPath = Server.MapPath(directory) + '/' + croppedImg;
                        CroppedFace = CropBitmap(
                                        (Bitmap)Image.FromFile(FullImgPath),
                                        face.FaceRectangle.Left,
                                        face.FaceRectangle.Top,
                                        face.FaceRectangle.Width,
                                        face.FaceRectangle.Height);
                        CroppedFace.Save(croppedImgFullPath, ImageFormat.Jpeg);
                        if (CroppedFace != null)
                            ((IDisposable)CroppedFace).Dispose();

                        DetectedFaces.Add(new vmFace()
                        {
                            ImagePath = FullImgPath,
                            FileName = croppedImg,
                            FilePath = croppedImgPath,
                            Left = face.FaceRectangle.Left,
                            Top = face.FaceRectangle.Top,
                            Width = face.FaceRectangle.Width,
                            Height = face.FaceRectangle.Height,
                            FaceId = face.FaceId.ToString(),
                            Gender = face.FaceAttributes.Gender,
                            Age = string.Format("{0:#} years old", face.FaceAttributes.Age),
                            IsSmiling = face.FaceAttributes.Smile > 0.0 ? "Smile" : "Not Smile",
                            Glasses = face.FaceAttributes.Glasses.ToString(),
                        });
                    }

                    // Convert detection result into UI binding object for rendering
                    var rectFaces = UIHelper.CalculateFaceRectangleForRendering(faces, MaxImageSize, imageInfo);
                    foreach (var face in rectFaces)
                    {
                        ResultCollection.Add(face);
                    }
                }
            }
            catch (FaceAPIException)
            {
                //do exception work
            }
        }
        return new JsonResult
        {
            Data = new
            {
                QueryFaceImage = QueryFaceImageUrl,
                MaxImageSize = MaxImageSize,
                FaceInfo = DetectedFaces,
                FaceRectangles = ResultCollection,
                DetectedResults = DetectedResultsInText
            },
            JsonRequestBehavior = JsonRequestBehavior.AllowGet
        };
    }

    public Bitmap CropBitmap(Bitmap bitmap, int cropX, int cropY, int cropWidth, int cropHeight)
    {
        Rectangle rect = new Rectangle(cropX, cropY, cropWidth, cropHeight);
        Bitmap cropped = bitmap.Clone(rect, bitmap.PixelFormat);
        return cropped;
    }

    public void CreateDirectory()
    {
        bool exists = System.IO.Directory.Exists(Server.MapPath(directory));
        if (!exists)
        {
            try
            {
                Directory.CreateDirectory(Server.MapPath(directory));
            }
            catch (Exception ex)
            {
                ex.ToString();
            }
        }
    }

    public void ClearDirectory()
    {
        DirectoryInfo dir = new DirectoryInfo(Path.Combine(Server.MapPath(directory)));
        var files = dir.GetFiles();
        if (files.Length > 0)
        {
            try
            {
                foreach (FileInfo fi in dir.GetFiles())
                {
                    GC.Collect();
                    GC.WaitForPendingFinalizers();
                    fi.Delete();
                }
            }
            catch (Exception ex)
            {
                ex.ToString();
            }
        }
    }
}

UI Helper 

/// <summary>
/// UI helper functions
/// </summary>
internal static class UIHelper
{
    #region Methods

    /// <summary>
    /// Calculate the rendering face rectangle
    /// </summary>
    /// <param name="faces">Detected faces from the service</param>
    /// <param name="maxSize">Image rendering size</param>
    /// <param name="imageInfo">Image width and height</param>
    /// <returns>Face structure for rendering</returns>
    public static IEnumerable<vmFace> CalculateFaceRectangleForRendering(IEnumerable<Microsoft.ProjectOxford.Face.Contract.Face> faces, int maxSize, Tuple<int, int> imageInfo)
    {
        var imageWidth = imageInfo.Item1;
        var imageHeight = imageInfo.Item2;
        float ratio = (float)imageWidth / imageHeight;

        int uiWidth = maxSize;
        int uiHeight = (int)(maxSize / ratio);

        float scale = (float)uiWidth / imageWidth;

        foreach (var face in faces)
        {
            yield return new vmFace()
            {
                FaceId = face.FaceId.ToString(),
                Left = (int)(face.FaceRectangle.Left * scale),
                Top = (int)(face.FaceRectangle.Top * scale),
                Height = (int)(face.FaceRectangle.Height * scale),
                Width = (int)(face.FaceRectangle.Width * scale),
            };
        }
    }

    /// <summary>
    /// Get basic image information for further rendering usage
    /// </summary>
    /// <param name="imageFilePath">Path to the image file</param>
    /// <returns>Image width and height</returns>
    public static Tuple<int, int> GetImageInfoForRendering(string imageFilePath)
    {
        try
        {
            using (var s = File.OpenRead(imageFilePath))
            {
                JpegBitmapDecoder decoder = new JpegBitmapDecoder(s, BitmapCreateOptions.None, BitmapCacheOption.None);
                var frame = decoder.Frames.First();

                // Store image width and height for later rendering
                return new Tuple<int, int>(frame.PixelWidth, frame.PixelHeight);
            }
        }
        catch
        {
            return new Tuple<int, int>(0, 0);
        }
    }
    #endregion Methods
}
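The scaling in CalculateFaceRectangleForRendering maps service-space rectangles into the 450px-wide preview. The same arithmetic in a small JavaScript sketch, for reference:

```javascript
// Sketch: mirror of CalculateFaceRectangleForRendering's scaling arithmetic.
// Rectangles come back in original-image pixels; the preview is maxSize wide.
function scaleFaceRect(rect, imageWidth, maxSize) {
  const scale = maxSize / imageWidth; // same as uiWidth / imageWidth
  // Math.trunc matches C#'s (int) cast, which truncates toward zero
  return {
    left: Math.trunc(rect.left * scale),
    top: Math.trunc(rect.top * scale),
    width: Math.trunc(rect.width * scale),
    height: Math.trunc(rect.height * scale)
  };
}
```

For a 900px-wide source image rendered at 450px, every coordinate is simply halved.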

MVC View 

@{
    ViewBag.Title = "Face Detection";
}

<div ng-controller="faceDetectionCtrl">

    <h3>{{Title}}</h3>
    <div class="loadmore">
        <div ng-show="loaderMoreupl" ng-class="result">
            <img src="~/Content/ng-loader.gif" /> {{uplMessage}}
        </div>
    </div>
    <div class="clearfix"></div>
    <table style="width:100%">
        <tr>
            <th><h4>Select Query Face</h4></th>
            <th><h4>Detection Result</h4></th>
        </tr>
        <tr>
            <td style="width:60%" valign="top">
                <form novalidate name="f1">
                    <input type="file" name="file" accept="image/*" onchange="angular.element(this).scope().selectCandidateFileforUpload(this.files)" required />
                </form>
                <div class="clearfix"></div>
                <div class="loadmore">
                    <div ng-show="loaderMore" ng-class="result">
                        <img src="~/Content/ng-loader.gif" /> {{faceMessage}}
                    </div>
                </div>
                <div class="clearfix"></div>
                <div class="facePreview_thumb_big" id="faceCanvas"></div>
            </td>
            <td style="width:40%" valign="top">
                <p>{{DetectedResultsMessage}}</p>
                <hr />
                <div class="clearfix"></div>
                <div class="facePreview_thumb_small">
                    <div ng-repeat="item in DetectedFaces" class="col-sm-12">
                        <div class="col-sm-3">
                            <img ng-src="{{item.FilePath}}" width="100" />
                        </div>
                        <div class="col-sm-8">
                            <ul>
                                @*<li>FaceId: {{item.FaceId}}</li>*@
                                <li>Age: {{item.Age}}</li>
                                <li>Gender: {{item.Gender}}</li>
                                <li>{{item.IsSmiling}}</li>
                                <li>{{item.Glasses}}</li>
                            </ul>
                        </div>
                        <div class="clearfix"></div>
                    </div>
                </div>
                <div ng-hide="DetectedFaces.length">No face detected!!</div>
            </td>
        </tr>
    </table>
</div>

@section NgScript{
    <script src="~/ScriptsNg/FaceDetectionCtrl.js"></script>
}

Angular Controller 

angular.module('myFaceApp', [])
.controller('faceDetectionCtrl', function ($scope, FileUploadService) {

    $scope.Title = 'Microsoft FaceAPI - Face Detection';
    $scope.DetectedResultsMessage = 'No result found!!';
    $scope.SelectedFileForUpload = null;
    $scope.UploadedFiles = [];
    $scope.SimilarFace = [];
    $scope.FaceRectangles = [];
    $scope.DetectedFaces = [];

    //File Select & Save
    $scope.selectCandidateFileforUpload = function (file) {
        $scope.SelectedFileForUpload = file;
        $scope.loaderMoreupl = true;
        $scope.uplMessage = 'Uploading, please wait....!';
        $scope.result = "color-red";

        //Save File
        var uploaderUrl = "/FaceDetection/SaveCandidateFiles";
        var fileSave = FileUploadService.UploadFile($scope.SelectedFileForUpload, uploaderUrl);
        fileSave.then(function (response) {
            if (response.data.Status) {
                $scope.GetDetectedFaces();
                angular.forEach(angular.element("input[type='file']"), function (inputElem) {
                    angular.element(inputElem).val(null);
                });
                $scope.f1.$setPristine();
                $scope.uplMessage = response.data.Message;
                $scope.loaderMoreupl = false;
            }
        },
        function (error) {
            console.warn("Error: " + error);
        });
    }

    //Get Detected Faces
    $scope.GetDetectedFaces = function () {
        $scope.loaderMore = true;
        $scope.faceMessage = 'Preparing, detecting faces, please wait....!';
        $scope.result = "color-red";

        var fileUrl = "/FaceDetection/GetDetectedFaces";
        var fileView = FileUploadService.GetUploadedFile(fileUrl);
        fileView.then(function (response) {
            $scope.QueryFace = response.data.QueryFaceImage;
            $scope.DetectedResultsMessage = response.data.DetectedResults;
            $scope.DetectedFaces = response.data.FaceInfo;
            $scope.FaceRectangles = response.data.FaceRectangles;
            $scope.loaderMore = false;

            //Reset elements
            $('#faceCanvas_img').remove();
            $('.divRectangle_box').remove();

            //Get element by ID
            var canvas = document.getElementById('faceCanvas');

            //Add image element
            var elemImg = document.createElement("img");
            elemImg.setAttribute("src", $scope.QueryFace);
            elemImg.setAttribute("width", response.data.MaxImageSize);
            elemImg.id = 'faceCanvas_img';
            canvas.append(elemImg);

            //Loop over face rectangles
            angular.forEach($scope.FaceRectangles, function (imgs, i) {
                //Create rectangle for every face
                var divRectangle = document.createElement('div');
                var width = imgs.Width;
                var height = imgs.Height;
                var top = imgs.Top;
                var left = imgs.Left;

                //Style div
                divRectangle.className = 'divRectangle_box';
                divRectangle.style.width = width + 'px';
                divRectangle.style.height = height + 'px';
                divRectangle.style.position = 'absolute';
                divRectangle.style.top = top + 'px';
                divRectangle.style.left = left + 'px';
                divRectangle.style.zIndex = '999';
                divRectangle.style.border = '1px solid #fff';
                divRectangle.style.margin = '0';
                divRectangle.id = 'divRectangle_' + (i + 1);

                //Generate rectangles
                canvas.append(divRectangle);
            });
        },
        function (error) {
            console.warn("Error: " + error);
        });
    };
})
.factory('FileUploadService', function ($http, $q) {
    var fact = {};
    fact.UploadFile = function (files, uploaderUrl) {
        var formData = new FormData();
        angular.forEach(files, function (f, i) {
            formData.append("file", files[i]);
        });
        var request = $http({
            method: "post",
            url: uploaderUrl,
            data: formData,
            withCredentials: true,
            headers: { 'Content-Type': undefined },
            transformRequest: angular.identity
        });
        return request;
    }
    fact.GetUploadedFile = function (fileUrl) {
        return $http.get(fileUrl);
    }
    return fact;
})

Upload images to detect faces

Browse an image from a local folder to upload and detect faces.



Mark faces in the image

Detected faces will be marked with white rectangles.


List detected faces with face information

List and separate the faces with detailed face information.

Summary

You have just seen how to call the Face API to detect faces in an image. I hope this helps you make your applications smarter and more intelligent.

References

  • https://www.microsoft.com/cognitive-services/en-us/face-api/documentation/get-started-with-face-api/GettingStartedwithFaceAPIinCSharp
  • https://www.microsoft.com/cognitive-services/en-us/face-api
  • https://staging.www.projectoxford.ai
  • https://github.com/ShashangkaShekhar/ProjectOxford-ClientSDK

