Highlighting Faces In Uploaded Images In ASP.NET Web Applications

Download the source code from MSDN galleries

Introduction and Background

Previously, I had been thinking about how we could find the faces in an uploaded image. So, why not create a small module that automatically finds the faces and renders them when we load the images on our web pages? It turned out to be a fairly easy task, and I would love to share what I did and how. The entire procedure may look a bit complex but, trust me, it is really simple and straightforward. However, you may need to know a few frameworks beforehand, as I won't be covering most of the in-depth material behind the scenario, such as the computer vision used to perform actions like face detection.

In this post, you will learn the basics of several things:
  1. Performing computer vision operations — the most basic one used for finding the faces in the images.
  2. Sending and receiving content from the web server based on the image data uploaded.
  3. Using the canvas HTML element to render the results.

I won’t be guiding you through every step of the computer vision processes required to perform facial detection in images. For that, I would like to ask you to read this post of mine:
Facial biometric authentication on your connected devices.

In this post, I’ll cover the basics of how to detect faces in an ASP.NET web application, how to pass the characteristics of the faces found in the images, how to use those properties, and how to render the face boxes over the image in a canvas.

Making the web app face-aware

Two steps are needed to make our web applications face-aware and to determine whether there is a face in an uploaded image. There are many uses of this technique, which I will list in the final words section below.
The first step is to configure our web application to consume the image and hand it over for processing. Our image-processing toolkit finds the faces and their locations. This part then forwards the response to the client side, where the client itself renders the face locations on the image.

In this sample, I am going to use a canvas element to draw the boxes, though this could also be done with multiple div containers holding span elements, rendered over the actual image with their positions set to absolute.
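As a minimal sketch of that div-based alternative (the function name and the red border are my own choices, not part of the sample project), a helper can turn a face location in the same { X, Y, Width, Height } shape the server returns into the inline style of an absolutely positioned overlay box:

```javascript
// Build the inline style for an absolutely positioned box that can be
// overlaid on the image to highlight one detected face.
function faceBoxStyle(face) {
    return "position: absolute;" +
        " left: " + face.X + "px;" +
        " top: " + face.Y + "px;" +
        " width: " + face.Width + "px;" +
        " height: " + face.Height + "px;" +
        " border: 2px solid red;";
}

// Example: a face at (10, 20) sized 100x120.
console.log(faceBoxStyle({ X: 10, Y: 20, Width: 100, Height: 120 }));
```

Each div styled this way would be appended to a relatively positioned container that wraps the image.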

First of all, let us program the ASP.NET web application to get the image, process the image, find the faces, and generate the response to be collected on the client-side.

Programming file processing part

On the server side, we will use the Emgu CV library. It is one of the most widely used C# wrappers for the OpenCV library, and I will use it to program the face detector in ASP.NET. The benefits are:
  1. It is a very light-weight library.
  2. The entire processing can take less than a second or two, and the views are generated a moment later.
  3. It is better than most other computer vision libraries, as it is based on OpenCV.

First, we create a new Controller in our web application to handle the requests for this purpose. We will later add the POST handler to the controller action to upload and process the image. You can name the controller anything; I named it “FindFacesController” in my own application. To create a new Controller, follow these steps:

Right click Controllers folder → Select Add → Select Controller…, to add a new controller.
Add the desired name to it and proceed.
By default, this controller is given an action, Index, and a folder with the same name is created in the Views folder.
First of all, open the Views folder and add the HTML content for which we will later write the back-end part. In this example project, we need an HTML form through which users can upload files to the server for processing.

The following HTML snippet does the job:
<form method="post" enctype="multipart/form-data" id="form">
   <input type="file" name="image" id="image" onchange="this.form.submit()" />
</form>
You can see that the HTML form is enough by itself. A special event handler is attached to the input element, which makes the form submit automatically once the user selects an image. That is because we only want to process one image at a time. I could have written a standalone function, but for a one-liner like this the inline call is the simpler way to do it.
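For completeness, the standalone equivalent of that inline handler could look roughly like this (the function name is hypothetical; only the element ids come from the form snippet above):

```javascript
// Submit the owning form as soon as a file has actually been selected.
function submitOnSelect(input) {
    // Guard against browsers firing "change" with no file chosen.
    if (!input.files || input.files.length > 0) {
        input.form.submit();
    }
}

// Wire it up when a DOM is available (guarded so the sketch is self-contained).
if (typeof document !== "undefined") {
    document.getElementById("image").addEventListener("change", function () {
        submitOnSelect(this);
    });
}
```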

Now, for the ASP.NET part, I will use the HttpMethod property of the Request to determine whether the request was to upload an image or just to load the page.
if (Request.HttpMethod == "POST") {
   // Image upload code here.
}
Now, before I actually write the code, I want to explain what we want to do in this example. The steps to be performed are as follows:
  1. We need to save the image that was uploaded in the request.
  2. We would then get the file that was uploaded, and process that image using Emgu CV.
  3. We would get the locations of the faces in the image and then serialize them to JSON string, using Json.NET library.
  4. The later steps would be handled on the client side using JavaScript code.
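Step 3 in miniature: the server serializes the face rectangles to JSON in the same { X, Y, Width, Height } shape as the Location helper shown below. The server does this with Json.NET, but the shape of the payload is easiest to see in JavaScript, since that is exactly what the client parses (the coordinate values here are made up for illustration):

```javascript
// Sample face rectangles, in the shape the server produces.
var positions = [
    { X: 64, Y: 48, Width: 120, Height: 120 },
    { X: 300, Y: 52, Width: 110, Height: 110 }
];

// Serialize, as the server does before embedding the string in the view.
var json = JSON.stringify(positions);
console.log(json);

// The client-side code later parses this string back into objects to draw boxes.
var parsed = JSON.parse(json);
console.log(parsed.length); // 2
```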

Before I actually write the code, let me first show you the helper objects that I have created. I needed two: one to store the location of a face, and the other to perform the facial detection in the images.

// Namespaces these helpers depend on.
using System.Collections.Generic;
using System.Drawing;
using System.Web;
using Emgu.CV;

// Location of a single face on the image.
public class Location {
    public double X { get; set; }
    public double Y { get; set; }
    public double Width { get; set; }
    public double Height { get; set; }
}

// Face detector helper object.
public class FaceDetector {
    public static List<Rectangle> DetectFaces(Mat image) {
        List<Rectangle> faces = new List<Rectangle>();
        var facesCascade = HttpContext.Current.Server.MapPath("~/haarcascade_frontalface_default.xml");
        using (CascadeClassifier face = new CascadeClassifier(facesCascade)) {
            using (UMat ugray = new UMat()) {
                CvInvoke.CvtColor(image, ugray, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);
                // Normalize brightness and increase the contrast of the image.
                CvInvoke.EqualizeHist(ugray, ugray);
                // Detect the faces in the grayscale image and store the
                // locations as rectangles.
                Rectangle[] facesDetected = face.DetectMultiScale(ugray, 1.1, 10, new Size(20, 20));
                faces.AddRange(facesDetected);
            }
        }
        return faces;
    }
}
These two objects will be used as follows: one performs the processing, and the other carries the results to the client-side code that renders the boxes on the faces. The action code that I used for this is given below:
public ActionResult Index() {
    if (Request.HttpMethod == "POST") {
        ViewBag.ImageProcessed = true;
        // Try to process the image.
        if (Request.Files.Count > 0) {
            // There will be just one file.
            var file = Request.Files[0];
            var fileName = Guid.NewGuid().ToString() + ".jpg";
            file.SaveAs(Server.MapPath("~/Images/" + fileName));
            // Load the saved image for native processing using Emgu CV.
            var bitmap = new Bitmap(Server.MapPath("~/Images/" + fileName));
            var faces = FaceDetector.DetectFaces(new Image<Bgr, byte>(bitmap).Mat);
            // If faces were found.
            if (faces.Count > 0) {
                ViewBag.FacesDetected = true;
                ViewBag.FaceCount = faces.Count;
                var positions = new List<Location>();
                foreach (var face in faces) {
                    // Add the positions.
                    positions.Add(new Location {
                        X = face.X,
                        Y = face.Y,
                        Width = face.Width,
                        Height = face.Height
                    });
                }
                ViewBag.FacePositions = JsonConvert.SerializeObject(positions);
            }
            ViewBag.ImageUrl = fileName;
        }
    }
    return View();
}
The code above does the entire processing of the images we upload to the server. It is responsible for processing the images, detecting the faces, and then returning the results for the views to render in HTML.

Programming client-side canvas elements

You could present the result in a modal popup; I used the canvas element on the page itself because I just wanted to demonstrate the coding technique. As we have seen, the controller action generates a few ViewBag properties that we can later use in the HTML content to render the results of the processing.

The View content is shown below: 
@if (ViewBag.ImageProcessed == true) {
    if (ViewBag.FacesDetected == true) {
        @* Show the image here; it stays hidden because the canvas renders it. *@
        <img src="~/Images/@ViewBag.ImageUrl" alt="Image" id="imageElement"
             style="display: none; height: 0; width: 0;" />
        <p><b>@ViewBag.FaceCount</b>
            @if (ViewBag.FaceCount == 1) { <text><b>face</b> was</text> }
            else { <text><b>faces</b> were</text> }
            detected in the following image.</p>
        <p>A <code>canvas</code> element is being used to render the image and then
            rectangles are drawn on top of that canvas to highlight the faces in the image.</p>
        <canvas id="faceCanvas"></canvas>
        <!-- HTML content has been loaded, run the script now. -->
        <script>
            // Get the canvas and the hidden image element.
            var canvas = document.getElementById("faceCanvas");
            var img = document.getElementById("imageElement");
            canvas.height = img.height;
            canvas.width = img.width;
            var myCanvas = canvas.getContext("2d");
            myCanvas.drawImage(img, 0, 0);

            @if (ViewBag.ImageProcessed == true && ViewBag.FacesDetected == true) {
                <text>
                img.style.display = "none";
                var facesFound = true;
                // Html.Raw emits the JSON array literal directly into the script.
                var facePositions = @Html.Raw(ViewBag.FacePositions);
                </text>
            }

            if (facesFound) {
                // Draw a rectangle over every face that was found.
                for (var face in facePositions) {
                    myCanvas.lineWidth = 2;
                    myCanvas.strokeStyle = selectColor(face);
                    myCanvas.strokeRect(facePositions[face]["X"], facePositions[face]["Y"],
                                        facePositions[face]["Width"], facePositions[face]["Height"]);
                }
            }

            function selectColor(iteration) {
                // A zero iteration would always produce black, so bump it to 1.
                if (iteration == 0) {
                    iteration = 1;
                }
                var step = 42.5;
                var randomNumber = Math.floor(Math.random() * 3);
                // Select the color channels.
                var red = Math.floor((step * iteration * Math.floor(Math.random() * 3)) % 255);
                var green = Math.floor((step * iteration * Math.floor(Math.random() * 3)) % 255);
                var blue = Math.floor((step * iteration * Math.floor(Math.random() * 3)) % 255);
                // Zero one of the channels, randomly.
                switch (randomNumber) {
                    case 0:
                        red = 0;
                        break;
                    case 1:
                        green = 0;
                        break;
                    case 2:
                        blue = 0;
                        break;
                }
                // Return the CSS color string.
                return "rgb(" + red + ", " + green + ", " + blue + ")";
            }
        </script>
    } else {
        <p>No faces were found in the following image.</p>
        @* Show the image here. *@
        <img src="~/Images/@ViewBag.ImageUrl" alt="Image" id="imageElement" />
    }
}
This is the client-side code and executes only after an image has been uploaded. Now, let us review what our application is capable of doing at the moment.

Running the application for testing

Since we have developed the application, it is time to run it and see whether it works as expected. The following are the results generated for multiple images that were passed to the server.
The first image shows the default HTML page presented to users when they visit for the first time. They then upload an image, and the application processes its content. The following images show the results.
I uploaded my own photo and it found my face. As shown above in bold text, “1 face was detected”. It also renders a box around the area where the face was detected.


This article would have never been complete, without Eminem being a part of it! Love this guy.

Secondly, I wanted to show how this application handles multiple faces. At the top, it shows “5 faces were detected”, and it renders 5 boxes around the areas where faces were detected. I also happen to like the photo, as I am a fan of Batman too.
This image shows what happens when the image contains no detected face (there are many reasons a face might not be detected, such as hair covering the face, glasses, etc.). Here, I used three company logos, and the system reported that there were no faces in the image. The image was still rendered, but no boxes were drawn since no faces were found.

Final words

This is it for this post. This method is useful in many facial-detection software applications, and anywhere you want users to upload a photo of their face. This is an ASP.NET web application project, which means you can use this code in your own web applications too. The library usage is also very simple and straightforward, as you have seen in the article above.

There are other uses too, such as cases where you want to analyze people’s faces to detect their emotions, locations, and other parameters.
