Visual Applications and Arduino

My previous articles dealt with controlling an Arduino from a WPF app and from Windows Phone. Let's scale things up a bit and bring image processing to the stage. In human-computer interaction, we always look for easier ways to interface with our digital counterpart, or let's say a system. This article will explore the use of computer vision to control an Arduino board.
 
Software required
  1. Microsoft Visual Studio
  2. Arduino IDE 
Hardware required
  1. Arduino board
  2. A webcam
  3. An LED
Libraries used
  1. AForge.NET
Introduction

 
The system discussed here is very simple. We will build a WPF application that does some image processing, and then use the result of that processing to control an Arduino. In this article, we will walk through the concepts of image processing using the AForge.NET library developed by Andrew Kirillov. Using the library, we will develop an application and then go through the same old story of controlling an Arduino from a WPF application. The same technique can be applied to a Windows Forms application as well. Here we have used AForge.NET; the RealSense SDK could be used instead and, in fact, gives more accurate results, but I don't own a RealSense device, so I will stick to a normal webcam and use AForge.NET to do the image processing.
 
Our first course of action is to open Visual Studio and create a new WPF application project. Once we have done that, we will add the following controls from the toolbox:
  1. An Image control for displaying the video (named imbox in the code below)
  2. A button for starting the image processing
  3. A button for stopping the image processing
  4. A button for connecting to the Arduino
  5. A button for disconnecting from the Arduino
  6. A comboBox (named cambox) for displaying the available camera devices
Note: This article won't go into the details of the UI design; only a screenshot with some brief notes is provided, since in-depth UI details would make the article too lengthy.
 
 
Figure 1: Screenshot of the UI (MainWindow.xaml) of the app.
 
In the preceding figure, we have the components for:
  1. Displaying the picture
  2. Displaying the connection status
  3. Buttons for activating the app and connecting to the Arduino
  4. Buttons for stopping the app and disconnecting from the Arduino
Approach
 
Our target is to develop an application that detects motion and, if motion is detected, sends a signal to the Arduino to light up an LED. We will use the AForge.NET library to detect the motion and the SerialPort class to communicate with the Arduino. Since I am dealing with motion detection, I have used some tutorials by Andrew Kirillov on CodeProject. All links are mentioned below in the "Further study" section.
 
Our second step is to write the code for MainWindow.xaml.cs. Before moving on to the main code, you need to add the references for AForge.NET. I have used the NuGet package manager to add the libraries. Add these:
  1. AForge.Imaging
  2. AForge.Math
  3. AForge.Video
  4. AForge.Video.DirectShow
  5. AForge.Vision
Now, after adding these, include the following using directives:
using System;
using System.Collections.Generic;
using System.Linq;
using System.ComponentModel;
using System.Text;
using System.Drawing;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using AForge.Video;
using AForge;
using AForge.Math.Geometry;
using AForge.Video.DirectShow;
using AForge.Imaging;
using AForge.Imaging.Filters;
using System.IO;
using System.Drawing.Imaging;
using System.Threading;
using System.IO.Ports;
Note: Do not forget to create the event handlers for the components in MainWindow.xaml.
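The article does not show the XAML itself, so here is a minimal sketch of what the wiring in MainWindow.xaml might look like. The layout is my own assumption; only the control names and event handler names match the code-behind used in this article.

```xml
<!-- Hypothetical layout sketch; adjust to match your own design -->
<StackPanel>
  <Image Name="imbox" Width="320" Height="240" />
  <ComboBox Name="cambox" />
  <TextBox Name="comno" Text="COM3" />
  <Button Name="start" Content="Start" Click="start_Click" />
  <Button Name="stop" Content="Stop" Click="stop_Click" IsEnabled="False" />
  <Button Content="Connect" Click="Connect_Click" />
  <Button Content="Disconnect" Click="Disconnect_Click" />
</StackPanel>
```

Remember to also hook up the Window's Loaded event, since we will use it below to fill the camera list.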
 
Now declare a SerialPort object.
SerialPort sp = new SerialPort();
Initialize the global variables. 
int height, width;
//Initialize the background frame to null
Bitmap backgroundframe = null;
private FilterInfoCollection videoDevices;
private VideoCaptureDevice videoSource = null;
Create a method named getCamList() that gets the camera devices available on your machine. The camera list will be displayed in the comboBox.
private void getCamList()
{
    videoDevices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
    foreach (FilterInfo device in videoDevices)
    {
        //cambox is the name of the comboBox used in the UI
        cambox.Items.Add(device.Name);
    }
    cambox.SelectedIndex = 0; //default to the first cam
}
In the preceding method, we add the available cameras to our comboBox and select index 0 by default, on the assumption that device 0 is your primary camera. Now go to MainWindow.xaml and create the Window Loaded event. In its handler, we call the preceding method.
private void Window_Loaded(object sender, RoutedEventArgs e)
{
    getCamList();
}
Now we will write code for the click event for the Start button.
private void start_Click(object sender, RoutedEventArgs e)
{
    stop.IsEnabled = true;
    start.IsEnabled = false;
    videoSource = new VideoCaptureDevice(videoDevices[cambox.SelectedIndex].MonikerString);
    videoSource.NewFrame += new NewFrameEventHandler(video_NewFrame);
    //videoSource.DesiredFrameRate = 10;
    videoSource.Start();
}
In the preceding method, most of the work is straightforward: we subscribe to the NewFrame event, which fires for every new frame, and then start the video source. All further processing happens in the video_NewFrame handler. Now browse to video_NewFrame and add these lines first.
//Input image
Bitmap IPimage = (Bitmap)eventArgs.Frame.Clone();
//Initialize the current frame
Bitmap currentframe = IPimage;
In the preceding code, we created a Bitmap named IPimage that is a clone of the input frame, and a second Bitmap, currentframe, initialized to it. Our next job is to create the filters that will be applied to the input image.
Grayscale filter = new Grayscale(0.2125, 0.7154, 0.0721);
//Create difference filter
Difference differenceFilter = new Difference();
//Create threshold filter
IFilter thresholdFilter = new Threshold(15);
In the preceding lines, we created the Grayscale, Difference and Threshold filters, which will be used for the motion detection. The approach is to keep two frames: the background frame and the current frame. Applying the difference filter followed by the threshold filter gives us the newly changed areas, which signify motion.
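To make the idea concrete, here is a minimal, self-contained sketch of what difference-plus-threshold does per pixel, using plain byte arrays instead of AForge bitmaps (the threshold value 15 matches the filter above; the array names are my own):

```csharp
using System;

class DiffThresholdDemo
{
    // Per-pixel |current - background| followed by a binary threshold,
    // which is essentially what the Difference and Threshold filters do.
    public static byte[] DetectMotion(byte[] background, byte[] current, int threshold)
    {
        byte[] motion = new byte[current.Length];
        for (int i = 0; i < current.Length; i++)
        {
            int diff = Math.Abs(current[i] - background[i]);
            motion[i] = (byte)(diff > threshold ? 255 : 0); // white = motion
        }
        return motion;
    }

    static void Main()
    {
        byte[] background = { 10, 10, 10, 10 };
        byte[] current    = { 10, 12, 60, 200 }; // last two pixels changed
        byte[] motion = DetectMotion(background, current, 15);
        Console.WriteLine(string.Join(",", motion)); // 0,0,255,255
    }
}
```

Pixels whose change exceeds the threshold become white in the motion mask; everything else becomes black. AForge does exactly this, but on whole grayscale bitmaps.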
 
Now for comparing the current frame with the background frame, write this code.
int framesize;
width = IPimage.Width;
height = IPimage.Height;
//Calculate the frame size
framesize = width * height;
if (backgroundframe == null)
{
    BitmapData bitmapData = IPimage.LockBits(
        new System.Drawing.Rectangle(0, 0, width, height),
        ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    // apply grayscale filter getting unmanaged image
    backgroundframe = filter.Apply(new UnmanagedImage(bitmapData)).ToManagedImage();
    // unlock source image
    IPimage.UnlockBits(bitmapData);
}
currentframe = filter.Apply(currentframe);
differenceFilter.OverlayImage = backgroundframe;
// apply the filters
Bitmap tmp1 = differenceFilter.Apply(currentframe);
Bitmap tmp2 = thresholdFilter.Apply(tmp1);
In the preceding code, we first build the background frame if it does not exist yet: we lock the source image, apply the grayscale filter, and unlock the source again. We then grayscale the current frame, set the background as the overlay image of the difference filter, and apply the difference and threshold filters to get the motion mask.
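Note that the background frame here is captured once and never updated, so any permanent change in the scene (lighting, a moved chair) will count as motion forever. Kirillov's motion-detection tutorials address this by moving the background slightly toward the current frame on every iteration (AForge provides a MoveTowards filter for this). A plain-array sketch of that idea, with names and values of my own choosing:

```csharp
using System;

class BackgroundUpdateDemo
{
    // Move each background pixel at most `step` levels toward the current
    // frame, so the background slowly adapts to permanent scene changes.
    public static void MoveTowards(byte[] background, byte[] current, int step)
    {
        for (int i = 0; i < background.Length; i++)
        {
            int diff = current[i] - background[i];
            if (diff > step) diff = step;
            if (diff < -step) diff = -step;
            background[i] = (byte)(background[i] + diff);
        }
    }

    static void Main()
    {
        byte[] background = { 100, 100 };
        byte[] current = { 110, 95 };
        MoveTowards(background, current, 2);
        Console.WriteLine(string.Join(",", background)); // 102,98
    }
}
```

With a small step size, a person walking through the frame is still detected, while slow lighting changes get absorbed into the background.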
 
Noise is a factor whenever we deal with sensors, so we need filters to reduce it to an acceptable level. Here the erosion filter is used to do that.
IFilter erosionFilter = new Erosion();
// apply the filter
Bitmap tmp3 = erosionFilter.Apply(tmp2);
Now we have a detector, but it's of little use unless we highlight the detected areas. Let's use the BlobCounter class for that.
BlobCounter blobCounter = new BlobCounter();
//get object rectangles (tmp3 is the noise-reduced image from the erosion filter)
blobCounter.ProcessImage(tmp3);
System.Drawing.Rectangle[] rects = blobCounter.GetObjectsRectangles();
// create graphics object from initial image
Graphics g = Graphics.FromImage(IPimage);
// draw each rectangle
using (System.Drawing.Pen pen = new System.Drawing.Pen(System.Drawing.Color.Red, 1))
{
    foreach (System.Drawing.Rectangle rc in rects)
    {
        if ((rc.Width > 50) && (rc.Height > 50) && (rc.Width < 200) && (rc.Height < 200))
        {
            g.DrawRectangle(pen, rc);
            //Send a serial character to the Arduino for some alarm or for some operation
            if (sp.IsOpen)
                sp.Write("1");
        }
    }
}
g.Dispose();
MemoryStream ms = new MemoryStream();
IPimage.Save(ms, ImageFormat.Bmp);
ms.Seek(0, SeekOrigin.Begin);
BitmapImage bi = new BitmapImage();
bi.BeginInit();
bi.StreamSource = ms;
bi.EndInit();
bi.Freeze();
Dispatcher.BeginInvoke(new ThreadStart(delegate
{
    imbox.Source = bi;
}));
Using the BlobCounter class, we get the rectangles of the areas where motion occurred, along with their count and size. In the preceding code, I have included a condition that if the width and height of a rectangle lie between 50 and 200 pixels, we highlight that area and at the same time send a serial character to the Arduino. That's it: your motion-detection application is ready. Just plug in your Arduino and connect some jumpers to see the magic.
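One practical note: the NewFrame handler fires many times per second, so the loop above can flood the serial port with '1' characters. A simple rate limiter is one way to handle this; the sketch below is my own addition (the class and interval are assumptions, not part of the original article):

```csharp
using System;

class SerialRateLimiter
{
    private DateTime lastSend = DateTime.MinValue;
    private readonly TimeSpan minInterval;

    public SerialRateLimiter(TimeSpan minInterval)
    {
        this.minInterval = minInterval;
    }

    // Returns true only when at least minInterval has passed since the
    // last permitted send, so repeated calls are throttled.
    public bool ShouldSend(DateTime now)
    {
        if (now - lastSend >= minInterval)
        {
            lastSend = now;
            return true;
        }
        return false;
    }
}
```

Inside the rectangle loop you could then keep a `SerialRateLimiter` field and write something like `if (limiter.ShouldSend(DateTime.Now) && sp.IsOpen) sp.Write("1");`.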
 
The rest of the code is simple: just add the code for connecting to the Arduino in the event handler methods for the buttons. If the connection method is unclear, have a look at my previous article, where I described developing a WPF application for Arduino control.
private void stop_Click(object sender, RoutedEventArgs e)
{
    videoSource.Stop();
}

private void Connect_Click(object sender, RoutedEventArgs e)
{
    try
    {
        String portName = comno.Text;
        sp.PortName = portName;
        sp.BaudRate = 9600;
        sp.Open();
        //s1.Text = "Connected";
    }
    catch (Exception)
    {
        MessageBox.Show("Please give a valid port number or check your connection");
    }
}

private void Disconnect_Click(object sender, RoutedEventArgs e)
{
    sp.Close();
}
Congratulations! Your application is ready.
 
I have included the project with this article; you may download and use it, no issues with that. Let's move to the Arduino part. In my previous articles, I already wrote about serial communication on the Arduino, so have a look at those for details. The circuitry is simple: you just need an LED attached to pin 13 of the Arduino (the long lead of the LED goes to pin 13 and the shorter one to the GND pin). Plug in your Arduino over USB and upload the following code.
void setup() {
  // put your setup code here, to run once:
  pinMode(13, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  // put your main code here, to run repeatedly:
  // read only when data is available, so we don't act on the -1
  // that Serial.read() returns when the buffer is empty
  if (Serial.available() > 0) {
    char c = Serial.read();
    if (c == '1')
      digitalWrite(13, HIGH);
  }
  else
    digitalWrite(13, LOW);
}
In the preceding Arduino code, we read a character from the serial port and, if it equals '1', set pin 13 high, which switches on the LED; otherwise the LED is switched off.
 
The image below gives an idea of the output to be displayed. The red boxes are the blobs; using the BlobCounter class, it is possible to get the number of blobs as well as their width and height.
 
 
Further study
 
 

Summary

 
In this article, we saw how image processing can be used in a WPF application and how to combine it with an Arduino to build a very basic motion detector. You can now develop other computer vision applications with Arduino and also apply them in robotics. In case of any problems, comment below or contact me by e-mail.