Real Time Baby Monitoring from Raspberry Pi Using SignalR Streaming and Cognitive Vision Service

This article explains how to do real-time streaming using SignalR and detect face attributes using the Cognitive Vision Service API.

Real Time Baby Monitor Chrome Extension - Streaming from Raspberry PI Using SignalR And Cognitive Vision Service

SignalR streaming is the latest addition to the SignalR library. It supports sending fragments of data to clients as soon as they become available, instead of waiting for all of the data to arrive. In this article, we will build a small baby-monitoring app that streams the camera content from a Raspberry Pi using SignalR streaming. The tool also sends a notification to connected clients whenever it detects a baby cry, using the Cognitive Vision Service.

Overview

This tool consists of the following modules.
  • A SignalR streaming hub, which holds the methods for streaming data and the notification service.
  • A .NET Core based worker service that runs on a background thread to detect whether the baby is crying, by capturing photos at a frequent interval and passing them to the Cognitive Vision Service.
  • The Azure-based Cognitive Vision Service, which takes the image input, detects whether any human face exists, analyzes the face attributes, and sends the response back with face attribute values such as smile, sadness, anger, etc.
  • The SignalR client, a JavaScript-based Chrome extension that runs in the Google Chrome browser's background. When the SignalR hub sends notification messages, it shows a popup notification to the user. The user also has the option to view the live stream from the client popup window.

[Architecture diagram]

YouTube Demo

Prerequisites and Dependencies

PiMonitR SignalR Hub

PiMonitRHub is a streaming hub which holds two streaming methods - StartStream and StopStream. When the SignalR client invokes the StartStream method, it calls the camera service to capture a photo and sends it to the client by writing into the ChannelWriter. Whenever an object is written to the ChannelWriter, that object is immediately sent to the client. At the end, the ChannelWriter is completed, via the writer.TryComplete method, to tell the client the stream is closed.

public class PiMonitRHub : Hub
{
    internal static bool _isStreamRunning = false;
    private readonly PiCameraService _piCameraService;

    public PiMonitRHub(PiCameraService piCameraService)
    {
        _piCameraService = piCameraService;
    }

    public ChannelReader<object> StartStream(CancellationToken cancellationToken)
    {
        var channel = Channel.CreateUnbounded<object>();
        _isStreamRunning = true;
        _ = WriteItemsAsync(channel.Writer, cancellationToken);
        return channel.Reader;
    }

    private async Task WriteItemsAsync(ChannelWriter<object> writer, CancellationToken cancellationToken)
    {
        try
        {
            while (_isStreamRunning)
            {
                cancellationToken.ThrowIfCancellationRequested();
                await writer.WriteAsync(await _piCameraService.CapturePictureAsByteArray());
                await Task.Delay(100, cancellationToken);
            }
        }
        catch (Exception ex)
        {
            writer.TryComplete(ex);
        }

        writer.TryComplete();
    }

    public void StopStream()
    {
        _isStreamRunning = false;
        Clients.All.SendAsync("StopStream");
    }
}

PiMonitR Background Service

PiMonitRWorker is a worker service that inherits from BackgroundService. It starts on a background thread when the application starts and executes the logic inside the ExecuteAsync method at a frequent interval until cancellation is requested via the CancellationToken.

internal class PiMonitRWorker : BackgroundService
{
    private readonly IHubContext<PiMonitRHub> _piMonitRHub;
    private readonly PiCameraService _piCameraService;
    private readonly FaceClientCognitiveService _faceClientCognitiveService;

    public PiMonitRWorker(IHubContext<PiMonitRHub> piMonitRHub,
        PiCameraService piCameraService, FaceClientCognitiveService faceClientCognitiveService)
    {
        _piMonitRHub = piMonitRHub;
        _piCameraService = piCameraService;
        _faceClientCognitiveService = faceClientCognitiveService;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            if (!PiMonitRHub._isStreamRunning)
            {
                var stream = await _piCameraService.CapturePictureAsStream();
                if (await _faceClientCognitiveService.IsCryingDetected(stream))
                {
                    await _piMonitRHub.Clients.All.SendAsync("ReceiveNotification", "Baby Crying Detected! You want to start streaming?");
                }
            }
            // Run the background service every 10 seconds
            await Task.Delay(10000);
        }
    }
}

This worker service captures a photo using the camera service and sends it to the Cognitive Service API to detect a baby cry. If a cry is detected, the hub broadcasts the notification message to all connected clients. If a client is already watching the stream, the background service skips cry detection until the user stops watching the stream, to avoid sending duplicate notifications to the users.

Cognitive Vision Service

The Microsoft Cognitive Services API is a very powerful API that provides the power of AI in a few lines of code. There are various Cognitive Service APIs available. In this app, I will be using the Cognitive Vision API to detect face emotion and determine whether the baby is crying. This API analyzes the given photo to detect and recognize human faces, and to analyze emotion-related face attributes such as smile, sadness, etc. Best of all, this service has a free tier which allows 20 calls per minute, so we can get started without paying for anything.

After you register the Cognitive Service in the Azure portal, you will get the API endpoint and the keys from the portal.

You can store the keys and endpoint URL in User Secrets / appsettings / Azure Key Vault so that they can be accessed from the configuration API.
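For local development, the .NET user-secrets tool is one easy option; a minimal sketch, assuming a UserSecretsId has been added to the project file (the placeholder values are yours to fill in; the key names match the IConfiguration lookups in the service below):

```shell
# Run from the project directory (requires a <UserSecretsId> in the .csproj).
dotnet user-secrets set "SubscriptionKey" "<your-face-api-key>"
dotnet user-secrets set "FaceEndPointURL" "https://<your-region>.api.cognitive.microsoft.com"
```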
public class FaceClientCognitiveService
{
    private readonly IFaceClient faceClient;
    private readonly float scoreLimit = 0.5f;
    private readonly ILogger<FaceClientCognitiveService> _logger;

    public FaceClientCognitiveService(IConfiguration config, ILogger<FaceClientCognitiveService> logger)
    {
        _logger = logger;
        faceClient = new FaceClient(new ApiKeyServiceClientCredentials(config["SubscriptionKey"]),
            new System.Net.Http.DelegatingHandler[] { });
        faceClient.Endpoint = config["FaceEndPointURL"];
    }

    public async Task<bool> IsCryingDetected(Stream stream)
    {
        IList<FaceAttributeType> faceAttributes = new FaceAttributeType[]
        {
            FaceAttributeType.Emotion
        };

        // Call the Face API.
        try
        {
            IList<DetectedFace> faceList = await faceClient.Face.DetectWithStreamAsync(stream, false, false, faceAttributes);
            if (faceList.Count > 0)
            {
                var face = faceList[0];
                if (face.FaceAttributes.Emotion.Sadness >= scoreLimit ||
                    face.FaceAttributes.Emotion.Anger >= scoreLimit ||
                    face.FaceAttributes.Emotion.Fear >= scoreLimit)
                {
                    _logger.LogInformation($"Crying Detected with the score of {face.FaceAttributes.Emotion.Sadness}");
                    return true;
                }
                else
                {
                    _logger.LogInformation($"Crying Not Detected with the score of {face.FaceAttributes.Emotion.Sadness}");
                }
            }
            else
            {
                _logger.LogInformation("No Face Detected");
            }
        }
        catch (Exception e)
        {
            _logger.LogError(e.Message);
        }

        return false;
    }
}
  • Install the Microsoft.Azure.CognitiveServices.Vision.Face NuGet package to get the FaceClient.
  • Before making the API call, set the face attribute parameters to return only the Emotion attribute, to avoid returning all the data.
  • The Face API returns many face attributes for an identified face, but for our app we use only the emotion attributes Sadness, Anger, and Fear.
  • If any one of the above-mentioned attributes is at or above the 0.5 limit, this method returns true.
  • I came up with 0.5 as the limit for these attributes; however, you can change the value or attributes to whatever works for your use case. I have tested with a few crying images and this limit worked fine for all those cases.
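The decision rule in the bullets above can be distilled into a tiny runnable sketch (plain JavaScript purely for illustration; the property names mirror the Emotion attributes but are otherwise mine):

```javascript
// Hypothetical sketch of the thresholding rule: a frame counts as
// "crying" when any of the three emotion scores reaches the 0.5 limit.
const scoreLimit = 0.5;

function isCryingDetected(emotion) {
  return emotion.sadness >= scoreLimit ||
         emotion.anger >= scoreLimit ||
         emotion.fear >= scoreLimit;
}

console.log(isCryingDetected({ sadness: 0.82, anger: 0.01, fear: 0.05 })); // true
console.log(isCryingDetected({ sadness: 0.10, anger: 0.02, fear: 0.03 })); // false
```

Tuning the limit per attribute (e.g. a lower threshold for Fear) is a simple extension if one emotion turns out to be a stronger signal for your use case.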

PiMonitR Camera Service

I am running my Raspberry Pi on Raspbian OS, which is based on the Linux ARM architecture. The camera module has a built-in command-line tool called raspistill to take pictures; however, I wanted a C# wrapper library to capture pictures from the Pi, and found this wonderful open source project called MMALSharp, which is an unofficial C# API for the Raspberry Pi camera and supports Mono 4.x and .NET Standard 2.0.

I installed the MMALSharp NuGet package and initialized the singleton object in the constructor so that it can be reused while streaming continuous shots. I also set the picture resolution to 640 × 480, because the default resolution is very high and the file size is huge as well.

public class PiCameraService
{
    public MMALCamera MMALCamera;
    private readonly string picStoragePath = "/home/pi/images/";
    private readonly string picExtension = "jpg";

    public PiCameraService()
    {
        MMALCamera = MMALCamera.Instance;
        // Set an average resolution to reduce the file size
        MMALCameraConfig.StillResolution = new Resolution(640, 480);
    }

    public async Task<byte[]> CapturePictureAsByteArray()
    {
        var fileName = await CapturePictureAndGetFileName();

        string filePath = Path.Join(picStoragePath, $"{fileName}.{picExtension}");
        byte[] resultData = await File.ReadAllBytesAsync(filePath);

        // Delete the captured picture from Pi storage
        File.Delete(filePath);
        return resultData;
    }

    public async Task<Stream> CapturePictureAsStream()
    {
        return new MemoryStream(await CapturePictureAsByteArray());
    }

    private async Task<string> CapturePictureAndGetFileName()
    {
        string fileName = null;
        using (var imgCaptureHandler = new ImageStreamCaptureHandler(picStoragePath, picExtension))
        {
            await MMALCamera.TakePicture(imgCaptureHandler, MMALEncoding.JPEG, MMALEncoding.I420);
            fileName = imgCaptureHandler.GetFilename();
        }
        return fileName;
    }
}

Publish Server App to Raspberry Pi

Now that we are done with the server-side code, our next step is to deploy it to the Raspberry Pi. There are two different ways to publish the app:

  • Framework-dependent - Relies on the presence of a shared, system-wide version of .NET Core on the target system.
  • Self-contained - Doesn't rely on the presence of shared components on the target system. All components, including both the .NET Core libraries and the .NET Core runtime, are included with the application and are isolated from other .NET Core applications.
I used a self-contained deployment so that all the dependencies are part of the deployment. The following publish command will generate the final output with all the dependencies.
dotnet publish -r linux-arm
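The deployment mode can also be spelled out explicitly; a variant of the same command (the Release configuration and the output folder are my own choices, not from the project):

```shell
# Self-contained publish for 32-bit ARM Linux (Raspberry Pi / Raspbian)
dotnet publish -c Release -r linux-arm --self-contained true -o ./publish-pi
```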

You will find the final output in the linux-arm/publish folder under the bin folder. I used network file sharing to copy the files to the Raspberry Pi.

After all the files were copied, I connected to my Raspberry Pi through a remote connection and ran the app from the terminal.

PiMonitR Chrome Extension SignalR Client

I decided to go with a Chrome extension as my SignalR client because it supports real-time notifications and it doesn't need any server to host the app. This client app has a background script which initializes the SignalR connection with the hub and runs in the background to receive notifications from the hub. It also has a popup window with start and stop streaming buttons to invoke the streaming and view the streaming output.

manifest.json

manifest.json defines the background scripts, icons, and permissions that are needed for this extension.

{
  "name": "Pi MonitR Client",
  "version": "1.0",
  "description": "Real time streaming from Raspberry Pi using SignalR",
  "browser_action": {
    "default_popup": "popup.html",
    "default_icon": {
      "16": "images/16.png",
      "32": "images/32.png",
      "48": "images/48.png",
      "128": "images/128.png"
    }
  },
  "icons": {
    "16": "images/16.png",
    "32": "images/32.png",
    "48": "images/48.png",
    "128": "images/128.png"
  },
  "permissions": [
    "tabs",
    "notifications",
    "http://*/*"
  ],
  "background": {
    "persistent": true,
    "scripts": [
      "signalr.js", "background.js"
    ]
  },
  "manifest_version": 2,
  "web_accessible_resources": [
    "images/*.png"
  ]
}

background.js

// The following sample code uses modern ECMAScript 6 features
// that aren't supported in Internet Explorer 11.
// To convert the sample for environments that do not support ECMAScript 6,
// such as Internet Explorer 11, use a transpiler such as
// Babel at http://babeljs.io/.
var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) {
    return new (P || (P = Promise))(function (resolve, reject) {
        function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }
        function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } }
        function step(result) { result.done ? resolve(result.value) : new P(function (resolve) { resolve(result.value); }).then(fulfilled, rejected); }
        step((generator = generator.apply(thisArg, _arguments || [])).next());
    });
};

const hubUrl = "http://pi:5000/hubs/piMonitR";

var connection = new signalR.HubConnectionBuilder()
    .withUrl(hubUrl, { logger: signalR.LogLevel.Information })
    .build();

// We need an async function in order to use await, but we want this code to run immediately,
// so we use an "immediately-executed async function"
(() => __awaiter(this, void 0, void 0, function* () {
    try {
        yield connection.start();
    }
    catch (e) {
        console.error(e.toString());
    }
}))();

connection.on("ReceiveNotification", (message) => {
    new Notification(message, {
        icon: '48.png',
        body: message
    });
});

chrome.runtime.onConnect.addListener(function (externalPort) {
    externalPort.onDisconnect.addListener(function () {
        connection.invoke("StopStream").catch(err => console.error(err.toString()));
    });
});

background.js initiates the SignalR connection with the hub at the defined URL. We also need signalr.js in the same folder; to get it, install the SignalR npm package and copy signalr.js from the node_modules\@aspnet\signalr\dist\browser folder.

npm install @aspnet/signalr

This background script keeps our SignalR client active; when it receives a notification from the hub, it shows a Chrome notification like the one below.

popup.html
<!doctype html>
<html>

<head>
    <title>Pi MonitR Dashboard</title>
    <script src="popup.js" type="text/javascript"></script>
</head>

<body>
    <h1>Pi MonitR - Stream Dashboard</h1>
    <div>
        <input type="button" id="streamStartButton" value="Start Streaming" />
        <input type="button" id="streamStopButton" value="Stop Streaming" disabled />
    </div>
    <ul id="logContent"></ul>
    <img id="streamContent" width="700" height="400" src="" />
</body>

</html>

popup.html shows the stream content when the "Start Streaming" button is clicked, and completes the stream when the "Stop Streaming" button is clicked.

popup.js
var __awaiter = chrome.extension.getBackgroundPage().__awaiter;
var connection = chrome.extension.getBackgroundPage().connection;

document.addEventListener('DOMContentLoaded', function () {
    const streamStartButton = document.getElementById('streamStartButton');
    const streamStopButton = document.getElementById('streamStopButton');
    const streamContent = document.getElementById('streamContent');
    const logContent = document.getElementById('logContent');

    streamStartButton.addEventListener("click", (event) => __awaiter(this, void 0, void 0, function* () {
        streamStartButton.setAttribute("disabled", "disabled");
        streamStopButton.removeAttribute("disabled");
        try {
            connection.stream("StartStream")
                .subscribe({
                    next: (item) => {
                        streamContent.src = "data:image/jpg;base64," + item;
                    },
                    complete: () => {
                        var li = document.createElement("li");
                        li.textContent = "Stream completed";
                        logContent.appendChild(li);
                    },
                    error: (err) => {
                        var li = document.createElement("li");
                        li.textContent = err;
                        logContent.appendChild(li);
                    },
                });
        }
        catch (e) {
            console.error(e.toString());
        }
        event.preventDefault();
    }));

    streamStopButton.addEventListener("click", function (event) {
        streamStopButton.setAttribute("disabled", "disabled");
        streamStartButton.removeAttribute("disabled");
        connection.invoke("StopStream").catch(err => console.error(err.toString()));
        event.preventDefault();
    });

    connection.on("StopStream", () => {
        var li = document.createElement("li");
        li.textContent = "stream closed";
        logContent.appendChild(li);
        streamStopButton.setAttribute("disabled", "disabled");
        streamStartButton.removeAttribute("disabled");
    });
});

When the user clicks the Start Streaming button, the client invokes the streaming hub method (StartStream) and subscribes to it. Whenever the hub sends data, the client receives the content and sets that value directly on the image's src attribute.

streamContent.src = "data:image/jpg;base64," + item;
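This works because SignalR's default JSON protocol serializes a C# byte[] as a base64 string, so each streamed frame arrives as text that can be dropped straight into a data URL. A minimal Node-runnable sketch (the helper name is mine, not from the extension code):

```javascript
// Each streamed frame is a C# byte[], delivered to JavaScript as a
// base64 string by SignalR's JSON protocol; building the data URL is
// plain string concatenation. (Helper name is illustrative.)
function frameToDataUrl(base64Jpeg) {
  return "data:image/jpg;base64," + base64Jpeg;
}

// Simulate a received frame: a few JPEG header bytes, base64-encoded.
const fakeFrame = Buffer.from([0xff, 0xd8, 0xff, 0xe0]).toString("base64");
console.log(frameToDataUrl(fakeFrame)); // "data:image/jpg;base64,/9j/4A=="
```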

When the user clicks the Stop Streaming button, the client invokes the StopStream hub method, which sets the _isStreamRunning field to false and thereby completes the stream.

Conclusion

This was a fun project I built to experiment with SignalR streaming, and it worked as I expected. Soon, we are going to have a lot more new stuff in SignalR (such as IAsyncEnumerable support), which will make it even better for many other real-time scenarios. I have uploaded the source code to my GitHub repository.

Happy Coding!!!