Cognitive Services In Xamarin.Forms

Before jumping into Cognitive Services, let us first get an idea of what they can do in real life. Take the example of Sakib Sheikh, who is blind in both eyes and lost his vision at the age of seven. Thanks to Microsoft Cognitive Services, he can still perceive the emotions of the people around him and get spoken descriptions of the scenes he is looking at. Today, we are talking about these Cognitive Services from Microsoft, and later we will implement them in Xamarin.Forms so that every developer can bring them into their own apps.

Let’s get started. The very first thing you need to be familiar with is RESTful APIs; then simply head over to https://www.microsoft.com/cognitive-services, where Microsoft exposes a set of RESTful APIs.

In short, we will consume these REST APIs in Xamarin.Forms by using the Vision NuGet package. So, switch to Visual Studio 2015 or Xamarin Studio, create a new Xamarin.Forms project, and add a few lines of XAML to your page, i.e.

  <StackLayout Padding="50">
      <Button x:Name="BtnImage" Text="Pick Image" Clicked="BtnImage_OnClicked" />
      <Image x:Name="Img" WidthRequest="200" HeightRequest="200" />
      <Label x:Name="LblResult" FontSize="32" />
  </StackLayout>
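For orientation, here is a minimal sketch of the code-behind that the following snippets plug into. The namespace and page name (CognitiveSample, MainPage) and the visionKey constant are my own assumptions for illustration; only the Clicked handler name is taken from the XAML above, and the required using directives are listed after the NuGet packages below.

  namespace CognitiveSample
  {
      public partial class MainPage : ContentPage
      {
          // Placeholder for your Computer Vision subscription key.
          private const string visionKey = "<your-vision-api-key>";

          public MainPage()
          {
              InitializeComponent();
          }

          // Wired up to the "Pick Image" button declared in the XAML above.
          private async void BtnImage_OnClicked(object sender, EventArgs e)
          {
              // The camera and Vision snippets from the rest of the article go here.
          }
      }
  }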

Now, add the Microsoft.ProjectOxford.Vision NuGet package to the project.

Afterwards, add one more package for the camera, i.e. Xam.Plugin.Media. The using directives these two packages require are listed below.
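As a reference, these are the using directives the snippets in this article rely on (System.Linq is needed for the First() call on the captions later on):

  using System;                          // EventArgs
  using System.Linq;                     // First() / FirstOrDefault() on captions
  using Microsoft.ProjectOxford.Vision;  // VisionServiceClient
  using Plugin.Media;                    // CrossMedia
  using Plugin.Media.Abstractions;       // StoreCameraMediaOptions
  using Xamarin.Forms;                   // ContentPage, ImageSource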

Now, initialize the camera plugin and add a few lines of code so that you can access the camera on mobile and desktop devices.

  // Get the cross-platform media plugin and initialize it.
  var media = Plugin.Media.CrossMedia.Current;
  await media.Initialize();
  // Launch the camera; the photo is not saved to the device album.
  var file = await media.TakePhotoAsync(new StoreCameraMediaOptions { SaveToAlbum = false });
  // Display the captured photo in the Image control.
  Img.Source = ImageSource.FromStream(() => file.GetStream());
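In practice, the camera might not be available (for example, on some emulators) and the user can cancel the capture, in which case TakePhotoAsync returns null. A hedged variation of the same step with those guards, which is not part of the original walkthrough, could look like this:

  await CrossMedia.Current.Initialize();

  // Bail out if no camera is available or photo capture is not supported.
  if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
  {
      LblResult.Text = "No camera available.";
      return;
  }

  var file = await CrossMedia.Current.TakePhotoAsync(
      new StoreCameraMediaOptions { SaveToAlbum = false });

  // The user may have cancelled the capture.
  if (file == null)
      return;

  Img.Source = ImageSource.FromStream(() => file.GetStream());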

It’s time to create a Vision client, which you use much like an HttpClient.

So, what I am going to do is add a few lines of code that call the Cognitive Services endpoint, wait for the response, and then populate the result into the Label.

  // visionKey is your Computer Vision subscription key.
  var visionclient = new VisionServiceClient(visionKey);
  // Send the photo to the Describe endpoint and show the first caption.
  var result = await visionclient.DescribeAsync(file.GetStream());
  LblResult.Text = result.Description.Captions.First().Text;
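The Describe response also carries a confidence score for each caption, and the captions collection can be empty. Here is a small sketch, not from the original article, that accounts for both:

  var caption = result.Description.Captions.FirstOrDefault();
  // Show the caption plus how confident the service is, or a fallback message.
  LblResult.Text = caption != null
      ? $"{caption.Text} ({caption.Confidence:P0})"
      : "No description returned.";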

It's time to call the Dispose method to avoid memory leaks in your application, using the line given below.

  file.Dispose();
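If the Vision call throws (for example, a wrong key or no network), the Dispose line above would never run. One way to guarantee the cleanup, shown here as a sketch rather than the article's own code, is a try/finally block:

  try
  {
      var result = await visionclient.DescribeAsync(file.GetStream());
      LblResult.Text = result.Description.Captions.First().Text;
  }
  finally
  {
      // Release the captured photo even if the Vision call fails.
      file.Dispose();
  }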

Now, all you need is a device and an internet connection to call the Microsoft Cognitive Services API and get the JSON response back.

Let us see how it looks when I take a picture with the camera and what comes back in the JSON response.

[Screenshot: the captured photo together with the caption returned by the Vision API]

