Android Application Interface With Azure Computer Vision API

Introduction

 
This article demonstrates how to build an Android application that interfaces with the Azure Computer Vision API to perform image analysis.
 

What is Computer Vision API?

 
Computer vision is concerned with the automatic extraction, analysis, and understanding of useful information from a single image. It is one of Azure's Cognitive Services. Computer Vision algorithms can analyze the content of an image in different ways; for example, the Describe feature generates a natural-language caption for a photo, and face detection can find all the faces in an image.
 
Example
 
 
Step 1
 
Create a new project in Android Studio from File >> New >> New Project and fill in all the necessary details. Next, go to Gradle Scripts >> build.gradle (Module: app) and select build.gradle. The app-level Gradle file compiles the code, and the dependencies and build types will appear. Replace that section with the following code.
 
Usage
 
Make sure you've added the JitPack repository to the repository list in your project-level build.gradle.
allprojects {
    repositories {
        maven { url 'https://jitpack.io' }
    }
}
Then add the library dependency in the app-level build.gradle.
dependencies {
    implementation 'com.github.eddydn:EDMTDevCognitiveVision:1.3'
}
Set minSdkVersion to at least level 21.
defaultConfig {
    applicationId "io.github.saravanan_selvaraju.azurevision"
    minSdkVersion 21    // minimum level 21 SDK version
    targetSdkVersion 28
    versionCode 1
    versionName "1.0"
    testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
Step 2
 
Next, go to app >> res >> drawable, select the drawable directory, and paste in the image you want to analyze.
 
 
Next, go to app >> res >> layout >> activity_main.xml. Select the activity_main.xml file and replace its contents with the following code.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:padding="20dp"
    tools:context=".MainActivity">

    <ImageView
        android:id="@+id/image_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <Button
        android:id="@+id/btn_process"
        android:text="Analyze"
        android:layout_alignParentBottom="true"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />

    <TextView
        android:id="@+id/txt_result"
        android:layout_above="@+id/btn_process"
        android:text="Description..."
        android:textAlignment="center"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

</RelativeLayout>
Preview
 
Step 3
 
Next, go to the Azure portal (https://portal.azure.com) and choose Create a resource >> AI + Machine Learning >> Computer Vision. Select Computer Vision, fill in all the necessary details, and click the Create button.
 
 
After the resource is created, get the Computer Vision API key and endpoint URL: click the Keys link, and Key 1 and Key 2 will appear.
 
 
Copy either of the keys (Key 1 or Key 2).
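The key authenticates every request through the Ocp-Apim-Subscription-Key HTTP header, and the endpoint is the base URL for the REST calls. The EDMTDevCognitiveVision library used below wraps this for you, but the following plain-Java sketch shows roughly what happens under the hood. The class name, the image path, and the standalone main-method setup are illustrative assumptions, not part of the app.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DescribeImageSketch {
    public static void main(String[] args) throws Exception {
        String apiKey = "<KEY_1_OR_KEY_2>";
        String endpoint = "https://eastus.api.cognitive.microsoft.com/vision/v1.0";

        // POST the raw image bytes to the analyze route, asking for a description.
        URL url = new URL(endpoint + "/analyze?visualFeatures=Description");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", apiKey);
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setDoOutput(true);

        // Upload the image bytes (the file name here is illustrative).
        try (OutputStream out = conn.getOutputStream()) {
            out.write(Files.readAllBytes(Paths.get("smile.jpg")));
        }

        // Print the JSON analysis result returned by the service.
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}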
 
Android Application Interface with Azure ComputerVision API 
 
Step 4
 
Next, go to app >> java >> package name and select MainActivity.java. The Java code will appear.
 
 
When an asynchronous task is executed, the task goes through 4 steps (a minimal skeleton follows this list):
  • onPreExecute(), invoked on the UI thread before the task is executed. This step is normally used to set up the task, for instance by showing a progress bar in the user interface.
  • doInBackground(Params...), invoked on the background thread immediately after onPreExecute() finishes executing. This step is used to perform background computation that can take a long time. The parameters of the asynchronous task are passed to this step. The result of the computation must be returned by this step and will be passed back to the last step. This step can also use publishProgress(Progress...) to publish one or more units of progress. These values are published on the UI thread, in the onProgressUpdate(Progress...) step.
  • onProgressUpdate(Progress...), invoked on the UI thread after a call to publishProgress(Progress...). The timing of the execution is undefined. This method is used to display any form of progress in the user interface while the background computation is still executing. For instance, it can be used to animate a progress bar or show logs in a text field.
  • onPostExecute(Result), invoked on the UI thread after the background computation finishes. The result of the background computation is passed to this step as a parameter.
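Here is a minimal, generic AsyncTask skeleton (separate from this app's code) showing how the three type parameters map onto these four callbacks. The String/Integer/Long types and the placeholder bodies are assumptions for illustration only.

// AsyncTask<Params, Progress, Result>:
//   Params   -> arguments to doInBackground()
//   Progress -> values passed through publishProgress()/onProgressUpdate()
//   Result   -> return value of doInBackground(), delivered to onPostExecute()
AsyncTask<String, Integer, Long> task = new AsyncTask<String, Integer, Long>() {
    @Override
    protected void onPreExecute() {
        // UI thread: set up, e.g. show a progress indicator.
    }

    @Override
    protected Long doInBackground(String... params) {
        // Background thread: do the long-running work.
        publishProgress(50);          // relayed to onProgressUpdate() on the UI thread
        return (long) params.length;  // relayed to onPostExecute() on the UI thread
    }

    @Override
    protected void onProgressUpdate(Integer... values) {
        // UI thread: reflect progress while the work is still running.
    }

    @Override
    protected void onPostExecute(Long result) {
        // UI thread: consume the final result.
    }
};
task.execute("some input");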
     
MainActivity.java
package io.github.saravanan_selvaraju.azurevision;

import android.app.ProgressDialog;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.AsyncTask;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.text.TextUtils;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;

import com.google.gson.Gson;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

import edmt.dev.edmtdevcognitivevision.Contract.AnalysisResult;
import edmt.dev.edmtdevcognitivevision.Contract.Caption;
import edmt.dev.edmtdevcognitivevision.Rest.VisionServiceException;
import edmt.dev.edmtdevcognitivevision.VisionServiceClient;
import edmt.dev.edmtdevcognitivevision.VisionServiceRestClient;

public class MainActivity extends AppCompatActivity {

    ImageView imageView;
    Button btnProcess;
    TextView txtResult;

    // Key and endpoint copied from the Azure portal (Step 3).
    private final String API_KEY = "b3a1dd91af344f07b2a318507d93c9dc";
    private final String API_LINK = "https://eastus.api.cognitive.microsoft.com/vision/v1.0";

    VisionServiceClient visionServiceClient = new VisionServiceRestClient(API_KEY, API_LINK);

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        imageView = (ImageView) findViewById(R.id.image_view);
        btnProcess = (Button) findViewById(R.id.btn_process);
        txtResult = (TextView) findViewById(R.id.txt_result);

        // Load the image pasted into res/drawable (Step 2) and show it.
        final Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.smile);
        imageView.setImageBitmap(bitmap);

        btnProcess.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                // Compress the bitmap to JPEG and wrap it in an InputStream for upload.
                ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
                bitmap.compress(Bitmap.CompressFormat.JPEG, 100, outputStream);
                final ByteArrayInputStream inputStream = new ByteArrayInputStream(outputStream.toByteArray());

                AsyncTask<InputStream, String, String> visionTask = new AsyncTask<InputStream, String, String>() {
                    ProgressDialog progressDialog = new ProgressDialog(MainActivity.this);

                    @Override
                    protected void onPreExecute() {
                        progressDialog.show();
                    }

                    @Override
                    protected String doInBackground(InputStream... inputStreams) {
                        try {
                            publishProgress("Recognizing...");
                            // Ask the API for a natural-language description only.
                            String[] features = {"Description"};
                            String[] details = {};

                            AnalysisResult result = visionServiceClient.analyzeImage(inputStreams[0], features, details);

                            // Serialize the result so it can be handed to onPostExecute().
                            return new Gson().toJson(result);
                        } catch (IOException e) {
                            e.printStackTrace();
                        } catch (VisionServiceException e) {
                            e.printStackTrace();
                        }
                        return "";
                    }

                    @Override
                    protected void onPostExecute(String s) {
                        progressDialog.dismiss();
                        if (TextUtils.isEmpty(s)) {
                            Toast.makeText(MainActivity.this, "API Return Empty Result", Toast.LENGTH_SHORT).show();
                        } else {
                            // Deserialize the JSON and display the generated caption(s).
                            AnalysisResult result = new Gson().fromJson(s, AnalysisResult.class);
                            StringBuilder resultText = new StringBuilder();
                            for (Caption caption : result.description.captions)
                                resultText.append(caption.text);
                            txtResult.setText(resultText.toString());
                        }
                    }

                    @Override
                    protected void onProgressUpdate(String... values) {
                        progressDialog.setMessage(values[0]);
                    }
                };
                visionTask.execute(inputStream);
            }
        });
    }
}
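For reference, the JSON that Gson deserializes into AnalysisResult has roughly the following shape when only the Description feature is requested (all field values here are illustrative, not real output):

{
  "description": {
    "tags": [ "animal", "outdoor", "mouth" ],
    "captions": [
      { "text": "a dinosaur with its mouth open", "confidence": 0.91 }
    ]
  },
  "requestId": "...",
  "metadata": { "width": 800, "height": 600, "format": "Jpeg" }
}

This is why onPostExecute() walks result.description.captions and appends each caption.text to the TextView.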
Step 5
 
Next, go to app >> manifests >> AndroidManifest.xml. Add the internet permission to the AndroidManifest.xml file, and add the android:usesCleartextTraffic attribute to the existing application element.
<uses-permission android:name="android.permission.INTERNET" />

<application
    android:usesCleartextTraffic="true">  <!-- cleartext traffic must be allowed -->
    ...
</application>
Step 6
 
Next, go to Android Studio and deploy the application. Select an emulator or your Android device with USB debugging enabled. Give it a few seconds to install and set permissions.
 
Run the application in your desired emulator (Shift + F10).
 
 
When I click the Analyze button, I get the output "a dinosaur with its mouth open".
 
Finally, we have successfully created a Computer Vision Android application. Later we will discuss more Android applications. 

