Using Azure Cognitive Service Computer Vision AI To Read Text From An Image

Introduction

 
Microsoft Azure has a lot to offer when it comes to Cognitive Services, and trying them out is really fun. In this article, we are going to see how we can detect and read text from an image and show it on the screen using the Computer Vision Cognitive Service. Computer Vision has its own machine learning models running behind it, which helps us do our tasks in an easy and effective manner. It uses Optical Character Recognition (OCR) to extract the words or text into a readable format. All we have to do is use the service and write some custom code as per our requirements. Sounds good? If yes, let's skip the introduction and jump to the real fun. You can always read this article on my blog here.
 

Background

 
I love quotes, and I strongly believe that a single quote can sometimes make an impact on your life. One of my hobbies is collecting images which contain quotes. Now, with the help of the Azure Computer Vision Cognitive Service, I thought of creating an application which can get the quotes from the images and save them. You can easily think of many other uses for this service; below are some of them.
  1. A translator, which scans an image, reads the text, and translates it
  2. Document creation from images, which will save time
  3. A Google Glass-style app, which captures what you see and translates the contents into your native language
  4. A digital medical prescription created from the actual one
Imagine that you need to create a solution which can save all of your medical prescription contents in a database table, so that you can easily sort/filter them and show them in a UI. You could even create a mobile application to do that. The possibilities are endless. Now let's build one of them.
 
Prerequisites
 
Before we start, we should have the following things on hand.
  1. A valid Azure Subscription
  2. A good code editor; I prefer VS Code or Visual Studio
Source Code
 
The source code can be downloaded from here.
 
Let us start.
 
As I mentioned earlier, we will be using an Azure service to build our application. Azure is a cloud computing service created by Microsoft, and the Computer Vision resource has been available in Azure for the last few years.
 
To create the Azure resource, log in to the Azure portal and click on the +Create a resource menu. Once the window loads, you can search for the resource Computer Vision as shown in the following image.
 
Computer Vision Azure Resource
 
Now, click on the Create button and fill in the form.
 
Create Resource
 
It may take a few minutes to finish the deployment of the resource. Once it is finished, we can go to the resource and get our keys from the Keys section under Resource Management. Save those keys somewhere, as we will need them when we create our application.
 

Develop the Angular Client Application

 
Here, we will create an Angular application with the help of the Angular CLI. To create the application, we are going to use the command ng new Azure-AI-Image-Text-Reader, where the last parameter is our application name. You will also be given a chance to select your stylesheet format and routing options. Please note that you can add them later as well.
 
Once the application is ready, we can create our own Angular service.
 
detection.service.ts
 
To create the service, let's use the following command.
 
ng g service detection 
 
Here the name 'detection' is our service name. Now we can edit the service code as follows.
import { Injectable } from '@angular/core';
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { config } from './config';

@Injectable({
    providedIn: 'root'
})
export class DetectionService {
    constructor(private http: HttpClient) {}

    // Posts the image URL to the OCR endpoint, passing the subscription
    // key in the Ocp-Apim-Subscription-Key header.
    async post(url: string): Promise<any> {
        const body = {
            url: url
        };
        const options = {
            headers: new HttpHeaders({
                'Content-Type': 'application/json',
                'Ocp-Apim-Subscription-Key': config.api.apiKey
            })
        };
        return await this.http.post(config.api.baseUrl, body, options).toPromise();
    }
}
As you can see, we are making an HttpClient POST call to our baseUrl, which is configured in our config.ts file. To call the Computer Vision AI service, it is mandatory to pass the subscription key, the one we got from the Azure portal, in the request header Ocp-Apim-Subscription-Key.
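 
For reference, the OCR endpoint responds with a JSON document of regions, lines, and words, which we will walk through in our component shortly. A simplified TypeScript model of that shape, sketched from the v2.0 OCR response (only the fields we actually read are shown; the bounding boxes are comma-separated pixel coordinates), might look like this:

// A sketch of the v2.0 OCR response shape; only the fields used in this sample.
interface OcrWord {
    boundingBox: string; // 'x,y,width,height' in pixels
    text: string;        // the recognized word
}

interface OcrLine {
    boundingBox: string;
    words: OcrWord[];
}

interface OcrRegion {
    boundingBox: string;
    lines: OcrLine[];
}

interface OcrResult {
    language: string;    // detected language, e.g. 'en'
    textAngle: number;
    orientation: string;
    regions: OcrRegion[];
}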
 
config.ts
const config = {
    api: {
        baseUrl: 'https://westeurope.api.cognitive.microsoft.com/vision/v2.0/ocr',
        apiKey: ''
    }
};

export {
    config
};
Please remember to change the baseUrl and apiKey before you run the application.
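 
As a side note, you could also keep these values in the Angular CLI's environment files instead of a hand-rolled config.ts; a minimal sketch, assuming the default src/environments layout, is shown below. Keep in mind that any key shipped to the browser is visible to your users, so for a production application you would typically proxy such calls through a backend.

// src/environments/environment.ts (a sketch; swap the values per environment)
export const environment = {
    production: false,
    api: {
        baseUrl: 'https://westeurope.api.cognitive.microsoft.com/vision/v2.0/ocr',
        apiKey: '<your-subscription-key>' // hypothetical placeholder
    }
};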
 
home.component.ts
 
As our service is ready, let’s create our component now using the command below.
 
ng g c home 
 
Once the component is created, let’s go to the file home.component.ts and edit our code.
import { Component } from '@angular/core';
import { DetectionService } from '../detection.service';
import { FormControl, Validators } from '@angular/forms';

@Component({
    selector: 'app-home',
    templateUrl: './home.component.html',
    styleUrls: ['./home.component.scss']
})
export class HomeComponent {
    stringResult: string;
    submitted = false;

    constructor(private detectionService: DetectionService) {}

    // A simple pattern to check that the input looks like a URL.
    public myreg = /(^|\s)((https?:\/\/)?[\w-]+(\.[\w-]+)+\.?(:\d+)?(\/\S*)?)/gi;
    url = new FormControl('', [Validators.required, Validators.pattern(this.myreg)]);

    markTouched() {
        this.url.markAsTouched();
        this.url.updateValueAndValidity();
    }

    async submit() {
        if (this.url.value) {
            this.submitted = true;
            const result: any = await this.detectionService.post(this.url.value);
            if (result) {
                const resultArray: Array<string> = [];
                // Walk the first region's lines and collect every word.
                result.regions[0].lines.forEach((obj: any) => {
                    obj.words.forEach((word: any) => {
                        resultArray.push(word.text);
                    });
                });
                this.stringResult = resultArray.join(' ');
            }
        }
    }
}
Once the component is loaded on the screen, the user will be able to type or paste the URL of an image and submit it. Before submitting the request, we validate the input using a FormControl with the required validator and the pattern validator, which checks whether the given URL is valid. Once the request has been made, we call our detection service with a POST request, and once we get the result from the Azure service, we loop through the result and format the data accordingly.
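 
One thing the snippet above glosses over is failure: the await will throw on a wrong key, a quota error, or an unreachable image, and regions can be empty when no text is found. A minimal sketch of a more defensive submit() (this variant also walks every region rather than only the first, and the message strings are just illustrations, not part of the original sample) is:

async submit() {
    if (!this.url.value) {
        return;
    }
    this.submitted = true;
    try {
        const result: any = await this.detectionService.post(this.url.value);
        const words: string[] = [];
        // Guard against images where no text was detected (empty regions).
        (result.regions || []).forEach((region: any) => {
            region.lines.forEach((line: any) => {
                line.words.forEach((word: any) => words.push(word.text));
            });
        });
        this.stringResult = words.length ? words.join(' ') : 'No text detected.';
    } catch (error) {
        // A 401 typically means the Ocp-Apim-Subscription-Key is wrong or missing.
        this.stringResult = 'Text detection failed. Please check the image URL and your API key.';
    }
}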
 
home.component.html
 
As we have our component logic ready, it is now time to edit our component template. Let's edit our home.component.html file as follows.
<mat-card class="searchbox">
    <mat-form-field>
        <input matInput placeholder="Enter your url" [formControl]="url" required>
        <mat-error *ngIf="url.hasError('required')">
            Please enter a Url
        </mat-error>
        <mat-error *ngIf="url.hasError('pattern')">
            Given Url is Invalid
        </mat-error>
    </mat-form-field>
    <button [disabled]="url.invalid" (click)="submit()" mat-button color="primary">Submit</button>
</mat-card>
<mat-card>
    {{stringResult | json}}
</mat-card>
<mat-card *ngIf="submitted">
    <img mat-card-image [src]="url.value" alt="Given photo">
</mat-card>
We use Angular Material to style our Angular application. The Material component mat-card is a card component which can act as a container; apart from mat-card, we use the components mat-form-field, matInput, mat-error, and mat-card-image in our home component. You can use the command npm install --save @angular/material @angular/cdk @angular/animations to install Material in your application. The module imports this sample relies on are sketched below.
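 
Installing the packages alone is not enough; the Material modules, along with the forms and HTTP modules our service and component rely on, also have to be imported into the application module. A minimal sketch, assuming the default AppModule generated by the CLI and a Material version with per-component entry points, might look like this:

// app.module.ts - a sketch of the imports this sample relies on.
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { HttpClientModule } from '@angular/common/http';
import { ReactiveFormsModule } from '@angular/forms';
import { MatCardModule } from '@angular/material/card';
import { MatFormFieldModule } from '@angular/material/form-field';
import { MatInputModule } from '@angular/material/input';
import { MatButtonModule } from '@angular/material/button';

import { AppComponent } from './app.component';
import { HomeComponent } from './home/home.component';

@NgModule({
    declarations: [AppComponent, HomeComponent],
    imports: [
        BrowserModule,
        BrowserAnimationsModule,   // required by Angular Material
        HttpClientModule,          // used by DetectionService
        ReactiveFormsModule,       // used by the url FormControl
        MatCardModule,
        MatFormFieldModule,
        MatInputModule,
        MatButtonModule
    ],
    bootstrap: [AppComponent]
})
export class AppModule {}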
 
Once the component loads, we give the user the option to submit a valid image URL; if the provided URL is not valid, we disable the Submit button. Once the user submits the request, we call our detection service. Sounds good?
 

Output

 
Below is the screen capture of our application.
 
[Screen capture of the application]
 
Now we can test this application by giving it the URL of the image below.
[Sample image containing text]
Once you submit the URL, you should be able to see an output as below.

[Output screenshot]
Demo
 
The application demo can be viewed here.
 

Conclusion

 
Wow! Now we have learned what Azure Computer Vision AI is and how to create an Azure Computer Vision Cognitive Service. We also learned how to create an Angular application which reads the text from images. I hope you found this article useful. I can't wait to see what you are going to build with Azure Computer Vision AI. You can also check out the other options available in Microsoft Azure Computer Vision AI here; I am sure you will love playing with them.
 
Your turn. What do you think?
 
Thanks a lot for reading. Did I miss anything that you think is needed in this article? Did you find this post useful? Kindly do not forget to share your feedback.