Analyzing Image Moderation using Azure Content Safety

Introduction

In this article, you will learn how to analyze and moderate image content against different data sets using Azure AI Content Safety. This walkthrough is a quick demo of running image moderation tests and moderating image content in real time.

Steps for Creating Content Safety Resource

Step 1. Go to the Azure portal and sign in with your Azure account.

Step 2. Search "Content Safety" in the search bar and select "Content Safety" from the search results.

Step 3. Click on the Create button for Content Safety.

Step 4. First, choose a subscription, then create a resource group named testRG, and enter retailcontentsafe as the resource name.

Step 5. Click the Review+Create button.

Step 6. Finally, click the Create button.

Step 7. The deployment starts and, within a minute or two, it will complete successfully.

Step 8. You can see that the deployment has completed successfully.
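
If you prefer to script this resource setup instead of clicking through the portal, the sketch below does roughly the same thing with the Azure management SDKs for Python. This is not part of the Studio walkthrough; it assumes the azure-identity, azure-mgmt-resource, and azure-mgmt-cognitiveservices packages are installed and that AZURE_SUBSCRIPTION_ID is set, and it reuses the names from this article (testRG, retailcontentsafe) with an illustrative region and SKU.

```python
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import Account, AccountProperties, Sku
from azure.mgmt.resource import ResourceManagementClient

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
credential = DefaultAzureCredential()

# Create the resource group used in the article.
resource_client = ResourceManagementClient(credential, subscription_id)
resource_client.resource_groups.create_or_update("testRG", {"location": "eastus"})

# Create the Content Safety account (Cognitive Services kind "ContentSafety").
cs_client = CognitiveServicesManagementClient(credential, subscription_id)
poller = cs_client.accounts.begin_create(
    "testRG",
    "retailcontentsafe",
    Account(
        location="eastus",       # illustrative region
        kind="ContentSafety",
        sku=Sku(name="S0"),      # illustrative pricing tier
        properties=AccountProperties(),
    ),
)
account = poller.result()

# Grab the endpoint and a key; the later moderation calls need both.
keys = cs_client.accounts.list_keys("testRG", "retailcontentsafe")
print("Endpoint:", account.properties.endpoint)
print("Key:", keys.key1)
```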

Get Started with Content Safety Studio

Step 1. Open Content Safety Studio from the Azure portal.

Step 2. Now you are going to run a moderation test on images.

Step 3. Click Try it out under Moderate image content.
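
Everything the Studio does here is backed by the Content Safety API, so you can run the same image moderation from code. Below is a minimal sketch, assuming the azure-ai-contentsafety Python package; the endpoint and key come from the retailcontentsafe resource created earlier, and the environment variable names are my own choice.

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

# The endpoint looks like https://<resource-name>.cognitiveservices.azure.com/
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]

# This client is reused by the scenario sketches that follow.
client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
```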

Safe Content Scenario

Step 4. Click Run a simple test and choose the Safe content sample under Moderate image content.

Step 5. Prompt - An image of two children holding hands and smiling at sunset.

Step 6. Click the Run test button; the Judgement field will show whether the content is allowed or blocked.

Step 7. In the View results section, the content has been allowed, and you can also see the per-category risk level detection results below.
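
The sketch below reproduces this safe-content check against the API, assuming the azure-ai-contentsafety package and the client created above; sunset_children.jpg is a placeholder name for the sample image.

```python
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

# Read the local sample image and send it for analysis.
with open("sunset_children.jpg", "rb") as image_file:
    request = AnalyzeImageOptions(image=ImageData(content=image_file.read()))

response = client.analyze_image(request)

# Each entry reports a harm category (Hate, SelfHarm, Sexual, Violence) and a
# severity level; a safe image like this one should come back with low severities.
for item in response.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```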

Self-harm Content Scenario

Step 1. Click Run a simple test and choose the Self-harm content sample under Moderate image content.

Step 2. Prompt - An image of a girl holding a firearm with the intention of self-harm.

Step 3. Click the Run test button; the Judgement field will show whether the content is allowed or blocked.

Step 4. In the View results section, the content has been blocked, and you can also see the per-category risk level detection results below.
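
To reproduce the Judgement field in code, you can turn the per-category severities into an allow/block decision. The sketch below uses a single illustrative threshold rather than the Studio's configurable per-category thresholds, and self_harm_sample.jpg is a placeholder file name.

```python
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

BLOCK_THRESHOLD = 2  # illustrative; the Studio lets you tune this per category


def judge_image(client, image_path):
    """Return 'Blocked' or 'Allowed' for a local image file."""
    with open(image_path, "rb") as image_file:
        request = AnalyzeImageOptions(image=ImageData(content=image_file.read()))
    result = client.analyze_image(request)

    # Block when any category (including SelfHarm) reaches the threshold.
    blocked = any(
        item.severity is not None and item.severity >= BLOCK_THRESHOLD
        for item in result.categories_analysis
    )
    return "Blocked" if blocked else "Allowed"


# An image like the self-harm sample described above should come back "Blocked".
print(judge_image(client, "self_harm_sample.jpg"))
```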

Run a Bulk Test Scenario

Step 1. Click Run a bulk test and choose a dataset with safe content under Moderate image content.

Step 2. In this example, I have taken a dataset with 15 records, where the corresponding label 0 indicates safe content.

Step 3. Click the Run test button; the Judgement field will show whether each record is allowed or blocked.

Step 4. In the View results section, the content has been allowed, and you can also see the category and risk level detection results for each record below.
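
A bulk test is just the same call in a loop. The sketch below assumes a CSV file (images.csv, a made-up name) with an image path and a label per row, where label 0 means safe content as in the 15-record dataset above, and it reuses the judge_image helper sketched in the self-harm scenario.

```python
import csv


def run_bulk_test(client, csv_path):
    """Judge every record in the dataset and compare it against its label."""
    matches = 0
    total = 0
    with open(csv_path, newline="") as csv_file:
        for image_path, label in csv.reader(csv_file):
            judgement = judge_image(client, image_path)
            expected = "Allowed" if label.strip() == "0" else "Blocked"
            matches += judgement == expected
            total += 1
            print(f"{image_path}: {judgement} (expected {expected})")
    print(f"{matches}/{total} records matched their labels")


run_bulk_test(client, "images.csv")
```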

Summary

In this article, we successfully learned about and deployed Content Safety Studio. We explored different Content Safety Studio capabilities, such as moderating image content in real-time scenarios: safe content, self-harm content, running a simple test, and running a bulk test.

I hope you enjoyed reading this article!

Happy Learning and see you soon in another interesting article!!

