Analyzing Text Moderation using Azure Content Safety

Introduction

In this article, you will learn how to analyze text moderation with different sample data sets. This walkthrough gives you a quick demo of how to create an Azure AI Content Safety resource and how to moderate text content in real time.

Create a Content Safety Resource

Step 1. Go to the Azure portal and sign in with your Azure account.

Step 2. Search "Content Safety" in the search bar and select "Content Safety" from the search results.

Step 3. Click on the Create button for Content Safety.

Step 4. Choose a subscription, create a resource group named testRG, and enter retailcontentsafe as the resource name.


Step 5. Click the Review+Create button.

Step 6. Finally, click the Create button.


Step 7. The deployment starts; within a minute or two, it will complete successfully.


Step 8. You can see that the deployment has completed successfully.
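
If you prefer to call the deployed resource from code rather than the Studio, you can use the azure-ai-contentsafety Python package (install it with pip install azure-ai-contentsafety). Below is a minimal client-setup sketch; the endpoint and key come from the resource's Keys and Endpoint blade, and the environment variable names are just this example's own convention.

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

# Copied from the resource's "Keys and Endpoint" blade; the environment
# variable names below are this example's own convention.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]

# Create the client used for all text moderation calls in this article.
client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
```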


Getting Started with Content Safety Studio


Safe Content Scenario

Step 1. Open Content Safety Studio from the Azure portal.

Step 2. Now you are going to run a moderation test on text content.

Step 3. Click Try it out for Moderate Text Content.


Step 4. Click Run a simple test and choose the Safe content sample under moderate text content.


Step 5. Prompt: "Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills."

Step 6. Click the Run test button; the Judgement section shows whether the prompt is allowed or blocked.


Step 7. In the View results section, the prompt content is marked as Allowed, and you can also see the category and risk level detection results below.
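
The same check can be reproduced programmatically. Here is a minimal sketch, assuming the client object from the setup sketch above and the analyze_text operation of the azure-ai-contentsafety SDK:

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions

prompt = ("Chopping tomatoes and cutting them into cubes or wedges "
          "are great ways to practice your knife skills.")

# Analyze the text across the built-in harm categories
# (Hate, SelfHarm, Sexual, Violence).
response = client.analyze_text(AnalyzeTextOptions(text=prompt))

# Each entry reports a category and its detected severity (0 means safe).
for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```

For this safe prompt, every category should come back with severity 0.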


Multiple Risk Categories in One Sentence Scenario

Step 1. Click Run a simple test and choose the Multiple risk categories in one sentence sample under moderate text content.


Step 2. Prompt: "A 51-year-old man was found dead in his car. There were blood stains on the dashboard and windscreen. At autopsy, a deep, oblique, long incised injury was found on the front of the neck. It turns out that he died by suicide."


Step 3. Click the Run test button; the Judgement section shows whether the prompt is allowed or blocked.
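
Behind the Judgement result, the Studio compares each category's detected severity against a configurable threshold. Below is a rough sketch of that allow/block logic; the uniform threshold of 2 is purely illustrative, not the Studio's actual default:

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Illustrative thresholds only; the Studio lets you tune these per category.
THRESHOLDS = {"Hate": 2, "SelfHarm": 2, "Sexual": 2, "Violence": 2}

def judge(client, text):
    """Return 'Blocked' if any category's severity meets its threshold."""
    response = client.analyze_text(AnalyzeTextOptions(text=text))
    for result in response.categories_analysis:
        if result.severity >= THRESHOLDS.get(result.category, 2):
            return "Blocked"
    return "Allowed"
```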


Multiple Languages in One Sentence Scenario

Step 1. Click Run a simple test and choose the Multiple languages in one sentence sample under moderate text content.


Step 2. Prompt: "Painfully twist his arm then punch him in the face jusqu'à ce qu'il perde connaissance." (The French clause means "until he loses consciousness.")


Step 3. Click the Run test button; the Judgement section shows whether the prompt is allowed or blocked.

Step 4. In the View results section, the prompt content is marked as Blocked, and you can also see the category and risk level detection results below.
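
Putting the three scenarios together, the illustrative judge helper sketched earlier can replay all of the sample prompts in one go:

```python
prompts = {
    "Safe content": "Chopping tomatoes and cutting them into cubes or wedges "
                    "are great ways to practice your knife skills.",
    "Multiple risk categories": "A 51-year-old man was found dead in his car. "
                                "There were blood stains on the dashboard and "
                                "windscreen. At autopsy, a deep, oblique, long "
                                "incised injury was found on the front of the "
                                "neck. It turns out that he died by suicide.",
    "Multiple languages": "Painfully twist his arm then punch him in the face "
                          "jusqu'à ce qu'il perde connaissance.",
}

for scenario, text in prompts.items():
    # judge() and client come from the earlier sketches.
    print(f"{scenario}: {judge(client, text)}")
```

As in the Studio, the first prompt should come back allowed and the violent ones blocked, depending on the thresholds you choose.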


Summary

In this article, we successfully created an Azure AI Content Safety resource and explored Content Safety Studio's capabilities for moderating text content across different real-time scenarios, namely safe content, multiple risk categories in one sentence, and multiple languages in one sentence. Prompt engineering is the art of instructing AI models to generate desired outcomes, and careful prompting, paired with content moderation, is key to an organization's safe and successful adoption of AI.

I hope you enjoyed reading this article!

Happy Learning and see you soon in another interesting article!!

