AI Embracing a Safer Digital Experience - Azure AI Content Safety

Introduction

This article delves into the capabilities of Azure AI Content Safety, a comprehensive solution for detecting harmful user-generated and AI-generated content across various applications and services.

Azure AI Content Safety

Azure AI Content Safety is an Azure AI service that detects harmful user-generated and AI-generated content. It is a content moderation platform that identifies inappropriate material in text, images, and multi-modal content.

This service is being integrated across Microsoft products, including Azure OpenAI Service and Machine Learning prompt flow. Azure Content Safety enables businesses to harness the potential of Responsible AI, which helps foster the creation of secure online spaces and communities.


Azure AI Content Safety Product Types

  • Image Detection API: It detects harmful image content, returns moderation results, and scans images for the different content classifications with multi-severity levels.
  • Text Detection API: It detects harmful text content, returns moderation results, and scans text for the different content classifications with multi-severity levels.
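As a rough illustration of how the Text Detection API is called over REST, the sketch below builds the pieces of an analyze request without sending it. The endpoint name reuses the `retailcontentsafety` resource created later in this article; the `api-version` value and request shape are assumptions to verify against the Content Safety REST reference, and `<your-key>` is a placeholder for the resource key.

```python
import json

# Assumed API version -- check the Content Safety REST reference for the current one.
API_VERSION = "2023-10-01"

def build_analyze_text_request(endpoint: str, key: str, text: str) -> dict:
    """Build the URL, headers, and body of a text:analyze call (sketch only, not sent)."""
    return {
        "url": f"{endpoint}/contentsafety/text:analyze?api-version={API_VERSION}",
        "headers": {
            "Ocp-Apim-Subscription-Key": key,  # key from the resource's Keys and Endpoint page
            "Content-Type": "application/json",
        },
        "body": json.dumps({"text": text}),
    }

req = build_analyze_text_request(
    "https://retailcontentsafety.cognitiveservices.azure.com",  # example endpoint
    "<your-key>",
    "Text to moderate",
)
print(req["url"])
```

The same call can also be made through the `azure-ai-contentsafety` client library instead of raw REST.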


Content Safety Studio Capabilities

Content Classification: Azure AI Content Safety can identify and classify harmful content into different categories, such as sexual, violent, self-harm, and hate.

| Category | Description |
| --- | --- |
| Hate | The hate category describes language that attacks or discriminates against individuals or groups based on attributes such as race, religion, and gender. |
| Sexual | The sexual category describes language related to sexually explicit content. |
| Violence | The violence category describes language related to physical actions intended to hurt or damage someone. |
| Self-harm | The self-harm category describes language related to physical actions intended to purposely hurt or harm oneself. |

Severity Scores: Each unsafe content category is assigned a severity score on a scale of 0 (Safe), 2 (Low), 4 (Medium), and 6 (High).
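To show how these category scores might be consumed in an application, here is a minimal sketch that filters an analyze-text result by severity. The `sample_response` values are invented for illustration, though its shape follows the service's `categoriesAnalysis` result format.

```python
# Sample response shaped like an analyze-text result (severity values invented):
# each category carries a severity of 0 (Safe), 2 (Low), 4 (Medium), or 6 (High).
sample_response = {
    "categoriesAnalysis": [
        {"category": "Hate", "severity": 0},
        {"category": "Sexual", "severity": 0},
        {"category": "Violence", "severity": 4},
        {"category": "SelfHarm", "severity": 2},
    ]
}

SEVERITY_LABELS = {0: "Safe", 2: "Low", 4: "Medium", 6: "High"}

def flagged_categories(response: dict, threshold: int = 2) -> dict:
    """Return {category: severity label} for categories at or above the threshold."""
    return {
        item["category"]: SEVERITY_LABELS[item["severity"]]
        for item in response["categoriesAnalysis"]
        if item["severity"] >= threshold
    }

print(flagged_categories(sample_response))  # {'Violence': 'Medium', 'SelfHarm': 'Low'}
```

Raising the threshold makes the filter more permissive, which mirrors the severity filters you can configure in Content Safety Studio.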


Language Support: The multilingual models support languages such as English, German, French, Spanish, Italian, Japanese, Chinese, and Portuguese.

Content Safety Studio

Follow these steps to create a Content Safety service in the Azure portal.

Go to the Azure portal and sign in with your Azure account.

Search for "Content Safety" in the search bar and select "Content Safety" from the search results.


Click on the Create button for Content Safety.

Azure AI Services

In the Basics tab, provide the following information. Choose your Subscription.

Then create a Resource Group named testRG.

Choose East US as the Region and enter retailcontentsafety as the name.

Select the Standard S0 pricing tier.

Click the Next button on the Content Safety page.


Click the Review + Create button.

Once validation has passed, click the Create button.


Deployment starts initializing, and within a minute or two it completes successfully.

Deployment started
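The same resource can also be provisioned from the command line. The sketch below uses the Azure CLI with the names from the walkthrough above; the `--kind ContentSafety` value is an assumption to verify with `az cognitiveservices account list-kinds` before running.

```shell
# Create the resource group used in the walkthrough (names match the article).
az group create --name testRG --location eastus

# Provision the Content Safety resource on the Standard S0 tier.
# --kind ContentSafety is assumed; confirm with: az cognitiveservices account list-kinds
az cognitiveservices account create \
  --name retailcontentsafety \
  --resource-group testRG \
  --kind ContentSafety \
  --sku S0 \
  --location eastus
```

This is a provisioning fragment that targets a live Azure subscription, so run it only after signing in with `az login`.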

Summary

In this article, we learned about and deployed the Content Safety service. We explored the different Content Safety Studio capabilities, such as harm categories, severity levels, and scores. By making use of Content Safety, businesses can encourage responsible content generation and sharing and empower safe AI experiences.

I hope you enjoyed reading this article!

Happy Learning and see you soon in another interesting article!!

