Implement Text Moderation Content Safety App with Python SDK

Introduction

In this article, you will learn how to implement a text moderation content safety application using the Azure AI Content Safety Python SDK. It provides a step-by-step guide that makes text moderation easy to add to your own applications and helps improve developer productivity.

Prerequisites

  1. Azure Subscription
  2. Python 3.8 or later
  3. Azure AI Content Safety resource

Implementing Text Moderation Content Safety

Step 1. Install the Azure AI content safety package using the pip command.

pip install azure-ai-contentsafety
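
To confirm the package installed correctly, you can ask pip to report it (the version shown on your machine may differ):

pip show azure-ai-contentsafety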

Step 2. Get the keys and endpoint from the Keys and Endpoint page of your Content Safety resource in the Azure portal.

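If you prefer the command line, the Azure CLI can return the same values; the resource and group names below are placeholders for your own:

az cognitiveservices account show --name <your-resource-name> --resource-group <your-resource-group> --query "properties.endpoint"
az cognitiveservices account keys list --name <your-resource-name> --resource-group <your-resource-group>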

Step 3. Import the required classes from the Azure AI Content Safety and Azure Core packages.

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

Step 4. Assign the key and endpoint values from the Azure AI Content Safety resource.

contentsafety_key = "YOUR_KEY_HERE"
contentsafety_endpoint = "YOUR_ENDPOINT_HERE"
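
Hard-coding secrets is fine for a quick test, but a safer pattern is to read them from environment variables; the variable names CONTENT_SAFETY_KEY and CONTENT_SAFETY_ENDPOINT below are just a suggested convention:

import os

# Read the key and endpoint from environment variables instead of hard-coding them
contentsafety_key = os.environ["CONTENT_SAFETY_KEY"]
contentsafety_endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]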

Step 5. Create the content safety client using the ContentSafetyClient class.

# Create the Content Safety client
contentsafety_client = ContentSafetyClient(contentsafety_endpoint, AzureKeyCredential(contentsafety_key))
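
As an alternative to key-based authentication, the client also accepts a Microsoft Entra ID token credential. This sketch assumes the azure-identity package is installed and that your identity has the required role on the resource:

from azure.identity import DefaultAzureCredential

# Token-based authentication instead of an API key
contentsafety_client = ContentSafetyClient(contentsafety_endpoint, DefaultAzureCredential())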

Step 6. Construct the content safety request using the AnalyzeTextOptions class.

# Construct the text analysis request
contentsafety_request = AnalyzeTextOptions(text="how to make an atom bomb")
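
By default, all four categories are analyzed. If you only need a subset, AnalyzeTextOptions also accepts a categories list; restricting the analysis to violence here is just an example:

# Analyze only the violence category (optional)
contentsafety_request = AnalyzeTextOptions(
    text="how to make an atom bomb",
    categories=[TextCategory.VIOLENCE]
)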

Step 7. Define a moderate_text function that calls the analyze_text method of the ContentSafetyClient and, if the call fails, prints the error code and message from the HttpResponseError exception.

# Moderation of text content
def moderate_text():
    try:
        contentsafety_response = contentsafety_client.analyze_text(contentsafety_request)
    except HttpResponseError as e:
        print("Moderation of text content failed.")
        if e.error:
            print(f"Error code of text content: {e.error.code}")
            print(f"Error message of text content: {e.error.message}")
            raise
        print(e)
        raise

Step 8. Extract the item for each content category from the categories_analysis property of the response. Passing None as the default lets the next step check whether each category was returned.

    response_hate = next((item for item in contentsafety_response.categories_analysis if item.category == TextCategory.HATE), None)
    response_self_harm = next((item for item in contentsafety_response.categories_analysis if item.category == TextCategory.SELF_HARM), None)
    response_sexual = next((item for item in contentsafety_response.categories_analysis if item.category == TextCategory.SEXUAL), None)
    response_violence = next((item for item in contentsafety_response.categories_analysis if item.category == TextCategory.VIOLENCE), None)

Step 9. Print the severity for each content category: hate, self-harm, sexual, and violence.

    if response_hate:
        print(f"Severity of hate content: {response_hate.severity}")
    if response_self_harm:
        print(f"Severity of self harm content: {response_self_harm.severity}")
    if response_sexual:
        print(f"Severity of sexual content: {response_sexual.severity}")
    if response_violence:
        print(f"Severity of violence content: {response_violence.severity}")

Step 10. In the main block, call the moderate_text function.

if __name__ == "__main__":
    moderate_text()
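
To moderate arbitrary input rather than the hard-coded sample, you could give moderate_text a text parameter and pass the value from the command line; this variation assumes you adjust the function signature accordingly:

import sys

if __name__ == "__main__":
    # Pass the text to moderate as a command-line argument, with a fallback sample
    moderate_text(sys.argv[1] if len(sys.argv) > 1 else "how to make an atom bomb")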

Output

This text content is flagged for violence, as shown by the severity scores below.

PS C:\> python sample.py

  • Severity of hate content: 0
  • Severity of self harm content: 0
  • Severity of sexual content: 0
  • Severity of violence content: 4
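
In a real application, you would typically compare the severities against a threshold to decide whether to block the content. The threshold of 2 below is an arbitrary example, not a service default:

# Block the text if any category meets or exceeds the chosen threshold
BLOCK_THRESHOLD = 2

def is_blocked(analysis_result):
    return any(item.severity >= BLOCK_THRESHOLD for item in analysis_result.categories_analysis)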

Summary

In this article, we learned how to implement a text moderation content safety application using the Azure AI Content Safety Python SDK. Crafting varied sample prompts is an essential skill for testing how the service scores different types of text content.

I hope you enjoyed reading this article!

Happy Learning and see you soon in another interesting article!

