Python SDK Deployment: Image Moderation for Content Safety

Introduction

In this article, you will learn how to implement an image moderation content safety application using the Python SDK. It provides a simple, step-by-step guide to implementing image moderation that will enhance developer productivity.

Prerequisites

  • Azure Subscription
  • Python 3.8 or later
  • Azure AI Content Safety resource

Implementing Image Moderation Content Safety

Step 1. Install the Azure AI content safety package using the pip command.

pip install azure-ai-contentsafety
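
Optionally, you can verify the installation from Python; a quick sketch (Azure SDK packages conventionally expose a __version__ attribute):

import azure.ai.contentsafety

# Print the installed SDK version to confirm the package is importable.
print(azure.ai.contentsafety.__version__)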

Step 2. Get the keys and endpoint from the content safety resource.

[Screenshot: Keys and Endpoint page of the Azure AI Content Safety resource]
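
Both values are available under the resource's Keys and Endpoint blade in the Azure portal. Rather than hard-coding them (as Step 4 does for simplicity), you can read them from environment variables; a small sketch, assuming the hypothetical variable names CONTENT_SAFETY_KEY and CONTENT_SAFETY_ENDPOINT:

import os

# Illustrative environment variable names; keeping secrets out of
# source code is good practice for any credential.
contentsafety_key = os.environ["CONTENT_SAFETY_KEY"]
contentsafety_endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]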

Step 3. Import the required classes from the Azure AI Content Safety and Azure Core packages.

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageCategory, ImageData
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

Step 4. Include the key and endpoint from the Azure AI Content Safety resource. Finally, add the image path in the code.

contentsafety_key = "YOUR_KEY_HERE"
contentsafety_endpoint = "YOUR_ENDPOINT_HERE"
image_path = "YOUR_IMAGE_HERE"

Step 5. Create the content safety client using the ContentSafetyClient class.

# Create the ContentSafetyClient service
contentsafety_client = ContentSafetyClient(contentsafety_endpoint, AzureKeyCredential(contentsafety_key))
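
As an alternative to key-based authentication, the client also supports Microsoft Entra ID; a minimal sketch, assuming the azure-identity package is installed and your identity has access to the resource:

from azure.identity import DefaultAzureCredential

# Entra ID (token-based) authentication instead of an API key.
contentsafety_client = ContentSafetyClient(contentsafety_endpoint, DefaultAzureCredential())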

Step 6. Construct the content safety request using the AnalyzeImageOptions class.

# Construct the content safety request from the image bytes
with open(image_path, "rb") as file:
    content_safety_request = AnalyzeImageOptions(image=ImageData(content=file.read()))

Step 7. Call the analyze_image method on the ContentSafetyClient and, on failure, print the error code and error message of the image content using the HttpResponseError class.

# Moderation of image content
try:
    contentsafety_response = contentsafety_client.analyze_image(content_safety_request)
except HttpResponseError as e:
    print("Moderation of image content failed.")
    if e.error:
        print(f"Error code of image content: {e.error.code}")
        print(f"Error message of image content: {e.error.message}")
    else:
        print(e)
    raise

Step 8. Extract the item for each content category from contentsafety_response.categories_analysis. If an item is present for a category, its severity is printed in the next step.

response_hate = next((item for item in contentsafety_response.categories_analysis if item.category == ImageCategory.HATE), None)
response_self_harm = next((item for item in contentsafety_response.categories_analysis if item.category == ImageCategory.SELF_HARM), None)
response_sexual = next((item for item in contentsafety_response.categories_analysis if item.category == ImageCategory.SEXUAL), None)
response_violence = next((item for item in contentsafety_response.categories_analysis if item.category == ImageCategory.VIOLENCE), None)

Step 9. Print the severity for each content category: hate, self-harm, sexual, and violence.

if response_hate:
    print(f"Severity of hate content: {response_hate.severity}")
if response_self_harm:
    print(f"Severity of self harm content: {response_self_harm.severity}")
if response_sexual:
    print(f"Severity of sexual content: {response_sexual.severity}")
if response_violence:
    print(f"Severity of violence content: {response_violence.severity}")

Step 10. In the main entry point, call the moderate_image function.

if __name__ == "__main__":
    moderate_image()
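
For reference, here is how the steps above fit together into one runnable script; the key, endpoint, and image path remain placeholders:

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageCategory, ImageData
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

contentsafety_key = "YOUR_KEY_HERE"
contentsafety_endpoint = "YOUR_ENDPOINT_HERE"
image_path = "YOUR_IMAGE_HERE"

def moderate_image():
    # Create the content safety client.
    contentsafety_client = ContentSafetyClient(contentsafety_endpoint, AzureKeyCredential(contentsafety_key))

    # Construct the content safety request from the image bytes.
    with open(image_path, "rb") as file:
        content_safety_request = AnalyzeImageOptions(image=ImageData(content=file.read()))

    # Analyze the image content.
    try:
        contentsafety_response = contentsafety_client.analyze_image(content_safety_request)
    except HttpResponseError as e:
        print("Moderation of image content failed.")
        if e.error:
            print(f"Error code of image content: {e.error.code}")
            print(f"Error message of image content: {e.error.message}")
        else:
            print(e)
        raise

    # Extract and print the severity for each content category.
    for category in (ImageCategory.HATE, ImageCategory.SELF_HARM, ImageCategory.SEXUAL, ImageCategory.VIOLENCE):
        result = next((item for item in contentsafety_response.categories_analysis if item.category == category), None)
        if result:
            print(f"Severity of {category} content: {result.severity}")

if __name__ == "__main__":
    moderate_image()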

Source Image

[Image: source image submitted for analysis]

Output

This image content has been blocked due to its violence severity.

PS C:\> python sample.py

  • Severity of hate content: 0
  • Severity of self harm content: 0
  • Severity of sexual content: 0
  • Severity of violence content: 2
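
Note that "blocked" is a decision made by the application, not something the service returns; a minimal sketch of such a policy, assuming a hypothetical block threshold of 2:

# Hypothetical policy: reject the image if any category's severity
# meets or exceeds the threshold (2 here; tune to your requirements).
BLOCK_THRESHOLD = 2

def is_blocked(categories_analysis):
    return any(item.severity >= BLOCK_THRESHOLD for item in categories_analysis)

if is_blocked(contentsafety_response.categories_analysis):
    print("This image content has been blocked.")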

Summary

In this article, we have successfully learned and implemented an image moderation content safety application using the Python SDK. Azure AI Content Safety makes it straightforward to detect hate, self-harm, sexual, and violent content in images and to act on the returned severity scores.

Happy Learning and see you soon in another interesting article!

For more articles, stay tuned here!

