Azure Content Moderator API Using Cognitive Services


Azure Content Moderator API is a cognitive service that checks text, image, and video content for material that is potentially offensive, risky, or otherwise undesirable. When such material is found, the service applies appropriate labels (flags) to the content. Your mobile application or website can then handle the flagged content in order to comply with regulations or maintain the intended environment for users.
Users across the globe generate enormous volumes of content and publish it on the internet in various forms such as text, images, videos, blog posts, reviews, and feedback. Azure Content Moderator can help companies across different domains, as below.
Online Selling: Online shopping companies moderate product catalogs, reviews, and other user-generated content.
Gaming: Gaming companies moderate user-generated game artifacts and chat rooms.
Social Messaging: Social messaging platforms moderate images, text, and videos added by users.
Media: Enterprise media companies implement centralized moderation of their content.
Education Solutions: K-12 education solution providers filter content that is inappropriate for students and educators.

Azure Moderation APIs

Content Moderator also checks text for possible personally identifiable information (PII). Each Text API call can contain up to 1,024 characters. The service can scan images (minimum of 128 pixels, maximum 4 MB file size) for adult and racy content, perform optical character recognition (OCR), and detect faces. You can also match against custom image lists and custom term lists. Each API call counts as one transaction. The web service APIs are available through both REST calls and a .NET SDK. The service also includes a human review tool, which allows human reviewers to aid the service and improve or fine-tune its moderation.
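To make the REST side concrete, here is a minimal Python sketch of how a Text Moderation screen call is put together. The region and subscription key are placeholders you would replace with values from your own Content Moderator resource; the path follows the documented ProcessText/Screen route.

```python
import http.client
import json
import urllib.parse

REGION = "westus"                     # placeholder: your resource's region
SUBSCRIPTION_KEY = "<your-key-here>"  # placeholder: your subscription key

def build_screen_request(text, classify=True, detect_pii=True):
    """Build the host, path, headers, and body for a ProcessText/Screen call."""
    params = urllib.parse.urlencode({
        "classify": str(classify),  # also run the text classifiers
        "PII": str(detect_pii),     # also scan for personal data
    })
    host = f"{REGION}.api.cognitive.microsoft.com"
    path = f"/contentmoderator/moderate/v1.0/ProcessText/Screen?{params}"
    headers = {
        "Content-Type": "text/plain",
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    }
    return host, path, headers, text.encode("utf-8")

def screen_text(text):
    """Send the request and return the parsed JSON response.

    Requires a valid subscription key; each call counts as one transaction.
    """
    host, path, headers, body = build_screen_request(text)
    conn = http.client.HTTPSConnection(host)
    conn.request("POST", path, body=body, headers=headers)
    return json.loads(conn.getresponse().read())
```

The same call is available through the .NET SDK; this sketch only shows the shape of the raw REST request.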
The Azure Content Moderator APIs are published in the following categories:

1. Text Moderation API

Scans text for offensive content, sexually explicit or suggestive content, profanity, and personal data.
Scans text against a custom list of terms in addition to the built-in terms. Use custom lists to block or allow content according to your own or your company's content policies.
Make sure each piece of text is at most 1,024 characters long; if it is longer, the Text API returns an error code indicating that the text exceeds the permitted length.
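Because of the 1,024-character limit, longer text has to be split client-side and screened in multiple calls. A minimal Python sketch (the helper name is my own; the limit mirrors the documented maximum):

```python
TEXT_LIMIT = 1024  # documented per-call maximum for the Text Moderation API

def chunk_text(text, limit=TEXT_LIMIT):
    """Split text into pieces no longer than `limit` characters so that
    each piece fits in a single Text Moderation call."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

pieces = chunk_text("a" * 2500)
# 2500 characters -> three calls of lengths 1024, 1024, and 452
```

Remember that each chunk is billed as a separate transaction, so it is worth trimming whitespace or skipping empty strings before screening.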

2. Image Moderation API

Scans images for adult or racy content, detects text in images with the Optical Character Recognition (OCR) capability, and detects faces.
Scans images against a custom list of images. Use custom image lists to filter out instances of commonly recurring content that you don't want to classify again. When using the API, images must be at least 128 pixels and no larger than 4 MB; if an image does not meet these requirements, the Image API returns an error code indicating that the image does not meet the size requirements.
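Since an undersized or oversized image only earns you an error response, it is cheap to validate these constraints client-side before uploading. A small Python sketch, assuming the 128-pixel minimum applies to each dimension:

```python
MIN_DIMENSION_PX = 128            # minimum accepted width/height (assumed per dimension)
MAX_FILE_BYTES = 4 * 1024 * 1024  # 4 MB maximum file size

def image_meets_requirements(width_px, height_px, file_bytes):
    """Check the documented Image Moderation size constraints before
    uploading, so an obviously invalid image never costs a transaction."""
    return (width_px >= MIN_DIMENSION_PX
            and height_px >= MIN_DIMENSION_PX
            and file_bytes <= MAX_FILE_BYTES)
```

For example, a 640x480 JPEG of 1 MB passes, while a 64-pixel-wide thumbnail or a 5 MB file would be rejected.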

3. Video Moderation API

Scans videos for adult or racy content and returns time markers for that content.


I hope you now understand what Azure Content Moderator is and which moderation APIs it offers. The next step is to create the Azure API resource and implement an application using C#. Please leave your feedback or queries in the comments box, and if you like this article, please share it with your friends.