Microsoft Azure AI Content Safety client library for Python
Azure AI Content Safety detects harmful user-generated and AI-generated content in applications and services. It includes text and image APIs for detecting harmful material:
- Text Analysis API: scans text for sexual content, violence, hate, and self-harm, returning multiple severity levels.
- Image Analysis API: scans images for sexual content, violence, hate, and self-harm, returning multiple severity levels.
- Text Blocklist Management APIs: the default AI classifiers are sufficient for most content safety needs; however, you might need to screen for terms that are specific to your use case. You can create blocklists of terms to use with the Text API.

See https://aka.ms/azsdk/conda/releases/contentsafety for version details.
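As a minimal sketch of the Text Analysis API, the snippet below sends a string to the service and prints the severity reported for each harm category. It assumes the `azure-ai-contentsafety` package is installed and that the endpoint and key for your Content Safety resource are supplied via the `CONTENT_SAFETY_ENDPOINT` and `CONTENT_SAFETY_KEY` environment variables (variable names chosen here for illustration):

```python
# Sketch: analyze a text string with the Content Safety Text Analysis API.
# Assumes azure-ai-contentsafety is installed and that the
# CONTENT_SAFETY_ENDPOINT / CONTENT_SAFETY_KEY environment variables
# (illustrative names) point at your Content Safety resource.
import os


def analyze_sample_text(text: str):
    endpoint = os.environ.get("CONTENT_SAFETY_ENDPOINT")
    key = os.environ.get("CONTENT_SAFETY_KEY")
    if not (endpoint and key):
        # Degrade gracefully when no resource is configured.
        print("Skipping: set CONTENT_SAFETY_ENDPOINT and CONTENT_SAFETY_KEY first.")
        return None

    # SDK imports are local so this sketch still loads without the package.
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    from azure.core.credentials import AzureKeyCredential

    client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
    result = client.analyze_text(AnalyzeTextOptions(text=text))

    # Each harm category comes back with a severity level (0 means safe).
    for item in result.categories_analysis:
        print(f"{item.category}: severity {item.severity}")
    return result


if __name__ == "__main__":
    analyze_sample_text("Sample text to screen for harmful content.")
```

The image API follows the same shape (an `analyze_image` call taking image data instead of text), and blocklists created via the blocklist management APIs can be attached to text analysis requests.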