
Helpful utilities for AI Sentinel toxicity detection


AI Sentinel

Python 3.11+ License: MIT

AI Sentinel is a Python package designed to help developers integrate toxicity analysis into their applications with ease. It provides a simple, unified interface to leverage powerful AI models for detecting and categorizing harmful content in text.

Key Features

  • Advanced Toxicity Detection: Comprehensive toxicity detection and classification.
  • Multiple LLM Providers: Designed to support various AI model providers (currently Azure OpenAI and Google Gemini, with more planned).
  • Structured Output: Type-safe responses with Pydantic validation.

Getting Started

Installation

ai-sentinel is available on PyPI:

pip install ai-sentinel

Usage

AI Sentinel is designed to be straightforward to use. You'll primarily interact with the ToxicityGuard and a client specific to your chosen AI model (e.g., AzureOpenAIClient).

from ai_sentinel import AzureOpenAIClient, ToxicityGuard

# Initialize LLM client
client = AzureOpenAIClient(
    api_key="your-api-key",
    model="gpt-4o-mini",
    api_version="2024-02-01",
    azure_endpoint="https://your-resource.openai.azure.com/"
)

# Create toxicity guard
guard = ToxicityGuard(client)

# Analyze text
result = guard.analyze("This is a normal message")

print(f"Is toxic: {result.is_toxic}")
print(f"Confidence: {result.confidence}")
print(f"Categories: {result.categories}")
print(f"Reason: {result.reason}")
print(f"Severity: {result.score}")

Async Usage

For asynchronous code, call analyze_async instead of analyze and await the result:

import asyncio
from ai_sentinel import AzureOpenAIClient, ToxicityGuard

client = AzureOpenAIClient(
    api_key="your-api-key",
    model="gpt-4o-mini",
    api_version="2024-02-01",
    azure_endpoint="https://your-resource.openai.azure.com/"
)

async def main() -> None:
    guard = ToxicityGuard(client)
    result = await guard.analyze_async("Text to analyze")

    print(f"Is toxic: {result.is_toxic}")
    print(f"Confidence: {result.confidence}")
    print(f"Categories: {result.categories}")
    print(f"Reason: {result.reason}")
    print(f"Severity: {result.score}")

asyncio.run(main())
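Because analyze_async is a coroutine, several texts can be screened concurrently with asyncio.gather. The helper below is a minimal sketch; the analyze_many name is ours, not part of the package:

```python
import asyncio

async def analyze_many(guard, texts):
    # Fan out one analyze_async call per text and await them all together.
    # Results come back in the same order as the input texts.
    return await asyncio.gather(*(guard.analyze_async(t) for t in texts))
```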

Supported LLM API Services

ai-sentinel is model agnostic, with support for the following LLM API services:

| Provider | Models | Details |
| --- | --- | --- |
| Azure OpenAI | GPT-4, GPT-4o, GPT-3.5-turbo | Industry-leading models |
| Google Gemini | Gemini 2.0 Flash, Gemini 2.5 Pro | Latest Google technology |
| Anthropic | To be implemented | Will be implemented in the future |
| Open Source LLMs | To be implemented | Will be implemented in the future |
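Since ToxicityGuard only needs a client object, switching providers is a one-line change. A hypothetical factory along these lines could pick a client based on which credentials are set; the make_client helper is ours and assumes the environment variable names from the Configuration section:

```python
import os

def make_client():
    """Build whichever client has credentials configured (illustrative only)."""
    if os.getenv("AZURE_API_KEY"):
        from ai_sentinel import AzureOpenAIClient
        return AzureOpenAIClient(
            api_key=os.environ["AZURE_API_KEY"],
            model="gpt-4o-mini",
            api_version=os.environ["AZURE_API_VERSION"],
            azure_endpoint=os.environ["AZURE_API_BASE"],
        )
    if os.getenv("GEMINI_API_KEY"):
        from ai_sentinel import GeminiClient
        return GeminiClient(api_key=os.environ["GEMINI_API_KEY"], model="gemini-2.0-flash")
    raise RuntimeError("Set AZURE_API_KEY or GEMINI_API_KEY")
```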

Gemini Usage

from ai_sentinel import GeminiClient, ToxicityGuard

# Initialize Gemini client
client = GeminiClient(
    api_key="your-gemini-api-key",
    model="gemini-2.0-flash"
)

# Create and use toxicity guard
guard = ToxicityGuard(client)
result = guard.analyze("Your text here")

Output

In AI Sentinel's ToxicityGuard class, both analyze and analyze_async methods return a ToxicityResult object.

Response Format

class ToxicityResult:
    is_toxic: bool                          # Whether content is toxic
    confidence: float                       # Confidence score that the content is toxic (0.0-1.0)
    categories: List[ToxicityCategories]    # Detected toxicity categories
    reason: str                             # Explanation of the assessment
    score: ToxicityScore                    # Simplified confidence score: "low", "medium", "high"

Example

{
    "is_toxic": True,
    "confidence": 0.90,
    "categories": [
        <ToxicityCategories.THREATS: 'threats'>, 
        <ToxicityCategories.VIOLENCE: 'violence'>
    ],
    "reason": "The phrase 'I will punch you' is a clear and direct threat of physical violence. It expresses an intention to harm another person, categorizing it under threats and violence.",
    "score": <ToxicityScore.HIGH: 'high'>
}

The ToxicityCategories and ToxicityScore enums are available from ai_sentinel.models.
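The package docs don't show the enum definitions, but judging from the example output above they are plausibly plain string-valued Enums along these lines (member names other than THREATS, VIOLENCE, and HIGH are our guesses):

```python
from enum import Enum

class ToxicityScore(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

class ToxicityCategories(str, Enum):
    THREATS = "threats"
    VIOLENCE = "violence"
    # ... the remaining categories follow the same pattern
```

Because the members subclass str, they compare equal to their string values, which makes serialized results easy to check.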

Toxicity Categories

AI Sentinel detects the following toxicity categories:

  • Hate Speech: Content attacking individuals/groups based on protected characteristics
  • Harassment: Hostile behavior targeting specific individuals
  • Threats: Direct or implied threats of violence or harm
  • Sexual Content: Inappropriate sexual material
  • Self Harm: Content promoting self-injury or suicide
  • Violence: Content glorifying or promoting violence
  • Bullying: Intimidation or aggressive behavior
  • Discrimination: Unfair treatment of specific groups

Configuration

Environment Variables

Tip: Add environment variables to a .env file

# Azure OpenAI
AZURE_API_KEY=your-azure-api-key
AZURE_API_VERSION=2024-02-01
AZURE_API_BASE=https://your-resource.openai.azure.com/

# Google Gemini
GEMINI_API_KEY=your-gemini-api-key

Using python-dotenv

from ai_sentinel import AzureOpenAIClient
from dotenv import load_dotenv
import os

load_dotenv()

client = AzureOpenAIClient(
    api_key=os.getenv("AZURE_API_KEY"),
    model="gpt-4o-mini",
    api_version=os.getenv("AZURE_API_VERSION"),
    azure_endpoint=os.getenv("AZURE_API_BASE")
)

License

This project is licensed under the MIT License - see the LICENSE file for details.
