Xiangxin AI Guardrails Python Client
An LLM-based context-aware AI guardrail that understands conversation context for security, safety and data leakage detection.
Features
- 🧠 Context Awareness – Based on LLM conversation understanding rather than simple batch detection
- 🔍 Prompt Injection Detection – Detects malicious prompt injections and jailbreak attacks
- 📋 Content Compliance Detection – Checks content against generative AI safety compliance requirements
- 🔐 Sensitive Data Leak Prevention – Detects and prevents personal or corporate data leaks
- 🧩 User-level Ban Policy – Supports user-granular risk recognition and blocking strategies
- 🖼️ Multimodal Detection – Supports image content safety detection
- 🛠️ Easy Integration – OpenAI-compatible API format; plug in with one line of code
- ⚡ OpenAI-style API – Familiar interface design for rapid adoption
- 🚀 Sync/Async Support – Supports both synchronous and asynchronous calls for different scenarios
Installation
```bash
pip install xiangxinai
```
Quick Start
Basic Usage
```python
from xiangxinai import XiangxinAI

# Create a client
client = XiangxinAI(
    api_key="your-api-key",
    base_url="https://api.xiangxinai.cn/v1"  # Cloud API
)

# Check user input
result = client.check_prompt("I want to learn Python programming", user_id="user-123")
print(result.suggest_action)      # Output: pass
print(result.overall_risk_level)  # Output: no_risk
print(result.score)               # Confidence score, e.g. 0.9993114447238793

# Check a model response (context-aware)
result = client.check_response_ctx(
    prompt="Teach me how to cook",
    response="I can teach you some simple home dishes",
    user_id="user-123"  # Optional user-level risk control
)
print(result.suggest_action)      # Output: pass
print(result.overall_risk_level)  # Output: no_risk
```
Context-Aware Detection (Core Feature)
```python
# Context-based conversation detection - core feature
messages = [
    {"role": "user", "content": "I want to learn chemistry"},
    {"role": "assistant", "content": "Chemistry is an interesting subject. What part would you like to learn?"},
    {"role": "user", "content": "Teach me reactions for making explosives"}
]
result = client.check_conversation(messages, user_id="user-123")
print(result.overall_risk_level)
print(result.suggest_action)  # Result based on the full conversation context
if result.suggest_answer:
    print(f"Suggested answer: {result.suggest_answer}")
```
Asynchronous API (Recommended)
```python
import asyncio
from xiangxinai import AsyncXiangxinAI

async def main():
    async with AsyncXiangxinAI(api_key="your-api-key") as client:
        # Async prompt check
        result = await client.check_prompt("I want to learn Python programming")
        print(result.suggest_action)  # Output: pass

        # Async conversation context check
        messages = [
            {"role": "user", "content": "I want to learn chemistry"},
            {"role": "assistant", "content": "Chemistry is an interesting subject. What part would you like to learn?"},
            {"role": "user", "content": "Teach me reactions for making explosives"}
        ]
        result = await client.check_conversation(messages)
        print(result.overall_risk_level)

asyncio.run(main())
```
Concurrent Processing
```python
import asyncio
from xiangxinai import AsyncXiangxinAI

async def batch_check():
    async with AsyncXiangxinAI(api_key="your-api-key") as client:
        # Handle multiple requests concurrently
        tasks = [
            client.check_prompt("Content 1"),
            client.check_prompt("Content 2"),
            client.check_prompt("Content 3")
        ]
        results = await asyncio.gather(*tasks)
        for i, result in enumerate(results):
            print(f"Content {i + 1}: {result.overall_risk_level}")

asyncio.run(batch_check())
```
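For larger batches, unbounded `asyncio.gather` can flood the API. A minimal sketch of capping in-flight requests with `asyncio.Semaphore`; `fake_check` is a hypothetical stand-in for `client.check_prompt` so the snippet runs without credentials:

```python
import asyncio

async def fake_check(content: str) -> str:
    """Stand-in for client.check_prompt; pretends every input is safe."""
    await asyncio.sleep(0.01)  # simulate network latency
    return "no_risk"

async def bounded_batch(contents, limit=2):
    sem = asyncio.Semaphore(limit)  # at most `limit` checks in flight

    async def one(content):
        async with sem:
            return await fake_check(content)

    return await asyncio.gather(*(one(c) for c in contents))

results = asyncio.run(bounded_batch([f"Content {i}" for i in range(5)]))
print(results)
```

In real use, replace `fake_check` with the client call and pick a `limit` that matches your rate-limit budget.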
Multimodal Image Detection
Supports multimodal detection for image content safety. The system analyzes the semantics of both the text prompt and the image to assess risk.
```python
from xiangxinai import XiangxinAI

client = XiangxinAI(api_key="your-api-key")

# Check a single local image
result = client.check_prompt_image(
    prompt="Is this image safe?",
    image="/path/to/image.jpg"
)
print(result.overall_risk_level)
print(result.suggest_action)

# Check an image from a URL
result = client.check_prompt_image(
    prompt="",  # the prompt can be empty
    image="https://example.com/image.jpg"
)

# Check multiple images
images = [
    "/path/to/image1.jpg",
    "https://example.com/image2.jpg",
    "/path/to/image3.png"
]
result = client.check_prompt_images(
    prompt="Are all these images safe?",
    images=images
)
print(result.overall_risk_level)
```
Async version:
```python
import asyncio
from xiangxinai import AsyncXiangxinAI

async def check_images():
    async with AsyncXiangxinAI(api_key="your-api-key") as client:
        # Async check for a single image
        result = await client.check_prompt_image(
            prompt="Is this image safe?",
            image="/path/to/image.jpg"
        )
        print(result.overall_risk_level)

        # Async check for multiple images
        images = ["/path/to/image1.jpg", "/path/to/image2.jpg"]
        result = await client.check_prompt_images(
            prompt="Are these images safe?",
            images=images
        )
        print(result.overall_risk_level)

asyncio.run(check_images())
```
On-Premise Deployment
```python
# Sync client connecting to a local deployment
client = XiangxinAI(
    api_key="your-local-api-key",
    base_url="http://localhost:5000/v1"
)

# Async client connecting to a local deployment
async with AsyncXiangxinAI(
    api_key="your-local-api-key",
    base_url="http://localhost:5000/v1"
) as client:
    result = await client.check_prompt("Test content")
```
API Reference
XiangxinAI Class (Synchronous)
Initialization Parameters
- api_key (str): API key
- base_url (str): Base API URL; defaults to the cloud endpoint
- timeout (int): Request timeout in seconds, default 30
- max_retries (int): Maximum retry count, default 3
Methods
check_prompt(content: str, user_id: Optional[str] = None) -> GuardrailResponse
Checks the safety of a single prompt.
Parameters:
- content: Text content to be checked
- user_id: Optional tenant user ID for per-user risk control and auditing
Returns: GuardrailResponse object
check_conversation(messages: List[Message], model: str = "Xiangxin-Guardrails-Text", user_id: Optional[str] = None) -> GuardrailResponse
Checks conversation context safety (core feature).
Parameters:
- messages: List of messages, each containing role and content
- model: Model name (default: "Xiangxin-Guardrails-Text")
- user_id: Optional tenant user ID
Returns: GuardrailResponse object
AsyncXiangxinAI Class (Asynchronous)
Same initialization parameters as the synchronous version.
Methods
async check_prompt(content: str) -> GuardrailResponse
Asynchronously checks a single prompt.
async check_conversation(messages: List[Message]) -> GuardrailResponse
Asynchronously checks conversation context safety (core feature).
async health_check() -> Dict[str, Any]
Checks API service health.
async get_models() -> Dict[str, Any]
Retrieves available model list.
async close()
Closes async session (automatically handled with async with).
GuardrailResponse Class
Represents detection results.
Attributes
- id: Unique request ID
- result.compliance.risk_level: Compliance risk level
- result.security.risk_level: Security risk level
- result.data.risk_level: Data-leak risk level (added in v2.4.0)
- result.data.categories: Detected sensitive data types (added in v2.4.0)
- overall_risk_level: Overall risk level (none / low / medium / high)
- suggest_action: Suggested action (pass / block / substitute)
- suggest_answer: Suggested response (optional; includes redacted content if applicable)
- score: Confidence score of the result
Helper Methods
- is_safe: Whether the content is safe
- is_blocked: Whether the content is blocked
- has_substitute: Whether a substitute answer is provided
- all_categories: All detected risk categories
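A sketch of how these helpers combine into a gating flow. `StubResponse` below is a hypothetical stand-in mirroring the attributes listed above, so the snippet runs without the SDK; in real use you would pass the `GuardrailResponse` returned by a check call:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StubResponse:
    """Hypothetical stand-in for GuardrailResponse's helper attributes."""
    is_safe: bool
    is_blocked: bool
    has_substitute: bool
    suggest_answer: Optional[str] = None

def gate(response: StubResponse, original_answer: str) -> str:
    """Decide what to send the end user based on the guardrail verdict."""
    if response.is_safe:
        return original_answer                 # safe: pass through
    if response.has_substitute and response.suggest_answer:
        return response.suggest_answer         # use the guardrail's safe substitute
    return "Sorry, I can't help with that."    # blocked with no substitute

print(gate(StubResponse(True, False, False), "Here is your recipe."))
print(gate(StubResponse(False, True, True, "Let's talk about something else."), "unsafe text"))
```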
Safety Detection Capabilities
Risk Levels
- High Risk: Sensitive political topics, national image damage, violent crime, prompt attacks
- Medium Risk: General political topics, harm to minors, illegal acts, sexual content
- Low Risk: Hate speech, insults, privacy violations, commercial misconduct
- No Risk: Safe content
Handling Strategies
- High Risk: Recommend blocking
- Medium Risk: Recommend substitution with a safe reply
- Low Risk: Recommend substitution or business-dependent handling
- No Risk: Recommend pass
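The strategy table above can be sketched as a simple dispatch function. The level strings here assume the `no_risk`-style values shown in the earlier output comments; adjust them to match what your deployment actually returns:

```python
def recommended_action(overall_risk_level: str) -> str:
    """Map an overall risk level to the handling strategy recommended above."""
    actions = {
        "high_risk": "block",         # High risk: block the request
        "medium_risk": "substitute",  # Medium risk: answer with a safe reply
        "low_risk": "substitute",     # Low risk: substitute or handle per business rules
        "no_risk": "pass",            # No risk: let the content through
    }
    return actions.get(overall_risk_level, "block")  # fail closed on unknown levels

print(recommended_action("no_risk"))
print(recommended_action("high_risk"))
```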
Error Handling
Synchronous Error Handling
```python
from xiangxinai import XiangxinAI, AuthenticationError, ValidationError, RateLimitError

try:
    result = client.check_prompt("Test content")
except AuthenticationError:
    print("Invalid API key")
except ValidationError as e:
    print(f"Input validation failed: {e}")
except RateLimitError:
    print("Rate limit exceeded")
except Exception as e:
    print(f"Other error: {e}")
```
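On a rate-limit error, a common pattern is to retry with exponential backoff rather than fail immediately. A minimal sketch; the local `RateLimitError` class and `flaky_check` function are stand-ins so the snippet is self-contained (in real use, catch `xiangxinai.RateLimitError` around a client call):

```python
import time

class RateLimitError(Exception):
    """Stand-in for xiangxinai.RateLimitError."""

def check_with_retry(check, content, max_retries=3, base_delay=0.01):
    """Call check(content), backing off exponentially on rate limits."""
    for attempt in range(max_retries):
        try:
            return check(content)
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries: let the caller handle it
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

# Demo with a fake checker that is rate-limited twice, then succeeds.
calls = {"n": 0}
def flaky_check(content):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError()
    return "pass"

print(check_with_retry(flaky_check, "Test content"))
```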
Asynchronous Error Handling
```python
import asyncio
from xiangxinai import AsyncXiangxinAI, AuthenticationError, ValidationError, RateLimitError

async def safe_check():
    try:
        async with AsyncXiangxinAI(api_key="your-api-key") as client:
            result = await client.check_prompt("Test content")
            return result
    except AuthenticationError:
        print("Invalid API key")
    except ValidationError as e:
        print(f"Input validation failed: {e}")
    except RateLimitError:
        print("Rate limit exceeded")
    except Exception as e:
        print(f"Other error: {e}")

asyncio.run(safe_check())
```
Development
```bash
# Clone the project
git clone https://github.com/xiangxinai/xiangxin-guardrails
cd xiangxin-guardrails/client

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Code formatting
black xiangxinai
isort xiangxinai

# Type checking
mypy xiangxinai
```
License
This project is open-sourced under the Apache 2.0 license.
Support
- 📧 Technical Support: wanglei@xiangxinai.cn
- 🌐 Official Website: https://xiangxinai.cn
- 📖 Documentation: https://docs.xiangxinai.cn
- 🐛 Issue Tracker: https://github.com/xiangxinai/xiangxin-guardrails/issues
Made with ❤️ by Xiangxin AI