# WL-Guardrails Python SDK

Python SDK for Watchlight Guardrails — AI content safety and guardrails-as-code engine. Check input and output content against configurable safety policies, with support for blocking, sanitization, and fail-open/fail-closed modes.
## Features
- Input/Output Checking: Validate user messages and LLM responses against safety policies
- Async/Sync Support: Both async and sync HTTP clients
- Fail-Open/Fail-Closed: Configurable behavior when the service is unavailable
- Content Sanitization: Automatically redact PII and sensitive content
- Typed Models: Pydantic v2 models for all request/response types
- Observable: Structured logging with request correlation IDs
## Installation

```bash
pip install wl-guardrails
```
## Quick Start

### Async Client

```python
from wl_guardrails import WlGuardrailsClient, GuardrailBlockError

async with WlGuardrailsClient("http://localhost:8083") as client:
    try:
        result = await client.check_input("User message here")
        # Content passed all checks
        print(f"Checks run: {result.checks_run}")
    except GuardrailBlockError as e:
        print(f"Blocked: {e.error_code} - {e.guardrail_message}")
```
### Sync Client

```python
from wl_guardrails import WlGuardrailsSyncClient, GuardrailBlockError

with WlGuardrailsSyncClient("http://localhost:8083") as client:
    # Check the LLM output before returning it to the user
    result = client.check_output(llm_response)
    if result.is_sanitized:
        # Use the sanitized version
        safe_response = result.sanitized
    elif result.is_passed:
        safe_response = llm_response
```
## Content Checking

### Check Input

Validate user/agent messages before processing:

```python
result = await client.check_input(
    content="user message",
    request_id="trace-123",       # Optional correlation ID
    metadata={"agent_id": "a1"},  # Optional policy context
)
```
### Check Output

Validate LLM responses before returning them to the user:

```python
result = await client.check_output(
    content=llm_response,
    request_id="trace-123",
)

if result.is_sanitized:
    # PII or sensitive content was redacted
    return result.sanitized
```
## Check Results

Every check returns a `CheckResult`:

```python
result.action        # GuardrailAction: PASS, BLOCK, or SANITIZE
result.is_passed     # True if content passed all checks
result.is_blocked    # True if content was blocked
result.is_sanitized  # True if content was sanitized
result.violations    # List of Violation objects
result.sanitized     # Sanitized content (if action is SANITIZE)
result.checks_run    # Names of checks that were executed
result.request_id    # Correlation ID for tracing
```
## Fail Modes

Configure behavior for when the guardrails service is unavailable:

```python
# Fail-open (default): proceed without guardrails on service errors
client = WlGuardrailsClient(fail_mode="open")

# Fail-closed: raise ServiceUnavailable on service errors
client = WlGuardrailsClient(fail_mode="closed")
```

Or set it via an environment variable:

```bash
export WL_GUARDRAILS_FAIL_MODE=closed
```
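In fail-closed mode the caller must decide what to do when the check raises. A sketch of one such policy — refuse rather than answer unchecked; `guarded_reply`, the injected callables, the refusal message, and the local `ServiceUnavailable` stand-in are all illustrative assumptions, not SDK behavior:

```python
class ServiceUnavailable(Exception):
    """Local stand-in for wl_guardrails.ServiceUnavailable, so this
    sketch runs without the SDK installed."""

def guarded_reply(check_input, generate, user_msg: str) -> str:
    """Fail-closed pattern: if the safety check cannot run, refuse.
    `check_input` and `generate` are injected for testability; with
    the real SDK, pass the client's check_input method."""
    try:
        check_input(user_msg)      # raises on outage when fail_mode="closed"
    except ServiceUnavailable:
        return "Safety checks are unavailable; please try again later."
    return generate(user_msg)

# Simulate an outage: the check raises, so the caller refuses.
def down(_msg):
    raise ServiceUnavailable()

print(guarded_reply(down, lambda m: m.upper(), "hi"))
# Safety checks are unavailable; please try again later.
```

With `fail_mode="open"` the SDK never raises `ServiceUnavailable`, so the same caller proceeds without modification.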
## Configuration

| Environment Variable | Default | Description |
|---|---|---|
| `WL_GUARDRAILS_URL` | `http://localhost:8083` | Guardrails service URL |
| `WL_GUARDRAILS_FAIL_MODE` | `open` | Fail mode: `open` or `closed` |
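The precedence implied by the table (environment variable overrides the built-in default) can be mirrored in caller code. `guardrails_config` is a hypothetical helper for illustration, not the SDK's actual configuration loader:

```python
import os

def guardrails_config() -> dict:
    """Resolve guardrails settings: env var if set, else the
    documented default."""
    return {
        "url": os.environ.get("WL_GUARDRAILS_URL", "http://localhost:8083"),
        "fail_mode": os.environ.get("WL_GUARDRAILS_FAIL_MODE", "open"),
    }
```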
## Error Handling

```python
from wl_guardrails import (
    GuardrailBlockError,   # Content was blocked by a policy
    ServiceUnavailable,    # Service is unreachable (fail_mode="closed")
    ValidationError,       # Invalid request (e.g., empty content)
    WlGuardrailsError,     # Base exception for all SDK errors
)
```
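Because `WlGuardrailsError` is the base class, `except` clauses should go from specific to general. A sketch of that ordering, using local stand-in classes that mirror the hierarchy above so it runs without the SDK (`classify` and its labels are illustrative):

```python
# Stand-ins mirroring the SDK's documented exception hierarchy.
class WlGuardrailsError(Exception): ...
class GuardrailBlockError(WlGuardrailsError): ...
class ServiceUnavailable(WlGuardrailsError): ...
class ValidationError(WlGuardrailsError): ...

def classify(exc: Exception) -> str:
    """Map an SDK error to a handling decision: specific classes
    first, the base class as the catch-all."""
    try:
        raise exc
    except GuardrailBlockError:
        return "blocked"           # policy refused the content
    except ServiceUnavailable:
        return "retry-later"       # outage surfaced by fail-closed mode
    except WlGuardrailsError:
        return "sdk-error"         # any other SDK failure (e.g. ValidationError)

print(classify(GuardrailBlockError()))
# blocked
```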
## Health Checks

```python
# Simple boolean check
is_healthy = await client.health()

# Detailed health info
health = await client.get_health()
print(f"Status: {health.status}, Policies loaded: {health.policies_loaded}")
```
## Requirements

- Python 3.10+
- A running WL-Guardrails service
## License

Proprietary. See LICENSE for details.
## Support

- Partner Portal: https://www.watchlight.ai/partner
- Email: team@watchlight.ai
## File details

Details for the file `wl_guardrails-0.1.0.tar.gz`.

### File metadata

- Download URL: wl_guardrails-0.1.0.tar.gz
- Upload date:
- Size: 9.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.0

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `cb48550381caf30952148b821e8be3e046f9994ed1c4ba6e04ca381b81b25df8` |
| MD5 | `2b91bbe0726c0728f9188318abffcb22` |
| BLAKE2b-256 | `2011f5b800cff322ff25644b2169065c380629f9b615c30cbde2c26af76ddcc6` |
## File details

Details for the file `wl_guardrails-0.1.0-py3-none-any.whl`.

### File metadata

- Download URL: wl_guardrails-0.1.0-py3-none-any.whl
- Upload date:
- Size: 7.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.0

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `e708f170fd512d50578da2e4c306eb8f5c07344781c6a3ea8e6305a2cb5e281f` |
| MD5 | `d6bd90ee7c5a3d4da68b6aa44a21fec0` |
| BLAKE2b-256 | `60d97313526ea1c93c4b40289903f879a483cb51f991e89a6e19f158e87322d1` |