# UltraGuard

**Enterprise-Grade LLM Security Framework**

A comprehensive security toolkit providing 40+ security scanners and programmable guardrails for LLM-powered applications. Built by 100XPrompt.
## Features

- **40+ Security Scanners** - Comprehensive input/output scanning for prompt injection, secrets, PII, toxicity, and more
- **Programmable Guardrails** - Define conversation flows and safety boundaries with Colang DSL
- **Flexible Rails System** - Input, output, dialog, retrieval, and execution rails
- **23 Pre-built Libraries** - Ready-to-use security libraries for various use cases
- **Streaming Support** - Real-time streaming with guardrails enforcement
- **8 Embedding Providers** - OpenAI, Azure, Cohere, Google, FastEmbed, SentenceTransformers, and more
- **LangChain Integration** - Seamless integration with LangChain runnables
- **Built-in Caching** - LFU cache for improved performance
- **Distributed Tracing** - OpenTelemetry-compatible tracing for observability
- **Async-First Design** - Native async/await support throughout
- **REST API Server** - Production-ready FastAPI server
- **CLI Tools** - Command-line interface for common operations
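Conceptually, input rails run before the model call and output rails after it, and any scanner can block the text at either stage. The sketch below illustrates that pipeline shape with plain Python; the `ScanResult`, `Scanner`, and `run_pipeline` names are illustrative stand-ins, not UltraGuard's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ScanResult:
    is_valid: bool
    risk_score: float = 0.0

# A "scanner" is just a callable from text to a ScanResult.
Scanner = Callable[[str], ScanResult]

def run_pipeline(text: str, input_scanners: List[Scanner],
                 llm: Callable[[str], str],
                 output_scanners: List[Scanner]) -> str:
    # Input rails: reject the request before it reaches the model.
    for scan in input_scanners:
        if not scan(text).is_valid:
            return "[blocked by input rail]"
    response = llm(text)
    # Output rails: vet the model's response before returning it.
    for scan in output_scanners:
        if not scan(response).is_valid:
            return "[blocked by output rail]"
    return response

# Toy scanner and toy "LLM" for demonstration.
no_sql = lambda t: ScanResult("drop table" not in t.lower())
echo_llm = lambda t: f"You said: {t}"

print(run_pipeline("hello", [no_sql], echo_llm, [no_sql]))            # You said: hello
print(run_pipeline("DROP TABLE users", [no_sql], echo_llm, [no_sql]))  # [blocked by input rail]
```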
## Installation

```bash
pip install ultraguard
```

For additional features:

```bash
pip install ultraguard[onnx]        # ONNX runtime for ML models
pip install ultraguard[embeddings]  # Embedding providers
pip install ultraguard[eval]        # Evaluation UI
pip install ultraguard[all]         # All optional dependencies
```
## Quick Start

### Basic Scanning

```python
from ultraguard.scanner.scanners.input import Toxicity, TokenLimit, Secrets
from ultraguard.scanner.scanners.output import JSON, Bias

# Input scanning
secrets = Secrets()
result = secrets.scan("My API key is sk-abc123...")
print(f"Valid: {result.is_valid}, Risk: {result.risk_score}")

# Output scanning
json_scanner = JSON()
result = json_scanner.scan("prompt", '{"key": "value"}')
print(f"Valid: {result.is_valid}")
```
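For intuition, a secrets scanner of this kind reduces to pattern matching plus a risk score. Below is a toy stdlib version checking just one OpenAI-style key prefix; the real `Secrets` scanner ships 160+ vetted patterns, and this `scan_secrets` function and its `ScanResult` are illustrative only.

```python
import re
from dataclasses import dataclass

@dataclass
class ScanResult:
    is_valid: bool
    risk_score: float

# One illustrative pattern; the real scanner covers far more secret formats.
OPENAI_KEY = re.compile(r"\bsk-[A-Za-z0-9]{8,}\b")

def scan_secrets(text: str) -> ScanResult:
    hits = OPENAI_KEY.findall(text)
    # Any hit marks the input invalid; score scales with hit count, capped at 1.0.
    return ScanResult(is_valid=not hits, risk_score=min(1.0, float(len(hits))))

result = scan_secrets("My API key is sk-abc12345secret")
print(f"Valid: {result.is_valid}, Risk: {result.risk_score}")  # Valid: False, Risk: 1.0
```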
### LLMRails - Full Pipeline

```python
import asyncio

from ultraguard import LLMRails
from ultraguard.config import RailsConfig

# Configure guardrails
config = RailsConfig(
    input_scanners=["toxicity", "secrets", "prompt_injection"],
    output_scanners=["bias", "json", "sensitive"],
)
rails = LLMRails(config=config)

# Use with async
async def main():
    result = await rails.generate_async([
        {"role": "user", "content": "Hello, how are you?"}
    ])
    print(result.text)

asyncio.run(main())
```
### Custom Actions

```python
from ultraguard import LLMRails, action

@action()
async def check_greeting(text: str) -> dict:
    greetings = ['hello', 'hi', 'hey']
    is_greeting = any(g in text.lower() for g in greetings)
    return {'is_valid': True, 'is_greeting': is_greeting}

rails = LLMRails(config={'input_scanners': ['greeting']})
rails.register_action(check_greeting)
```
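The `@action()` decorator pattern above is essentially a registry keyed by function name, which the rails runtime can later look up and await. A stdlib sketch of how such registration can work; the `ACTIONS` dict and `action` factory here are illustrative, not UltraGuard internals:

```python
import asyncio

ACTIONS = {}

def action(name=None):
    # Decorator factory: register the function under its own (or a given) name.
    def wrap(fn):
        ACTIONS[name or fn.__name__] = fn
        return fn
    return wrap

@action()
async def check_greeting(text: str) -> dict:
    greetings = ['hello', 'hi', 'hey']
    return {'is_valid': True, 'is_greeting': any(g in text.lower() for g in greetings)}

# A runtime resolves the action by name and awaits it.
result = asyncio.run(ACTIONS['check_greeting']('Hello there'))
print(result)  # {'is_valid': True, 'is_greeting': True}
```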
### Colang DSL - Conversation Flows

```python
from ultraguard import LLMRails

colang_config = """
define user express greeting
  "Hello!"
  "Hi there!"

define flow
  user express greeting
  bot express greeting

define bot express greeting
  "Hello! How can I assist you today?"
"""

rails = LLMRails.from_colang(colang_config)
result = rails.generate([{'role': 'user', 'content': 'Hello!'}])
```
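Under the hood, Colang-style flows hinge on mapping a free-form user message to the nearest example utterance. UltraGuard does this with embedding similarity; the sketch below substitutes `difflib` string similarity purely for illustration, and the `INTENTS`/`match_intent` names are hypothetical.

```python
from difflib import SequenceMatcher

# Example utterances per user intent, echoing the Colang snippet above
# (the "ask help" intent is invented here to make the matching non-trivial).
INTENTS = {
    "express greeting": ["Hello!", "Hi there!"],
    "ask help": ["Can you help me?", "I need assistance"],
}
BOT_RESPONSES = {
    "express greeting": "Hello! How can I assist you today?",
}

def match_intent(message: str) -> str:
    # Pick the intent whose example utterance is most similar to the message.
    def best(intent):
        return max(SequenceMatcher(None, message.lower(), u.lower()).ratio()
                   for u in INTENTS[intent])
    return max(INTENTS, key=best)

intent = match_intent("hi there")
print(BOT_RESPONSES[intent])  # Hello! How can I assist you today?
```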
## Scanners

### Input Scanners (18 scanners)

| Scanner | Description |
|---|---|
| Anonymize | Detect and mask PII/sensitive data |
| BanCode | Prevent code snippets in input |
| BanCompetitors | Block competitor names/mentions |
| BanSubstrings | Block specific substrings |
| BanTopics | Restrict specific topics |
| Code | Detect and analyze code content |
| EmotionDetection | Detect emotional content |
| Gibberish | Detect nonsensical input |
| InvisibleText | Detect hidden/invisible characters |
| Language | Identify input language |
| PromptInjection | Detect prompt injection attacks |
| ReadingTime | Estimate reading time |
| Regex | Pattern-based detection |
| Secrets | Detect API keys, tokens, passwords (160+ patterns) |
| Sentiment | Analyze sentiment of input |
| TokenLimit | Enforce token count limits |
| Toxicity | Detect toxic/harmful content |
### Output Scanners (22 scanners)

| Scanner | Description |
|---|---|
| BanCode | Prevent code in outputs |
| BanCompetitors | Block competitor mentions |
| BanSubstrings | Filter specific substrings |
| BanTopics | Restrict topics in output |
| Bias | Detect biased content |
| Code | Analyze code in output |
| Deanonymize | Restore masked PII |
| EmotionDetection | Detect emotional output |
| FactualConsistency | Check factual accuracy |
| Gibberish | Detect nonsensical output |
| JSON | Validate JSON structure |
| Language | Identify output language |
| LanguageSame | Ensure input/output language match |
| MaliciousURLs | Detect malicious URLs |
| NoRefusal | Check for refusal patterns |
| Regex | Pattern-based output filtering |
| Relevance | Check output relevance |
| Sensitive | Detect sensitive information |
| Sentiment | Analyze output sentiment |
| Toxicity | Detect toxic output |
| URLReachability | Validate URL accessibility |
## Pre-built Libraries (23 libraries)

UltraGuard includes production-ready security libraries:

| Library | Purpose |
|---|---|
| `content_safety` | Content safety classification |
| `injection_detection` | Prompt injection detection |
| `jailbreak_detection` | Jailbreak attempt detection |
| `sensitive_data` | PII detection and masking |
| `factchecking` | Fact verification |
| `hallucination` | Hallucination detection |
| `topic_safety` | Topic-based safety |
| `self_check` | Self-consistency checks |
| `llama_guard` | Meta Llama Guard integration |
| `guardrails_ai` | Guardrails AI integration |
| `gliner` | GLiNER NER models |
| `pangea` | Pangea security services |
| `privateai` | PrivateAI integration |
| `prompt_security` | Prompt security scanning |
| `activefence` | ActiveFence integration |
| `ai_defense` | AI Defense integration |
| `autoalign` | AutoAlign integration |
| `cleanlab` | Cleanlab integration |
| `fiddler` | Fiddler AI integration |
| `gcp_moderate_text` | Google Cloud moderation |
| `patronusai` | Patronus AI integration |
| `trend_micro` | Trend Micro integration |
## Configuration

### YAML Configuration

```yaml
# config.yml
models:
  - type: main
    engine: openai
    model: gpt-4

rails:
  input:
    flows:
      - check jailbreak
      - mask sensitive data
    scanners:
      - secrets
      - prompt_injection
      - toxicity
  output:
    flows:
      - self check facts
      - self check hallucination
    scanners:
      - sensitive
      - toxicity
      - bias
  config:
    sensitive_data_detection:
      input:
        entities:
          - PERSON
          - EMAIL_ADDRESS
          - PHONE_NUMBER
          - CREDIT_CARD
```
### Python Configuration

```python
from ultraguard import LLMRails
from ultraguard.config import RailsConfig

config = RailsConfig(
    input_scanners=['secrets', 'prompt_injection', 'toxicity'],
    output_scanners=['sensitive', 'bias', 'toxicity'],
    cache_enabled=True,
    tracing_enabled=True,
)
rails = LLMRails(config=config)
```
## CLI

```bash
# Scan text
ultraguard scan "Check this text" --scanners toxicity,secrets

# Start API server
ultraguard server --port 8000

# Interactive chat
ultraguard chat --config ./config

# Run evaluation
ultraguard eval --dataset ./tests/data

# List available scanners
ultraguard list-scanners
```
## REST API

Start the server:

```bash
ultraguard server --port 8000
```

### Endpoints

```
# Health check
GET /health

# Scan text
POST /scan
{
  "text": "Check this text",
  "scanners": ["toxicity", "secrets"]
}

# Chat completions
POST /v1/chat/completions
{
  "messages": [
    {"role": "user", "content": "Hello!"}
  ]
}
```
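The `/scan` endpoint can be called from any HTTP client. Below is a minimal `urllib` sketch that assembles the request shown above; the payload shape is assumed from the example, the `build_scan_request` helper is hypothetical, and actually sending it requires the server to be running on port 8000.

```python
import json
import urllib.request

def build_scan_request(text, scanners, base_url="http://localhost:8000"):
    # Assemble a POST /scan request matching the example payload shape.
    payload = json.dumps({"text": text, "scanners": scanners}).encode()
    return urllib.request.Request(
        f"{base_url}/scan",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_scan_request("Check this text", ["toxicity", "secrets"])
# urllib.request.urlopen(req) would send it against a running server.
print(req.full_url, req.get_method())  # http://localhost:8000/scan POST
```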
## Advanced Usage

### Streaming with Guardrails

```python
import asyncio

from ultraguard import LLMRails

rails = LLMRails(config={'input_scanners': ['prompt_injection']})

async def stream_response():
    async for chunk in rails.stream_async([
        {'role': 'user', 'content': 'Tell me a story'}
    ]):
        print(chunk, end='', flush=True)

asyncio.run(stream_response())
```
### LangChain Integration

```python
from langchain_openai import ChatOpenAI
from ultraguard import LLMRails
from ultraguard.integrations.langchain import RunnableUltraGuard

llm = ChatOpenAI(model='gpt-4')
rails = LLMRails(
    config={'output_scanners': ['sensitive', 'toxicity']},
    llm=llm,
)

# Use as a Runnable
chain = RunnableUltraGuard(rails)
result = chain.invoke([{'role': 'user', 'content': 'Hello!'}])
```
### Distributed Tracing

```python
from ultraguard.tracing import TracingConfig, TracingProvider, configure_tracing

config = TracingConfig(
    enabled=True,
    provider=TracingProvider.OPENTELEMETRY,
    endpoint='http://localhost:4317',
)
configure_tracing(config)
```
### Embedding Providers

```python
import asyncio

from ultraguard.embeddings import OpenAIEmbeddings, SentenceTransformersEmbeddings

async def main():
    # OpenAI embeddings
    embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
    vectors = await embeddings.embed(["Hello world"])

    # Sentence Transformers (local)
    embeddings = SentenceTransformersEmbeddings(model="all-MiniLM-L6-v2")
    vectors = await embeddings.embed(["Hello world"])

asyncio.run(main())
```
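Whichever provider produces the vectors, downstream comparisons (e.g. matching a message against example utterances) typically use cosine similarity. A stdlib sketch of that computation:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

v1 = [0.1, 0.9, 0.2]
v2 = [0.1, 0.8, 0.3]   # close in direction to v1
v3 = [-0.9, 0.1, 0.0]  # nearly orthogonal to v1
print(cosine_similarity(v1, v2) > cosine_similarity(v1, v3))  # True
```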
## Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

### Development Setup

```bash
# Clone the repository
git clone https://github.com/Nipurn123/UltraGuard.git
cd UltraGuard

# Create virtual environment
python -m venv venv
source venv/bin/activate

# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest tests/ -v

# Run linting
ruff check .
ruff format .
```
## License

This project is licensed under the MIT License - see the LICENSE file for details.

Built with ❤️ by 100XPrompt