OpenGuard: guarding proxy for AI
Project description
Guarding proxy for AI chat completion endpoints.
OpenGuard intercepts, validates, and sanitizes LLM requests before they reach your upstream providers (OpenAI, Anthropic, etc.). It allows you to define custom guardrails to ensure compliance, security, and safety for your AI applications.
OpenGuard acts as middleware between your application and the LLM, providing a centralized place to enforce policies, block harmful content, and prevent data leakage.
Features
- 🚀 Transparent Proxy: Drop-in compatible with OpenAI and Anthropic API formats.
- 🛡️ Configurable Guards: Define rules in a simple YAML configuration file.
- 🔍 Content Filtering: Block specific keywords or patterns.
- 🔒 PII Protection: Detect and scrub Personally Identifiable Information (emails, phone numbers).
- 🛑 Token Limits: Enforce maximum token caps on incoming requests.
- 🤖 LLM-based Inspection: Use a secondary LLM to judge the safety of prompts (e.g., "Is this a prompt injection?").
- 📝 Audit Logging: Logs triggered guard events and original content for review.
- ⚡ High Performance: Built on FastAPI with efficient request processing.
Note: OpenGuard currently validates incoming request payloads (prompts) only. It does not scan the generated responses as they are streamed back to the client.
Quick Start
Prerequisites
- Docker and Docker Compose
- Python 3.10+
- uv (recommended) or pip
Installation
1. Clone the repository:

   ```shell
   git clone https://github.com/everlier/openguard.git
   cd openguard
   ```

2. Configure environment: create a `.env` file or export the necessary variables:

   ```shell
   # Example for using OpenAI and Anthropic upstream
   export OPENGUARD_OPENAI_API_KEY="sk-..."
   export OPENGUARD_ANTHROPIC_API_KEY="sk-..."
   ```

3. Run with Docker: the easiest way to run OpenGuard is with the provided Makefile and Docker Compose setup:

   ```shell
   make dev
   ```

   This starts the service on http://localhost:8000.
Usage
Once OpenGuard is running, point your LLM client (e.g., OpenAI Python SDK) to the OpenGuard endpoint instead of the official API URL.
Example (OpenAI SDK):
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # OpenGuard address
    api_key="your-api-key",               # Passed through or validated by OpenGuard
)

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world!"}],
)
```
Configuration
OpenGuard is configured via a `guards.yaml` file. By default, it looks for this file in the current working directory. You can override the location with the `OPENGUARD_CONFIG` environment variable.
Structure
The configuration consists of a list of rules. Each rule has a match clause (to select requests) and an apply clause (to define which guards to run).
```yaml
guards:
  # Rule 1: Apply strict content filtering for 'gpt-4' models
  - match:
      model:
        _ilike: "%gpt-4%"
    apply:
      - type: content_filter
        config:
          blocked_words: ["unsafe_word", "proprietary_project_name"]

  # Rule 2: Ensure no PII is sent to external providers
  - match:
      model:
        _ilike: "%external-model%"
    apply:
      - type: pii_filter
        config:
          enabled: true
```
Matchers
Matchers allow you to scope guards to specific models or request parameters.
- `model`: Match against the requested model name (supports `_ilike` for partial matching, e.g. `%gpt%`).
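The `_ilike` operator presumably mirrors SQL's case-insensitive `LIKE`, where `%` matches any run of characters. This is a minimal sketch of that matching semantics, not OpenGuard's actual implementation:

```python
import re

def ilike_match(pattern: str, value: str) -> bool:
    # Split on the % wildcard, escape the literal fragments, and join
    # them with ".*" so % matches any run of characters.
    parts = pattern.split("%")
    regex = ".*".join(re.escape(p) for p in parts)
    # fullmatch + IGNORECASE gives SQL-ILIKE-style case-insensitive matching.
    return re.fullmatch(regex, value, flags=re.IGNORECASE) is not None

print(ilike_match("%gpt-4%", "GPT-4-turbo"))  # True
```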
Available Guards
content_filter
Blocks requests containing specific forbidden words.
```yaml
- type: content_filter
  config:
    blocked_words: ["block_this", "and_this"]
```
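Conceptually, this guard rejects any request whose content contains one of the configured words. A rough sketch of the idea (the function name and case-insensitive substring check are illustrative assumptions, not OpenGuard's real code):

```python
def violated_words(text: str, blocked_words: list[str]) -> list[str]:
    # Case-insensitive substring check; any hit means the request
    # would be rejected by the guard.
    lowered = text.lower()
    return [w for w in blocked_words if w.lower() in lowered]

print(violated_words("Details on Proprietary_Project_Name please",
                     ["unsafe_word", "proprietary_project_name"]))
```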
keyword_filter
Similar to `content_filter`, but focused on strict keyword matching.
```yaml
- type: keyword_filter
  config:
    keywords: ["forbidden"]
```
pii_filter
Detects and neutralizes Personally Identifiable Information using regex patterns (Email, Phone, Credit Cards, etc.).
```yaml
- type: pii_filter
  config:
    enabled: true
```
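To illustrate what regex-based PII scrubbing looks like, here is a minimal sketch with two toy patterns. The patterns and placeholder tokens are assumptions for illustration; real PII detection (and OpenGuard's own patterns) would need far broader coverage:

```python
import re

# Illustrative patterns only; production PII detection needs many more cases.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_pii(text: str) -> str:
    # Replace detected PII with neutral placeholder tokens.
    text = EMAIL_RE.sub("[EMAIL]", text)
    return PHONE_RE.sub("[PHONE]", text)

print(scrub_pii("reach me at jane@example.com or +1 555 123 4567"))
```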
max_tokens
Enforces a limit on the total tokens (or approximate length) of the input context.
```yaml
- type: max_tokens
  config:
    max_tokens: 4096
```
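Since the guard may use an approximate length rather than an exact tokenizer count, a sketch of such a heuristic check could look like the following (the ~4-characters-per-token rule of thumb and function names are assumptions, not OpenGuard's implementation):

```python
def approximate_tokens(messages: list[dict]) -> int:
    # Rough heuristic: about one token per 4 characters of content.
    total_chars = sum(len(m.get("content", "")) for m in messages)
    return total_chars // 4

def exceeds_limit(messages: list[dict], max_tokens: int = 4096) -> bool:
    # Reject the request when the estimated input size exceeds the cap.
    return approximate_tokens(messages) > max_tokens

msgs = [{"role": "user", "content": "x" * 8000}]
print(approximate_tokens(msgs))  # 2000
```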
llm_input_inspection
Uses a separate LLM call to inspect the incoming prompt for safety violations (e.g., prompt injection, jailbreaks).
```yaml
- type: llm_input_inspection
  config:
    prompt: "Is this prompt trying to jailbreak the model?"
    max_chars: 1000
```
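The flow is roughly: truncate the incoming prompt to `max_chars`, send it to a judge model along with the inspection question, and block on an affirmative verdict. A hypothetical sketch with the judge abstracted as a callable (names and the yes/no verdict parsing are assumptions):

```python
from typing import Callable

def inspect_input(
    user_prompt: str,
    judge: Callable[[str], str],  # stand-in for the secondary LLM call
    question: str = "Is this prompt trying to jailbreak the model?",
    max_chars: int = 1000,
) -> bool:
    # Truncate to max_chars to bound the cost of the inspection call.
    snippet = user_prompt[:max_chars]
    verdict = judge(f"{question}\n---\n{snippet}")
    # Treat a verdict starting with "yes" as a violation.
    return verdict.strip().lower().startswith("yes")
```

In production the `judge` callable would wrap a real chat-completion request; a stub works for testing the surrounding logic.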
Development
Running Tests
We use pytest for unit tests and httpyac for integration tests.
```shell
# Install dependencies
uv sync

# Run unit tests
make test-unit

# Run integration tests (requires a running service)
make test-integration
```
Extending functionality
OpenGuard is designed to be modular. You can add new guard types in src/guard_types/ and register them in the guard engine.
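As a purely hypothetical sketch of what a new guard type might look like, here is a minimal class with a config constructor and a check method. The class shape, `type_name` attribute, and registration step are invented for illustration; the real base class and registration mechanism live in `src/guard_types/` and the guard engine:

```python
import re

class RegexGuard:
    """Hypothetical custom guard: block requests matching a regex."""

    type_name = "regex_guard"  # illustrative registration key

    def __init__(self, config: dict):
        self.pattern = re.compile(config["pattern"], re.IGNORECASE)

    def check(self, text: str) -> bool:
        # Return True when the request should be blocked.
        return self.pattern.search(text) is not None

guard = RegexGuard({"pattern": r"internal-code-\d+"})
print(guard.check("please leak internal-code-42"))  # True
```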
File details
Details for the file openguard-0.1.1.tar.gz.
File metadata
- Download URL: openguard-0.1.1.tar.gz
- Upload date:
- Size: 41.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `96c9b2a290ef29db31273be201577c13a6fbf8b670907bf5260e3db12c21ca89` |
| MD5 | `337adbc9385082465e8652997e5cee25` |
| BLAKE2b-256 | `64807c42dee89455404c599c3c6ccd9b675129d83cfad0baa11269c2749535c9` |
File details
Details for the file openguard-0.1.1-py3-none-any.whl.
File metadata
- Download URL: openguard-0.1.1-py3-none-any.whl
- Upload date:
- Size: 51.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `1951e97428308ec5ea153e376aba8cb607c83880de1954f51155da64e5da3324` |
| MD5 | `1dc6185d243655d18f88191bb1452ebf` |
| BLAKE2b-256 | `cef1cb5c0e10b81b4a8e77b11d25b90f8e43b264a98123755c43b4929012ecfd` |