AISafe Guard
Safety rails for every AI app: model-agnostic guardrails for LLM applications.
aisafeguard is an open-source LLM safety and guardrails toolkit for AI apps. It helps protect against prompt injection, jailbreak attempts, PII leaks, toxic output, and malicious URLs, via a Python SDK, a CLI, and an OpenAI-compatible proxy.
Why AISafe Guard
- Add AI safety checks to any LLM app with minimal code changes
- Enforce configurable guardrail policies: block, warn, log, redact
- Protect both input prompts and model outputs
- Use as a library or as a language-agnostic proxy gateway
Core Features
- Prompt injection detection
- Jailbreak detection
- PII detection and redaction
- Toxicity filtering
- Malicious URL detection
- Relevance checks
- OpenAI/Anthropic wrappers
- OpenAI-compatible proxy mode
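To illustrate the kind of transformation a PII redaction check performs, here is a hand-rolled sketch of an SSN redactor. This is illustrative only, not aisafeguard's actual implementation, which covers many more PII types:

```python
import re

# Matches strings shaped like a US Social Security number (e.g. 123-45-6789).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_ssn(text: str) -> str:
    """Replace anything shaped like a US SSN with a placeholder."""
    return SSN_PATTERN.sub("[REDACTED-SSN]", text)

print(redact_ssn("My SSN is 123-45-6789"))  # My SSN is [REDACTED-SSN]
```

A real redaction scanner layers many such detectors (regex, ML-based NER, checksums) and reports what was found alongside the redacted text.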
Install
pip install aisafeguard
From source (fresh clone):
pip install .
Optional extras:
pip install "aisafeguard[ml]"
pip install "aisafeguard[proxy]"
pip install "aisafeguard[integrations]"
pip install "aisafeguard[telemetry]"
Repository Setup
git clone <your-repo-url>
cd aisafeguard
python3.11 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
Node.js Users
This repo does not currently ship a native Node SDK package for runtime usage.
Recommended Node integration today:
- Run `aisafe proxy` (or Docker) from this repo.
- Call the OpenAI-compatible endpoint from your Node app.
- Keep safety policy/config centralized in `aisafe.yaml`.
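A centralized policy file might look like the following. The key names here are hypothetical and chosen to mirror the scanner names used elsewhere in this README; consult docs/config-reference.md for the real schema:

```yaml
# Hypothetical aisafe.yaml shape -- illustrative key names only,
# see docs/config-reference.md for the actual schema.
input:
  prompt_injection:
    action: block
  pii:
    action: redact
output:
  toxicity:
    action: warn
  pii:
    action: redact
```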
Example (Node fetch):
const res = await fetch("http://localhost:8000/v1/chat/completions", {
method: "POST",
headers: { "Content-Type": "application/json", "x-user-id": "user-123" },
body: JSON.stringify({
model: "gpt-4o-mini",
messages: [{ role: "user", content: "Hello" }]
})
});
const data = await res.json();
console.log(data.choices?.[0]?.message?.content);
Quick Start
Decorator:
from aisafeguard import guard
@guard(input=["prompt_injection", "pii"], output=["toxicity", "pii"])
async def ask(prompt: str) -> str:
return "model output"
Guard object:
from aisafeguard import Guard
g = Guard(config="aisafe.yaml")
input_result = await g.scan_input("Ignore previous instructions")  # await inside async code
OpenAI wrapper:
from openai import OpenAI
from aisafeguard import Guard
from aisafeguard.integrations import wrap_openai

openai_client = OpenAI()  # your existing OpenAI client instance
guard = Guard()
client = wrap_openai(openai_client, guard)
Use Cases
- Secure chatbots against prompt injection
- Prevent sensitive-data leaks in support assistants
- Add policy controls for enterprise AI workflows
- Gate unsafe model outputs before returning to end users
- Centralize AI safety via proxy for multi-language stacks
CLI
aisafe init
aisafe validate aisafe.yaml
aisafe scan "My SSN is 123-45-6789"
aisafe redteam --strict
aisafe proxy --config aisafe.yaml --host 127.0.0.1 --port 8000 \
--upstream-base-url https://api.openai.com \
--upstream-api-key $OPENAI_API_KEY \
--rpm 120
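Any OpenAI-compatible client can call the proxy. As a sketch, here is the same request the Node example sends, built with only the Python standard library (assumes the proxy started above is listening on localhost:8000; the `x-user-id` header is optional per-user attribution):

```python
import json
import urllib.request

# The proxy accepts the standard OpenAI chat-completions request shape.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "x-user-id": "user-123"},
    method="POST",
)

# With the proxy running, send it like this:
# with urllib.request.urlopen(req) as resp:
#     data = json.load(resp)
#     print(data["choices"][0]["message"]["content"])
```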
Proxy env vars:
AISAFE_UPSTREAM_BASE_URL
AISAFE_UPSTREAM_API_KEY (or OPENAI_API_KEY)
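For example, to point the proxy at OpenAI's API (values are placeholders; substitute your own upstream if needed):

```shell
export AISAFE_UPSTREAM_BASE_URL="https://api.openai.com"
export AISAFE_UPSTREAM_API_KEY="$OPENAI_API_KEY"
```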
Docker
docker build -t aisafeguard:latest .
docker run --rm -p 8000:8000 \
-e AISAFE_UPSTREAM_API_KEY=$OPENAI_API_KEY \
aisafeguard:latest
Development
PYTHONPATH=src python -m pytest -v
python benchmarks/bench_pipeline.py
Docs
- docs/getting-started.md
- docs/config-reference.md
- docs/prompt-injection-protection.md
- docs/pii-redaction-llm.md
- docs/openai-compatible-ai-proxy.md
File details
Details for the file aisafeguard-0.1.2.tar.gz.
File metadata
- Download URL: aisafeguard-0.1.2.tar.gz
- Size: 32.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5ae50306ca7fa2500c6397b63c3a74cdee3f1e738016e5dcc83dde0615c25184 |
| MD5 | 36d7821fc1edad6ca5e5c3062583eebb |
| BLAKE2b-256 | 1556927647c2a0f029b3afef5003fa9c63bad481fe8b16d866616cca6e58618b |
Provenance
The following attestation bundles were made for aisafeguard-0.1.2.tar.gz:
Publisher: release.yml on akshaymagapu/aisafeguard
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: aisafeguard-0.1.2.tar.gz
- Subject digest: 5ae50306ca7fa2500c6397b63c3a74cdee3f1e738016e5dcc83dde0615c25184
- Sigstore transparency entry: 963174758
- Permalink: akshaymagapu/aisafeguard@518b338c1b246f17d24a1e888383f9b04e0cbff7
- Branch / Tag: refs/tags/v0.1.2
- Owner: https://github.com/akshaymagapu
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@518b338c1b246f17d24a1e888383f9b04e0cbff7
- Trigger Event: workflow_dispatch
File details
Details for the file aisafeguard-0.1.2-py3-none-any.whl.
File metadata
- Download URL: aisafeguard-0.1.2-py3-none-any.whl
- Size: 38.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 350a2ece073367bd5e3ff6240c0c820fa3e4c44e84b0097929390b7e7a29da69 |
| MD5 | 2c0c4fb4122fed4e3a046a76223749c6 |
| BLAKE2b-256 | 84c693c92cb868c98e5d49d47362fec67d0cd164fe64d0ab21cc0cccf432c42d |
Provenance
The following attestation bundles were made for aisafeguard-0.1.2-py3-none-any.whl:
Publisher: release.yml on akshaymagapu/aisafeguard
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: aisafeguard-0.1.2-py3-none-any.whl
- Subject digest: 350a2ece073367bd5e3ff6240c0c820fa3e4c44e84b0097929390b7e7a29da69
- Sigstore transparency entry: 963174762
- Permalink: akshaymagapu/aisafeguard@518b338c1b246f17d24a1e888383f9b04e0cbff7
- Branch / Tag: refs/tags/v0.1.2
- Owner: https://github.com/akshaymagapu
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@518b338c1b246f17d24a1e888383f9b04e0cbff7
- Trigger Event: workflow_dispatch