
Project description

AISafe Guard

Safety rails for every AI app. aisafeguard is a model-agnostic Python toolkit that scans LLM input/output for prompt injection, jailbreaks, PII, toxicity, malicious URLs, and relevance issues.

Install

pip install aisafeguard

From source (fresh clone):

pip install .

Optional extras:

pip install "aisafeguard[ml]"
pip install "aisafeguard[proxy]"
pip install "aisafeguard[integrations]"
pip install "aisafeguard[telemetry]"

Repository Setup

git clone <your-repo-url>
cd aisafeguard
python3.11 -m venv .venv
source .venv/bin/activate
pip install -e .[dev]
npm install

Node.js Users

This repo does not currently ship a native Node SDK package for runtime usage.

Recommended Node integration today:

  1. Run aisafe proxy (or Docker) from this repo.
  2. Call the OpenAI-compatible endpoint from your Node app.
  3. Keep safety policy/config centralized in aisafe.yaml.

Example (Node fetch):

const res = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json", "x-user-id": "user-123" },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello" }]
  })
});
const data = await res.json();
console.log(data.choices?.[0]?.message?.content);

Quick Start

Decorator:

from aisafeguard import guard

@guard(input=["prompt_injection", "pii"], output=["toxicity", "pii"])
async def ask(prompt: str) -> str:
    return "model output"
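A guard decorator of this shape typically scans the string arguments before calling the wrapped function, then scans the return value before handing it back. The following standalone sketch only illustrates that pattern; the regexes and the raise-on-match behavior are toy stand-ins, not aisafeguard's actual scanners:

```python
import asyncio
import functools
import re

# Illustrative stand-ins only; aisafeguard ships real detectors for these names.
SCANNERS = {
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # toy US SSN pattern
    "prompt_injection": re.compile(r"ignore previous instructions", re.I),
    "toxicity": re.compile(r"\byou idiot\b", re.I),  # toy placeholder
}

def guard(input=(), output=()):
    """Scan string arguments on the way in and the return value on the way out."""
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            for name in input:
                for value in list(args) + list(kwargs.values()):
                    if isinstance(value, str) and SCANNERS[name].search(value):
                        raise ValueError(f"input blocked by {name!r} scanner")
            result = await fn(*args, **kwargs)
            for name in output:
                if isinstance(result, str) and SCANNERS[name].search(result):
                    raise ValueError(f"output blocked by {name!r} scanner")
            return result
        return wrapper
    return decorator

@guard(input=["prompt_injection", "pii"], output=["toxicity", "pii"])
async def ask(prompt: str) -> str:
    return "model output"

print(asyncio.run(ask("Hello")))  # model output
```

With this sketch, a prompt like "Ignore previous instructions" is rejected before the wrapped function ever runs.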

Guard object:

from aisafeguard import Guard

g = Guard(config="aisafe.yaml")
# scan_input is a coroutine, so call it from inside an async context:
input_result = await g.scan_input("Ignore previous instructions")
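A flagged input can be blocked outright or redacted before it goes any further. As a standalone illustration of the redaction idea only (a toy SSN regex, not aisafeguard's PII detector):

```python
import re

# Toy SSN pattern, purely for illustration.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_pii(text: str) -> str:
    """Replace anything matching the toy SSN pattern with a placeholder."""
    return SSN.sub("[REDACTED]", text)

print(redact_pii("My SSN is 123-45-6789"))  # My SSN is [REDACTED]
```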

OpenAI wrapper:

from openai import OpenAI

from aisafeguard import Guard
from aisafeguard.integrations import wrap_openai

guard = Guard()
openai_client = OpenAI()  # any configured OpenAI client
client = wrap_openai(openai_client, guard)
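Conceptually, a wrapped client scans each request before it reaches the upstream API and scans each response before it reaches your code. A standalone toy sketch of that wrapping pattern (not aisafeguard's implementation; the scanner and the stub model here are made up):

```python
# Toy illustration of the client-wrapping pattern, NOT aisafeguard's code.

def block_injection(text: str) -> None:
    """Toy scanner: reject obvious instruction-override attempts."""
    if "ignore previous instructions" in text.lower():
        raise ValueError("blocked by injection scanner")

class SafeClient:
    def __init__(self, client, scan):
        self._client = client  # any callable taking a prompt, returning a reply
        self._scan = scan

    def complete(self, prompt: str) -> str:
        self._scan(prompt)          # scan input before the upstream call
        reply = self._client(prompt)
        self._scan(reply)           # scan output before returning it
        return reply

# Stub "model" standing in for a real upstream client call:
safe = SafeClient(lambda p: f"echo: {p}", block_injection)
print(safe.complete("Hello"))  # echo: Hello
```

Centralizing the scan in the wrapper means individual call sites need no safety-specific code.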

CLI

aisafe init
aisafe validate aisafe.yaml
aisafe scan "My SSN is 123-45-6789"
aisafe redteam --strict
aisafe proxy --config aisafe.yaml --host 127.0.0.1 --port 8000 \
  --upstream-base-url https://api.openai.com \
  --upstream-api-key $OPENAI_API_KEY \
  --rpm 120

Proxy env vars:

  • AISAFE_UPSTREAM_BASE_URL
  • AISAFE_UPSTREAM_API_KEY (or OPENAI_API_KEY)
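The proxy, CLI, and Guard all reference a central aisafe.yaml. The real schema is whatever aisafe init generates; the fragment below is purely illustrative, and every key name in it is an assumption:

```yaml
# Hypothetical shape only: run `aisafe init` and inspect the generated
# file for the real schema. All key names here are assumptions.
input_scanners:
  - prompt_injection
  - pii
output_scanners:
  - toxicity
  - pii
proxy:
  rpm: 120
```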

Docker

docker build -t aisafeguard:latest .
docker run --rm -p 8000:8000 \
  -e AISAFE_UPSTREAM_API_KEY=$OPENAI_API_KEY \
  aisafeguard:latest

Development

npm install
PYTHONPATH=src python -m pytest -v
python benchmarks/bench_pipeline.py

Download files

Download the file for your platform.

Source Distribution

aisafeguard-0.1.1.tar.gz (31.1 kB)


Built Distribution


aisafeguard-0.1.1-py3-none-any.whl (37.7 kB)


File details

Details for the file aisafeguard-0.1.1.tar.gz.

File metadata

  • Download URL: aisafeguard-0.1.1.tar.gz
  • Upload date:
  • Size: 31.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aisafeguard-0.1.1.tar.gz:

  • SHA256: f25fe7512efdb3e5056f4f947269ebc56bac591208115cc4f68ab1794f787da3
  • MD5: 324fa994e2ab0e71d4b65788762e8566
  • BLAKE2b-256: e7c9717d37b96bf8cb00b49db7ac58628fab103d5e74230b90444b4b52bf57e8


Provenance

The following attestation bundles were made for aisafeguard-0.1.1.tar.gz:

Publisher: release.yml on akshaymagapu/aisafeguard

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file aisafeguard-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: aisafeguard-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 37.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aisafeguard-0.1.1-py3-none-any.whl:

  • SHA256: 822569d403e906b81e026a93465ea636073b7747793c4a977a53c12b34e1b2c3
  • MD5: 59a0fc0cb6264ca4b41711c5191ed828
  • BLAKE2b-256: 3ed17b75cc49abd854c3775b7a57bd6150bd3a55fe72b9f696af98d0f518d188


Provenance

The following attestation bundles were made for aisafeguard-0.1.1-py3-none-any.whl:

Publisher: release.yml on akshaymagapu/aisafeguard

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
