
Safety rails for every AI app — model-agnostic guardrails for LLM applications


AISafe Guard

Safety rails for every AI app. aisafeguard is a model-agnostic Python toolkit that scans LLM input/output for prompt injection, jailbreaks, PII, toxicity, malicious URLs, and relevance issues.

Install

pip install aisafeguard

From source (fresh clone):

pip install .

Optional extras:

pip install "aisafeguard[ml]"
pip install "aisafeguard[proxy]"
pip install "aisafeguard[integrations]"
pip install "aisafeguard[telemetry]"

Repository Setup

git clone <your-repo-url>
cd aisafeguard
python3.11 -m venv .venv
source .venv/bin/activate
pip install -e .[dev]
npm install

Node.js Users

This repo does not currently ship a native Node SDK package for runtime usage.

Recommended Node integration today:

  1. Run aisafe proxy (or Docker) from this repo.
  2. Call the OpenAI-compatible endpoint from your Node app.
  3. Keep safety policy/config centralized in aisafe.yaml.

Example (Node fetch):

const res = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json", "x-user-id": "user-123" },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello" }]
  })
});
const data = await res.json();
console.log(data.choices?.[0]?.message?.content);

Quick Start

Decorator:

from aisafeguard import guard

@guard(input=["prompt_injection", "pii"], output=["toxicity", "pii"])
async def ask(prompt: str) -> str:
    return "model output"

Guard object:

from aisafeguard import Guard

g = Guard(config="aisafe.yaml")
input_result = await g.scan_input("Ignore previous instructions")
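For intuition only, here is a minimal sketch of the kind of heuristic an input scanner applies to catch phrases like "Ignore previous instructions". This is not aisafeguard's implementation: the patterns, the function name, and the boolean return shape are all our assumptions.

```python
import re

# Illustrative patterns only -- real detectors are far more sophisticated.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .* system prompt",
]

def looks_like_injection(text: str) -> bool:
    """Return True if the text matches a known prompt-injection phrase."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

print(looks_like_injection("Ignore previous instructions"))   # True
print(looks_like_injection("What is the capital of France?")) # False
```

In practice the Guard object combines many such detectors (ML-based and rule-based) behind `scan_input`, so application code never deals with raw patterns.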

OpenAI wrapper:

from aisafeguard import Guard
from aisafeguard.integrations import wrap_openai

guard = Guard()
client = wrap_openai(openai_client, guard)

CLI

aisafe init
aisafe validate aisafe.yaml
aisafe scan "My SSN is 123-45-6789"
aisafe redteam --strict
aisafe proxy --config aisafe.yaml --host 127.0.0.1 --port 8000 \
  --upstream-base-url https://api.openai.com \
  --upstream-api-key $OPENAI_API_KEY \
  --rpm 120
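The `aisafe scan` example above flags the SSN as PII. As a rough illustration of what an SSN-style check involves, the sketch below redacts matches with a simple pattern; the regex and the `redact_ssn` helper are ours, not aisafeguard's detector.

```python
import re

# Naive SSN pattern: three digits, two digits, four digits with hyphens.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_ssn(text: str) -> str:
    """Replace SSN-shaped substrings with a redaction marker."""
    return SSN_RE.sub("[REDACTED-SSN]", text)

print(redact_ssn("My SSN is 123-45-6789"))
# My SSN is [REDACTED-SSN]
```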

Proxy env vars:

  • AISAFE_UPSTREAM_BASE_URL
  • AISAFE_UPSTREAM_API_KEY (or OPENAI_API_KEY)
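For orientation, a hypothetical aisafe.yaml might look like the sketch below. Every key name here is an assumption about the schema, drawn from the scanner names and proxy flags shown above; generate the real file with `aisafe init` and check it with `aisafe validate`.

```yaml
# Hypothetical sketch -- not the actual schema.
input_scanners:
  - prompt_injection
  - pii
output_scanners:
  - toxicity
  - pii
proxy:
  upstream_base_url: https://api.openai.com
  rpm: 120
```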

Docker

docker build -t aisafeguard:latest .
docker run --rm -p 8000:8000 \
  -e AISAFE_UPSTREAM_API_KEY=$OPENAI_API_KEY \
  aisafeguard:latest
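The same container can be run under Docker Compose. This fragment mirrors the `docker run` flags above; the service name is arbitrary.

```yaml
# docker-compose.yml equivalent of the docker run command above.
services:
  aisafeguard:
    image: aisafeguard:latest
    ports:
      - "8000:8000"
    environment:
      AISAFE_UPSTREAM_API_KEY: ${OPENAI_API_KEY}
```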

Development

npm install
PYTHONPATH=src python -m pytest -v
python benchmarks/bench_pipeline.py

Release and Publishing

  • Python package publish is automated by .github/workflows/release.yml.
  • Trigger: push a tag like v0.1.0.
  • Requirement: set PYPI_API_TOKEN in repo secrets.
  • npm is currently used only for repo tooling (commitlint) via package.json; the package is marked private, so nothing is published to npm.

Get PYPI_API_TOKEN

  1. Create or log in to PyPI.
  2. Go to Account Settings -> API tokens -> Add API token.
  3. Create a token scoped to the project (recommended) or account.
  4. In GitHub repo settings: Settings -> Secrets and variables -> Actions -> New repository secret.
  5. Name it PYPI_API_TOKEN and paste the token value.

Node Package Support

This repo currently supports Node apps through the proxy API (HTTP), not a native npm runtime SDK.

If you want a real npm package, typical work is:

  1. Build a JS/TS client SDK (@aisafeguard/sdk) for scan/proxy calls.
  2. Add package build pipeline (tsup/rollup) and type exports.
  3. Add npm publish workflow + NPM_TOKEN secret.
  4. Maintain semver/versioning across Python + Node releases.

Estimated effort:

  • Basic SDK wrapper: 1-2 days.
  • Production-grade SDK + docs/tests/release automation: 3-7 days.

Download files

Download the file for your platform.

Source Distribution

aisafeguard-0.1.0.tar.gz (32.4 kB)


Built Distribution


aisafeguard-0.1.0-py3-none-any.whl (38.3 kB)


File details

Details for the file aisafeguard-0.1.0.tar.gz.

File metadata

  • Download URL: aisafeguard-0.1.0.tar.gz
  • Upload date:
  • Size: 32.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aisafeguard-0.1.0.tar.gz:

  • SHA256: c47fe6843f2bf6991eda5ca5d614336acac30a4684637ea654d6411af40c1c97
  • MD5: 55ae044590148716a206f7c5aab1e1c4
  • BLAKE2b-256: 098c392390b177ed2b1a9fbbaf15b07f95b4677160d90e640504f8d9f7326a21

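To verify a downloaded artifact against the SHA256 digest above, a standard hashlib check suffices. The `sha256_of` helper is ours; the expected digest is the one published for the sdist on this page.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Digest published for the sdist on this page:
EXPECTED = "c47fe6843f2bf6991eda5ca5d614336acac30a4684637ea654d6411af40c1c97"

# After downloading the sdist:
# assert sha256_of("aisafeguard-0.1.0.tar.gz") == EXPECTED
```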

Provenance

The following attestation bundles were made for aisafeguard-0.1.0.tar.gz:

Publisher: release.yml on akshaymagapu/aisafeguard

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file aisafeguard-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: aisafeguard-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 38.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aisafeguard-0.1.0-py3-none-any.whl:

  • SHA256: 4b14055587d0b1544df6e517b2ff7251563d12246a94c387cfa1dfa5aadedb40
  • MD5: 08760532034db3605eaea66894e2e9a9
  • BLAKE2b-256: b53eaf8af372ac8f37b8bfc32dcec75cc4fe6bbed30efcf56dd6f1a4cc7df9f1


Provenance

The following attestation bundles were made for aisafeguard-0.1.0-py3-none-any.whl:

Publisher: release.yml on akshaymagapu/aisafeguard

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
