
Composable AI safety pipeline framework with industry compliance packs

Project description

GuardrailGraph

Composable AI safety pipeline framework — define guardrails as a DAG of checks that work across any LLM provider, with industry-specific compliance packs for HIPAA, SOX, GDPR, and FedRAMP.


Why GuardrailGraph?

Every enterprise deploying LLMs needs guardrails. Current options are either provider-locked (Bedrock Guardrails), complex (NeMo Guardrails), or limited (Guardrails AI). GuardrailGraph is the first framework that combines:

  • Composable DAG execution — checks run in parallel for low latency
  • Provider agnostic — works with Bedrock, OpenAI, Anthropic, or any LLM
  • Industry compliance packs — HIPAA, SOX, GDPR out of the box
  • Serverless-native — designed for AWS Lambda from day one
  • Simple API — @check decorator + pipeline() builder

Installation

# Python
pip install substrai-guardrailgraph

# npm (TypeScript/JavaScript)
npm install @substrai/guardrailgraph

Quick Start

5-Minute Setup

from guardrailgraph import pipeline, check, Action
from guardrailgraph.checks import pii_check, toxicity_check, injection_check

# Create a pipeline with built-in checks
my_pipeline = pipeline(
    name="my-app",
    checks=[
        pii_check(action=Action.REDACT),
        toxicity_check(threshold=0.7),
        injection_check(),
    ],
    mode="fail-closed",
)

# Run guardrails on any text
result = my_pipeline.run("User input here")

if result.allowed:
    # Safe to forward to LLM
    text = result.modified_text or "User input here"
else:
    # Content blocked
    print(f"Blocked: {result.action.value}")
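The mode flag controls what happens when a check itself fails. As an illustration, here is a standalone sketch of the semantics (decide and the outcome strings are hypothetical, not the library's internals): fail-closed treats a check error as a block, while fail-open lets the text through.

```python
# Illustrative sketch of fail-closed vs. fail-open semantics.
def decide(check_results, mode="fail-closed"):
    """check_results: list of (name, outcome) pairs, where outcome is
    "pass", "block", or "error" (the check itself crashed/timed out)."""
    for name, outcome in check_results:
        if outcome == "block":
            return False  # an explicit block always wins
        if outcome == "error" and mode == "fail-closed":
            return False  # fail-closed: treat check failure as a block
    return True

print(decide([("pii", "pass"), ("toxicity", "error")], mode="fail-closed"))  # False
print(decide([("pii", "pass"), ("toxicity", "error")], mode="fail-open"))    # True
```

The trade-off: fail-closed favors safety (an outage in a check blocks traffic), fail-open favors availability.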

Custom Checks

from guardrailgraph import check, Action

@check(name="profanity", action=Action.BLOCK, threshold=0.7)
def check_profanity(text: str) -> dict:
    """Custom profanity detection."""
    bad_words = ["badword1", "badword2"]
    found = [w for w in bad_words if w in text.lower()]
    return {
        "detected": len(found) > 0,
        "confidence": min(len(found) / 2.0, 1.0),
        "matched": found,
    }
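The decorator's threshold gates the configured action on the dict the check returns. A standalone sketch of that gating logic (should_fire is a hypothetical helper; the framework's actual dispatch may differ):

```python
def check_profanity(text):
    # Same logic as the decorated check above, undecorated for illustration.
    bad_words = ["badword1", "badword2"]
    found = [w for w in bad_words if w in text.lower()]
    return {
        "detected": len(found) > 0,
        "confidence": min(len(found) / 2.0, 1.0),
        "matched": found,
    }

def should_fire(result, threshold=0.7):
    # Hypothetical gate: the action (e.g. BLOCK) fires only when the check
    # detected something AND its confidence clears the threshold.
    return result["detected"] and result["confidence"] >= threshold

print(should_fire(check_profanity("has badword1 and badword2")))  # True  (1.0 >= 0.7)
print(should_fire(check_profanity("has badword1 only")))          # False (0.5 <  0.7)
```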

Industry Compliance Packs

from guardrailgraph import pipeline
from guardrailgraph.packs import hipaa, financial

# HIPAA-compliant healthcare chatbot
healthcare = pipeline(
    name="patient-assistant",
    packs=[hipaa.full()],
)

# SOX-compliant financial advisor
finance = pipeline(
    name="investment-advisor",
    packs=[financial.sox()],
    mode="fail-closed",
)

Middleware Integration

from guardrailgraph.middleware import guardrail

@guardrail(pipeline=my_pipeline)
def call_llm(prompt: str) -> str:
    """Your LLM call — automatically wrapped with guardrails."""
    import boto3
    client = boto3.client("bedrock-runtime")
    # ... invoke model ...
    return response
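Conceptually, a guardrail decorator just runs the pipeline on the prompt before the wrapped call happens. A self-contained sketch of the idea (guardrail_sketch and toy_checks are illustrative stand-ins, not the middleware module's API):

```python
import functools

def guardrail_sketch(run_checks):
    """run_checks(text) -> (allowed: bool, text: str) — a hypothetical
    stand-in for pipeline.run() that may also return redacted text."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(prompt):
            allowed, safe_prompt = run_checks(prompt)
            if not allowed:
                raise ValueError("blocked by guardrails")
            return fn(safe_prompt)  # the LLM only ever sees the safe text
        return wrapper
    return decorator

# Toy check: block the word "secret", redact digits otherwise.
def toy_checks(text):
    if "secret" in text:
        return False, text
    return True, "".join("#" if c.isdigit() else c for c in text)

@guardrail_sketch(toy_checks)
def call_llm(prompt):
    return f"LLM saw: {prompt}"

print(call_llm("card 1234"))  # LLM saw: card ####
```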

YAML Configuration

# guardrailgraph.yaml
project:
  name: "my-app-guardrails"
  version: "1.0.0"

pipeline:
  mode: fail-closed
  timeout_ms: 500
  parallel: true

checks:
  - name: pii-detection
    type: builtin/pii
    action: redact
    config:
      entity_types: [SSN, PHONE, EMAIL, CREDIT_CARD]

  - name: toxicity
    type: builtin/toxicity
    action: block
    config:
      threshold: 0.7

  - name: prompt-injection
    type: builtin/injection
    action: block
    config:
      sensitivity: high

CLI

# Scaffold a new project
guardrailgraph init my-project
guardrailgraph init my-project --pack hipaa

# Development
guardrailgraph dev          # Interactive testing
guardrailgraph test         # Run tests
guardrailgraph test --adversarial  # Adversarial suite
guardrailgraph validate     # Validate config

Built-in Checks

| Check | Description | Default Action |
| --- | --- | --- |
| pii_check() | Detects SSN, phone, email, credit card, IP | REDACT |
| toxicity_check() | Scores hate, violence, sexual, self-harm | BLOCK |
| topic_check() | Block/allow specific topics | BLOCK |
| injection_check() | Prompt injection defense | BLOCK |
| cost_check() | Token/cost limits per request | BLOCK |
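For a feel of what the REDACT action does, here is a rough regex-based sketch covering two of the entity types (illustrative only — this is not the built-in pii_check() detector, which is assumed to be considerably more robust):

```python
import re

# Illustrative-only patterns; production PII detection needs far more care.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text):
    # Replace each match with its entity label, e.g. "[SSN]".
    for entity, pattern in PATTERNS.items():
        text = pattern.sub(f"[{entity}]", text)
    return text

print(redact("Reach me at jane@example.com, SSN 123-45-6789"))
# Reach me at [EMAIL], SSN [SSN]
```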

Architecture

        ┌──→ [Check 1] ──┐
Input ──┼──→ [Check 2] ──┼──→ [PASS / BLOCK / REDACT / FLAG_FOR_REVIEW]
        └──→ [Check 3] ──┘                     │
             (parallel)                        ↓
                              [Final Decision + Audit Log]

Checks execute as a DAG (directed acyclic graph). Independent checks run in parallel for minimum latency. Dependent checks run sequentially.
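The parallel fan-out can be sketched with a plain thread pool (run_parallel is an illustrative stand-in, not the framework's scheduler, which also has to handle dependencies and timeouts):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(checks, text):
    """Run independent checks concurrently and gather their verdicts.
    checks: list of (name, fn) where fn(text) -> bool (True = pass)."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, text) for name, fn in checks}
        return {name: f.result() for name, f in futures.items()}

checks = [
    ("length", lambda t: len(t) < 1000),
    ("no_ssn", lambda t: "123-45-6789" not in t),
]
print(run_parallel(checks, "hello"))  # {'length': True, 'no_ssn': True}
```

Wall-clock latency is then bounded by the slowest independent check rather than the sum of all checks.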

Integration with LambdaLLM

from lambdallm import handler, Model
from guardrailgraph import pipeline
from guardrailgraph.packs import hipaa

@handler(
    model=Model.CLAUDE_3_SONNET,
    guardrails=pipeline(packs=[hipaa.full()]),
)
def lambda_handler(event, context):
    return context.invoke("Answer: {q}", q=event["body"]["question"])

Comparison

| Feature | Bedrock Guardrails | NeMo Guardrails | Guardrails AI | GuardrailGraph |
| --- | --- | --- | --- | --- |
| Provider agnostic | Partial | | | ✅ |
| Composable DAG | | | | ✅ |
| Industry packs | | | | ✅ |
| Serverless-native | Managed | | | ✅ |
| Custom checks | Limited | Complex | Yes | ✅ Simple |
| Open source | | | | ✅ MIT |

License

MIT © Gaurav Kumar Sinha
