
GuardrailGraph

Composable AI safety pipeline framework — define guardrails as a DAG of checks that work across any LLM provider, with industry-specific compliance packs for HIPAA, SOX, GDPR, and FedRAMP.


Why GuardrailGraph?

Every enterprise deploying LLMs needs guardrails. Current options are either provider-locked (Bedrock Guardrails), complex to operate (NeMo Guardrails), or limited in scope (Guardrails AI). GuardrailGraph combines:

  • Composable DAG execution — checks run in parallel for low latency
  • Provider agnostic — works with Bedrock, OpenAI, Anthropic, or any LLM
  • Industry compliance packs — HIPAA, SOX, GDPR out of the box
  • Serverless-native — designed for AWS Lambda from day one
  • Simple API — @check decorator + pipeline() builder

Installation

# Python
pip install substrai-guardrailgraph

# npm (TypeScript/JavaScript)
npm install @substrai/guardrailgraph

Quick Start

5-Minute Setup

from guardrailgraph import pipeline, check, Action
from guardrailgraph.checks import pii_check, toxicity_check, injection_check

# Create a pipeline with built-in checks
my_pipeline = pipeline(
    name="my-app",
    checks=[
        pii_check(action=Action.REDACT),
        toxicity_check(threshold=0.7),
        injection_check(),
    ],
    mode="fail-closed",
)

# Run guardrails on any text
user_input = "User input here"
result = my_pipeline.run(user_input)

if result.allowed:
    # Safe to forward to the LLM (use the redacted text if any)
    text = result.modified_text or user_input
else:
    # Content blocked
    print(f"Blocked: {result.action.value}")
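When a check's action is Action.REDACT, the sanitized text comes back in result.modified_text. As a standalone illustration of the redaction concept only — the patterns below are hypothetical and not the library's actual PII detectors:

```python
import re

# Hypothetical patterns for illustration; the real pii_check() has its
# own detectors for SSN, phone, email, credit card, and IP.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a [REDACTED:<type>] placeholder."""
    for entity, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{entity}]", text)
    return text

print(redact("Reach me at jane@example.com, SSN 123-45-6789."))
```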

Custom Checks

from guardrailgraph import check, Action

@check(name="profanity", action=Action.BLOCK, threshold=0.7)
def check_profanity(text: str) -> dict:
    """Custom profanity detection."""
    bad_words = ["badword1", "badword2"]
    found = [w for w in bad_words if w in text.lower()]
    return {
        "detected": len(found) > 0,
        "confidence": min(len(found) / 2.0, 1.0),
        "matched": found,
    }
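Because @check wraps a plain function, the detection logic can be unit-tested without the framework. The same body, minus the decorator, runs standalone:

```python
def check_profanity(text: str) -> dict:
    """Same detection logic as above, without the @check decorator."""
    bad_words = ["badword1", "badword2"]
    # Matching is case-insensitive substring search
    found = [w for w in bad_words if w in text.lower()]
    return {
        "detected": len(found) > 0,
        "confidence": min(len(found) / 2.0, 1.0),  # scales with match count
        "matched": found,
    }

print(check_profanity("This contains BADWORD1."))
```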

Industry Compliance Packs

from guardrailgraph import pipeline
from guardrailgraph.packs import hipaa, financial

# HIPAA-compliant healthcare chatbot
healthcare = pipeline(
    name="patient-assistant",
    packs=[hipaa.full()],
)

# SOX-compliant financial advisor
finance = pipeline(
    name="investment-advisor",
    packs=[financial.sox()],
    mode="fail-closed",
)

Middleware Integration

from guardrailgraph.middleware import guardrail

@guardrail(pipeline=my_pipeline)
def call_llm(prompt: str) -> str:
    """Your LLM call — automatically wrapped with guardrails."""
    import boto3
    client = boto3.client("bedrock-runtime")
    # ... invoke model ...
    return response
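Conceptually, the middleware is a standard Python decorator: screen the prompt, then either raise or forward the (possibly redacted) text to the wrapped call. A simplified sketch of that pattern — the stub classes below are stand-ins, not the library's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Result:
    """Stand-in for a pipeline result."""
    allowed: bool
    modified_text: Optional[str] = None

class StubPipeline:
    """Toy pipeline: block anything containing 'attack'."""
    def run(self, text: str) -> Result:
        return Result(allowed="attack" not in text)

def guardrail(pipeline):
    """Decorator pattern: run guardrails before the wrapped LLM call."""
    def decorator(fn: Callable[[str], str]) -> Callable[[str], str]:
        def wrapper(prompt: str) -> str:
            result = pipeline.run(prompt)
            if not result.allowed:
                raise ValueError("blocked by guardrails")
            # Forward redacted text when the pipeline rewrote the prompt
            return fn(result.modified_text or prompt)
        return wrapper
    return decorator

@guardrail(StubPipeline())
def call_llm(prompt: str) -> str:
    return f"model output for: {prompt}"

print(call_llm("hello"))
```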

YAML Configuration

# guardrailgraph.yaml
project:
  name: "my-app-guardrails"
  version: "1.0.0"

pipeline:
  mode: fail-closed
  timeout_ms: 500
  parallel: true

checks:
  - name: pii-detection
    type: builtin/pii
    action: redact
    config:
      entity_types: [SSN, PHONE, EMAIL, CREDIT_CARD]

  - name: toxicity
    type: builtin/toxicity
    action: block
    config:
      threshold: 0.7

  - name: prompt-injection
    type: builtin/injection
    action: block
    config:
      sensitivity: high
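A config like this lends itself to validation before deployment. A minimal sketch of such a validator, with the YAML mirrored as a Python dict — the allowed mode and action values here are assumptions for illustration, not the library's definitive schema:

```python
# The YAML above, mirrored as a plain dict so the sketch needs no YAML parser.
config = {
    "pipeline": {"mode": "fail-closed", "timeout_ms": 500, "parallel": True},
    "checks": [
        {"name": "pii-detection", "type": "builtin/pii", "action": "redact"},
        {"name": "toxicity", "type": "builtin/toxicity", "action": "block",
         "config": {"threshold": 0.7}},
    ],
}

# Assumed vocabularies -- check the library's schema for the real ones.
VALID_MODES = {"fail-closed", "fail-open"}
VALID_ACTIONS = {"block", "redact", "flag", "pass"}

def validate(cfg: dict) -> list[str]:
    """Return a list of problems; empty means the config looks sane."""
    errors = []
    if cfg["pipeline"]["mode"] not in VALID_MODES:
        errors.append("unknown pipeline mode")
    for chk in cfg["checks"]:
        if chk.get("action") not in VALID_ACTIONS:
            errors.append(f"{chk['name']}: unknown action")
        threshold = chk.get("config", {}).get("threshold")
        if threshold is not None and not 0.0 <= threshold <= 1.0:
            errors.append(f"{chk['name']}: threshold out of range")
    return errors

print(validate(config))
```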

CLI

# Scaffold a new project
guardrailgraph init my-project
guardrailgraph init my-project --pack hipaa

# Development
guardrailgraph dev          # Interactive testing
guardrailgraph test         # Run tests
guardrailgraph test --adversarial  # Adversarial suite
guardrailgraph validate     # Validate config

Built-in Checks

Check              Description                                  Default Action
pii_check()        Detects SSN, phone, email, credit card, IP   REDACT
toxicity_check()   Scores hate, violence, sexual, self-harm     BLOCK
topic_check()      Block/allow specific topics                  BLOCK
injection_check()  Prompt injection defense                     BLOCK
cost_check()       Token/cost limits per request                BLOCK
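Each check resolves to one of these actions, and the pipeline must combine them into a single decision. One natural policy, sketched below, is "most severe action wins", with errors treated as BLOCK in fail-closed mode — the severity ordering here is an assumption for illustration, not the library's actual enum:

```python
from enum import IntEnum

# Assumed severity ordering (higher = more severe) -- illustrative only.
class Action(IntEnum):
    PASS = 0
    FLAG_FOR_REVIEW = 1
    REDACT = 2
    BLOCK = 3

def combine(results: list, fail_closed: bool = True, error: bool = False) -> Action:
    """Most severe per-check action wins; errors block when fail-closed."""
    if error and fail_closed:
        return Action.BLOCK
    return max(results, default=Action.PASS)

print(combine([Action.PASS, Action.REDACT, Action.FLAG_FOR_REVIEW]).name)
```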

Architecture

Input → [Check 1] ──→ [Check 2] ──→ [Check 3]
         (parallel)    (parallel)    (parallel)
              ↓              ↓              ↓
         [PASS/BLOCK/REDACT/FLAG_FOR_REVIEW]
              ↓
         [Final Decision + Audit Log]

Checks execute as a DAG (directed acyclic graph). Independent checks run in parallel for minimum latency. Dependent checks run sequentially.
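The parallel layer of such a DAG can be pictured with a thread pool: every check in a layer fans out concurrently, and the pipeline gates on all of them before moving on. A toy sketch of one layer (the check functions are made-up placeholders, not the built-ins):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins for real checks; each returns (name, passed).
def pii(text):       return ("pii", "123-45-6789" not in text)
def toxicity(text):  return ("toxicity", "hate" not in text)
def injection(text): return ("injection", "ignore previous" not in text.lower())

def run_layer(checks, text):
    """One DAG layer: independent checks fan out across threads."""
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(lambda check: check(text), checks))

results = run_layer([pii, toxicity, injection], "What is the weather?")
print(results, all(results.values()))
```

A dependent check would simply run in a later layer, after the layer it depends on has completed.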

Integration with LambdaLLM

from lambdallm import handler, Model
from guardrailgraph import pipeline
from guardrailgraph.packs import hipaa

@handler(
    model=Model.CLAUDE_3_SONNET,
    guardrails=pipeline(packs=[hipaa.full()]),
)
def lambda_handler(event, context):
    return context.invoke("Answer: {q}", q=event["body"]["question"])

Comparison

Feature             Bedrock Guardrails   NeMo Guardrails   Guardrails AI   GuardrailGraph
Provider agnostic   Partial                                                ✅
Composable DAG                                                             ✅
Industry packs                                                             ✅
Serverless-native   Managed                                                ✅
Custom checks       Limited              Complex           Yes             ✅ Simple
Open source                                                                ✅ MIT

License

MIT © Gaurav Kumar Sinha

Download files

Source Distribution

substrai_guardrailgraph-0.3.0.tar.gz (71.6 kB)

Built Distribution

substrai_guardrailgraph-0.3.0-py3-none-any.whl (76.5 kB)

File details: substrai_guardrailgraph-0.3.0.tar.gz

  • Size: 71.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.4

File hashes

Algorithm     Hash digest
SHA256        2b4e58d72e207e293cf2ca5c235d838853727cc87c32467f1dc1a281ad786340
MD5           74bd90334ae9d48f5412a21de82cc099
BLAKE2b-256   6ad77b882282b779ac0251159cb0001bd4cb0f3cc9c3ea7d050d86bdc016c409

File details: substrai_guardrailgraph-0.3.0-py3-none-any.whl

File hashes

Algorithm     Hash digest
SHA256        5f18a23f1d222e1763b2ef1c588ed9ed8e1bf73a848fc73f1e5f5eec44d965f1
MD5           f88a1ddbf57771ae76384766623055be
BLAKE2b-256   18cd84f73b5e5c2c7f29e441aade9eca0341401cc5638b110d4bfcd39a1510c1
