Adding guardrails to large language models.

What is Guardrails?

Guardrails is a Python framework that helps build reliable AI applications by performing two key functions:

  1. Guardrails runs Input/Output Guards in your application that detect, quantify, and mitigate the presence of specific types of risks. To see the full suite of risks, check out Guardrails Hub.
  2. Guardrails helps you generate structured data from LLMs.

Guardrails Hub

Guardrails Hub is a collection of pre-built checks for specific types of risks (called 'validators'). Multiple validators can be combined into Input and Output Guards that intercept the inputs and outputs of LLMs. Visit Guardrails Hub to see the full list of validators and their documentation.

Installation

pip install guardrails-ai

Getting Started

Create Input and Output Guards for LLM Validation

  1. Download and configure the Guardrails Hub CLI.

    pip install guardrails-ai
    guardrails configure
    
  2. Install a guardrail from Guardrails Hub.

    guardrails hub install hub://guardrails/regex_match
    
  3. Create a Guard from the installed guardrail. (A sketch of handling validation failures without raising exceptions follows these steps.)

    from guardrails import Guard, OnFailAction
    from guardrails.hub import RegexMatch
    
    guard = Guard().use(
        RegexMatch, regex=r"\(?\d{3}\)?-? *\d{3}-? *-?\d{4}", on_fail=OnFailAction.EXCEPTION
    )
    
    guard.validate("123-456-7890")  # Guardrail passes
    
    try:
        guard.validate("1234-789-0000")  # Guardrail fails
    except Exception as e:
        print(e)
    

    Output:

    Validation failed for field with errors: Result must match \(?\d{3}\)?-? *\d{3}-? *-?\d{4}
    
  4. Run multiple guardrails within a Guard. First, install the necessary guardrails from Guardrails Hub.

    guardrails hub install hub://guardrails/competitor_check
    guardrails hub install hub://guardrails/toxic_language
    

    Then, create a Guard from the installed guardrails.

    from guardrails import Guard, OnFailAction
    from guardrails.hub import CompetitorCheck, ToxicLanguage
    
    guard = Guard().use_many(
        CompetitorCheck(["Apple", "Microsoft", "Google"], on_fail=OnFailAction.EXCEPTION),
        ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail=OnFailAction.EXCEPTION)
    )
    
    guard.validate(
        """An apple a day keeps a doctor away.
        This is good advice for keeping your health."""
    )  # Both guardrails pass
    
    try:
        guard.validate(
            """Shut the hell up! Apple just released a new iPhone."""
        )  # Both guardrails fail
    except Exception as e:
        print(e)
    

    Output:

    Validation failed for field with errors: Found the following competitors: [['Apple']]. Please avoid naming those competitors next time, The following sentences in your response were found to be toxic:
    
    - Shut the hell up!
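
The steps above use on_fail=OnFailAction.EXCEPTION, so a failed validation raises. As a minimal sketch of the non-raising path (assuming the ValidationOutcome object returned by guard.validate() and the OnFailAction.NOOP action, both of which may vary by version), you can also inspect the result directly:

from guardrails import Guard, OnFailAction
from guardrails.hub import RegexMatch

# Assumption: OnFailAction.NOOP records failures instead of raising.
guard = Guard().use(
    RegexMatch, regex=r"\(?\d{3}\)?-? *\d{3}-? *-?\d{4}", on_fail=OnFailAction.NOOP
)

outcome = guard.validate("1234-789-0000")  # assumed to return a ValidationOutcome
print(outcome.validation_passed)   # False when the guardrail fails
print(outcome.validated_output)    # the output after validators have run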
    

Use Guardrails to generate structured data from LLMs

Let's go through an example where we ask an LLM to generate fake pet names. To do this, we'll create a Pydantic BaseModel that represents the structure of the output we want.

from pydantic import BaseModel, Field

class Pet(BaseModel):
    pet_type: str = Field(description="Species of pet")
    name: str = Field(description="a unique pet name")

Now, create a Guard from the Pet class. The Guard can be used to call the LLM so that the output is structured according to the Pet class. Under the hood, this is done by one of two methods:

  1. Function calling: For LLMs that support function calling, we generate structured data using the function call syntax.
  2. Prompt optimization: For LLMs that don't support function calling, we add the schema of the expected output to the prompt so that the LLM can generate structured data.
from guardrails import Guard
import openai

prompt = """
    What kind of pet should I get and what should I name it?

    ${gr.complete_json_suffix_v2}
"""
guard = Guard.from_pydantic(output_class=Pet, prompt=prompt)

raw_output, validated_output, *rest = guard(
    llm_api=openai.completions.create,
    engine="gpt-3.5-turbo-instruct"
)

print(validated_output)

This prints:

{
    "pet_type": "dog",
    "name": "Buddy
}
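
Because validated_output matches the Pet schema, it can be loaded back into the Pydantic model for type-safe access. A small sketch, assuming validated_output is returned as a dict:

pet = Pet(**validated_output)  # works with both Pydantic v1 and v2
print(pet.pet_type, pet.name)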

FAQ

I'm running into issues with Guardrails. Where can I get help?

You can reach out to us on Discord or Twitter.

Can I use Guardrails with any LLM?

Yes, Guardrails can be used with proprietary and open-source LLMs. Check out this guide on how to use Guardrails with any LLM.

Can I create my own validators?

Yes, you can create your own validators and contribute them to Guardrails Hub. Check out this guide on how to create your own validators.
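
For reference, here is a minimal custom validator sketch. It assumes the Validator base class, the register_validator decorator, and the PassResult/FailResult types are importable from guardrails.validators; the exact import path and method signature may differ between versions, so treat this as an illustration rather than the definitive API.

from typing import Any, Dict

from guardrails.validators import (
    FailResult,
    PassResult,
    ValidationResult,
    Validator,
    register_validator,
)


# Hypothetical validator: checks that a string starts with a capital letter.
@register_validator(name="starts-with-capital", data_type="string")
class StartsWithCapital(Validator):
    def validate(self, value: Any, metadata: Dict) -> ValidationResult:
        if value and value[0].isupper():
            return PassResult()
        return FailResult(error_message="Value must start with a capital letter.")

Once registered, such a validator can be attached to a Guard with use(), just like the hub validators shown above.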

Does Guardrails support other languages?

Guardrails can be used with Python and JavaScript. Check out the docs on how to use Guardrails from JavaScript. We are working on adding support for other languages. If you would like to contribute to Guardrails, please reach out to us on Discord or Twitter.

Contributing

We welcome contributions to Guardrails!

Get started by checking out the GitHub issues and the Contributing Guide. Feel free to open an issue, or reach out if you would like to add to the project!

