
🛤️ Guardrails


Guardrails is an open-source Python package for specifying structure and type constraints on the outputs of large language models (LLMs), and for validating and correcting those outputs.


Note: Guardrails is an alpha release, so expect sharp edges and bugs.

🧩 What is Guardrails?

Guardrails is a Python package that lets a user add structure, type and quality guarantees to the outputs of large language models (LLMs). Guardrails:

✅ does pydantic-style validation of LLM outputs,
✅ takes corrective actions (e.g. reasking the LLM) when validation fails,
✅ enforces structure and type guarantees (e.g. JSON).

🚒 Under the hood

Guardrails provides a format (.rail) for enforcing a specification on an LLM output, and a lightweight wrapper around LLM API calls to implement this spec.

  1. rail (Reliable AI markup Language) files for specifying structure and type information, validators and corrective actions over LLM outputs.
  2. gd.Guard wraps around LLM API calls to structure, validate and correct the outputs.
The workflow: create a RAIL spec → initialize a guard from the spec → wrap the LLM API call with the guard.
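
In code, those three steps look roughly like this (a minimal sketch; `my_spec.rail` is a hypothetical spec file, and the OpenAI call assumes an API key is configured in your environment):

import guardrails as gd
import openai

# 1. Write a RAIL spec (here assumed to be saved as `my_spec.rail`).
# 2. Initialize a guard from the spec.
guard = gd.Guard.from_rail("my_spec.rail")

# 3. Wrap the LLM API call with the guard; it returns both the raw
#    LLM output and the validated, corrected output.
raw_llm_output, validated_output = guard(
    openai.Completion.create,
    engine="text-davinci-003",
    max_tokens=1024,
    temperature=0.3,
)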

Check out the Getting Started guide to learn how to use Guardrails.

📜 RAIL spec

At the heart of Guardrails is the rail spec. rail is intended to be a language-agnostic, human-readable format for specifying structure and type information, validators and corrective actions over LLM outputs.

rail is a flavor of XML that lets users specify:

  1. the expected structure and types of the LLM output (e.g. JSON)
  2. the quality criteria for the output to be considered valid (e.g. generated text should be bias-free, generated code should be bug-free)
  3. and corrective actions to be taken if the output is invalid (e.g. reask the LLM, filter out the invalid output, etc.)

To learn more about the RAIL spec and the design decisions behind it, and to learn how to write your own RAIL spec, check out the docs.

📦 Installation

pip install guardrails-ai
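
To confirm the install, note that the distribution is named guardrails-ai but the import name is guardrails; for example:

# The package installs as `guardrails-ai` but imports as `guardrails`.
import guardrails as gd

# Look up the installed version via standard-library package metadata.
from importlib.metadata import version
print(version("guardrails-ai"))  # e.g. 0.1.3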

📍 Roadmap

  • Adding more examples, new use cases and domains
  • Adding integrations with langchain, gpt-index, minichain, manifest
  • Expanding the set of available validators
  • More compilers from .rail -> LLM prompt (e.g. .rail -> TypeScript)
  • Informative logging
  • Improving reasking logic
  • A guardrails.js implementation
  • VSCode extension for .rail files
  • Next version of .rail format
  • Add more LLM providers

🚀 Getting Started

Let's go through an example where we ask an LLM to explain what a "bank run" is in a tweet, and generate URL links to relevant news articles. We'll generate a .rail spec for this and then use Guardrails to enforce it. You can see more examples in the docs.

📝 Creating a RAIL spec

We create a RAIL spec to describe the expected structure and types of the LLM output, the quality criteria for the output to be considered valid, and corrective actions to be taken if the output is invalid.

Specifically, we use RAIL to

  • Request the LLM to generate an object with two fields: explanation and follow_up_url.
  • For the explanation field, ensure the length of the generated string is between 200 and 280 characters.
    • If the explanation is not of valid length, reask the LLM.
  • For the follow_up_url field, the URL should be reachable.
    • If the URL is not reachable, we will filter it out of the response.
<rail version="0.1">
<output>
    <object name="bank_run" format="length: 2">
        <string
            name="explanation"
            description="A paragraph about what a bank run is."
            format="length: 200 280"
            on-fail-length="reask"
        />
        <url
            name="follow_up_url"
            description="A web URL where I can read more about bank runs."
            format="valid-url"
            on-fail-valid-url="filter"
        />
    </object>
</output>

<prompt>
Explain what a bank run is in a tweet.

@xml_prefix_prompt

{output_schema}

@json_suffix_prompt_v2_wo_none
</prompt>
</rail>

We specify our quality criteria (generated length, URL reachability) in the format fields of the RAIL spec above. We reask if the explanation is not valid, and filter out the follow_up_url if it is not valid.

🛠️ Using Guardrails to enforce the RAIL spec

Next, we'll use the RAIL spec to create a Guard object. The Guard object will wrap the LLM API call and enforce the RAIL spec on its output.

import guardrails as gd

# Initialize the guard from the RAIL spec above (here assumed to be
# saved to a file, e.g. `bank_run.rail`).
guard = gd.Guard.from_rail("bank_run.rail")

The Guard object compiles the RAIL specification and adds it to the prompt. (Right now this is a passthrough operation; more compilers are planned to find the best way to express the spec in a prompt.)

Here's what the prompt looks like after the RAIL spec is compiled and added to it.

Explain what a bank run is in a tweet.

Given below is XML that describes the information to extract from this document and the tags to extract it into.

<output>
    <object name="bank_run" format="length: 2">
        <string name="explanation" description="A paragraph about what a bank run is." format="length: 200 280" on-fail-length="reask" />
        <url name="follow_up_url" description="A web URL where I can read more about bank runs." required="true" format="valid-url" on-fail-valid-url="filter" />
    </object>
</output>

ONLY return a valid JSON object (no other text is necessary). The JSON MUST conform to the XML format, including any types and format requests e.g. requests for lists, objects and specific types. Be correct and concise.

JSON Output:
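
If you want to inspect the compiled prompt for your own spec, early releases expose it on the guard object; the attribute name below (base_prompt) is our assumption and may differ across versions:

# Assumption: early Guard objects expose the compiled prompt as `base_prompt`.
print(guard.base_prompt)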

Call the Guard object with the LLM API callable as the first argument, passing any additional arguments for the LLM API call as the remaining arguments.

import openai

# Wrap the OpenAI API call with the `guard` object
raw_llm_output, validated_output = guard(
    openai.Completion.create,
    engine="text-davinci-003",
    max_tokens=1024,
    temperature=0.3
)

print(validated_output)
{
    'bank_run': {
        'explanation': 'A bank run is when a large number of people withdraw their deposits from a bank due to concerns about its solvency. This can cause a financial crisis if the bank is unable to meet the demand for withdrawals.',
        'follow_up_url': 'https://www.investopedia.com/terms/b/bankrun.asp'
    }
}
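
Note the two corrective actions at work here: if explanation fails its length check, Guardrails reasks the LLM, and if follow_up_url fails the URL check, the field is filtered out of the validated output. A minimal sketch of handling the filtered case, using the field names from the example above:

# The `filter` action drops an invalid field from the validated output
# instead of reasking, so check for its presence before using it.
bank_run = validated_output.get("bank_run", {})
if "follow_up_url" in bank_run:
    print(f"Read more at: {bank_run['follow_up_url']}")
else:
    print("follow_up_url failed validation and was filtered out")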
