
OpenInference Guardrails Instrumentation


Python auto-instrumentation library for LLM applications implemented with Guardrails

The traces emitted for guard executions are fully OpenTelemetry-compatible and can be sent to an OpenTelemetry collector for monitoring, such as Arize Phoenix.

Installation

pip install openinference-instrumentation-guardrails

Quickstart

This quickstart shows you how to instrument an LLM application built with Guardrails.

Install required packages.

pip install openinference-instrumentation-guardrails guardrails-ai arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp

Start Phoenix in the background as a collector. By default, it listens on http://localhost:6006. You can visit the app via a browser at the same address. (Phoenix does not send data over the internet. It only operates locally on your machine.)

python -m phoenix.server.main serve

Install the TwoWords validator from the Guardrails Hub; it is used in the Guard below.

guardrails hub install hub://guardrails/two_words
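The TwoWords validator enforces that the LLM's output is exactly two words. As a rough stdlib-only illustration of the rule it applies (a hypothetical stand-in, not the Hub validator's actual implementation):

```python
# Hypothetical sketch of the check TwoWords performs: the value must be
# exactly two whitespace-separated words. The real validator lives in the
# Guardrails Hub; this stand-in only illustrates the rule.
def is_two_words(text: str) -> bool:
    return len(text.split()) == 2

print(is_two_words("United States"))                  # True
print(is_two_words("The United States of America"))   # False
```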

Set up GuardrailsInstrumentor to trace your Guardrails application and send the traces to Phoenix at the endpoint defined below.

from openinference.instrumentation.guardrails import GuardrailsInstrumentor
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
import os

os.environ["OPENAI_API_KEY"] = "YOUR_KEY_HERE"

endpoint = "http://127.0.0.1:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
trace_api.set_tracer_provider(tracer_provider)
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

GuardrailsInstrumentor().instrument()
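Conceptually, the instrumentor patches Guardrails' call sites so that each guard invocation is recorded as a span and handed to the configured exporter. A stdlib-only sketch of that wrapping idea (the names and mechanics here are illustrative, not the actual OpenInference internals):

```python
import functools
import time

# Collected (name, seconds) records -- a toy stand-in for exported spans.
spans = []

def traced(span_name):
    """Wrap a function so each call is recorded, loosely mimicking what an
    auto-instrumentor does when it patches a library's entry points."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                spans.append((span_name, time.perf_counter() - start))
        return wrapper
    return decorator

@traced("guard.__call__")
def run_guard(prompt):
    # Placeholder body standing in for the real LLM call plus validation.
    return prompt.upper()

run_guard("hello")
print(spans[0][0])  # guard.__call__
```

Calling `GuardrailsInstrumentor().instrument()` performs this patching once, application-wide, so no per-call changes are needed.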

Set up a simple example of an LLM call using a Guard.

from guardrails import Guard
from guardrails.hub import TwoWords
import openai

guard = Guard().use(
    TwoWords(),
)

response = guard(
    llm_api=openai.chat.completions.create,
    prompt="What is another name for America?",
    model="gpt-3.5-turbo",
    max_tokens=1024,
)

print(response)
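The call returns a validation outcome object rather than a bare string, carrying both the raw LLM text and the validated result. A hedged sketch of how you might branch on such an outcome, using a stdlib dataclass as a stand-in for the real Guardrails return type (the field names here are assumptions for illustration):

```python
from dataclasses import dataclass
from typing import Optional

# Toy stand-in for the outcome object a Guard call returns; the real
# Guardrails object carries more fields than shown here.
@dataclass
class Outcome:
    raw_llm_output: str
    validated_output: Optional[str]
    validation_passed: bool

def handle(outcome: Outcome) -> str:
    # Prefer the validated output; fall back to a sentinel on failure.
    if outcome.validation_passed and outcome.validated_output is not None:
        return outcome.validated_output
    return "<validation failed>"

ok = Outcome("United States", "United States", True)
bad = Outcome("The United States of America", None, False)
print(handle(ok))   # United States
print(handle(bad))  # <validation failed>
```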
