
OpenInference LangChain Instrumentation

Python auto-instrumentation library for LangChain.

Traces produced by this library are fully OpenTelemetry-compatible and can be sent to an OpenTelemetry collector for viewing, such as arize-phoenix.


Installation

pip install openinference-instrumentation-langchain

Quickstart

Install the packages needed for this demonstration.

pip install openinference-instrumentation-langchain langchain arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp

Start the Phoenix app in the background as a collector. By default, it listens on http://localhost:6006, and you can visit the app in a browser at the same address.

The Phoenix app does not send data over the internet. It only operates locally on your machine.

python -m phoenix.server.main serve
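
Alternatively, Phoenix can be launched from within Python. This is a minimal sketch assuming the arize-phoenix package's launch_app helper:

import phoenix as px

# Launch the Phoenix app in the current process; it serves on
# http://localhost:6006 by default, the same address as above.
session = px.launch_app()
print(session.url)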

The following Python code sets up the LangChainInstrumentor to trace LangChain and send the traces to Phoenix at the endpoint shown below.

from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from openinference.instrumentation.langchain import LangChainInstrumentor
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Phoenix's OTLP trace collector endpoint.
endpoint = "http://127.0.0.1:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
trace_api.set_tracer_provider(tracer_provider)
# Export spans to Phoenix over OTLP/HTTP and also echo them to the console.
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))

# Instrument LangChain so its runs are traced automatically.
LangChainInstrumentor().instrument()
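
To stop tracing later, the instrumentation can be removed. The uninstrument method is inherited from OpenTelemetry's BaseInstrumentor, which LangChainInstrumentor extends:

# Remove the LangChain instrumentation when tracing is no longer needed.
LangChainInstrumentor().uninstrument()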

To demonstrate LangChain tracing, we'll build a simple chain that tells a joke. First, configure your OpenAI credentials.

import os

os.environ["OPENAI_API_KEY"] = "<your openai key>"
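
To avoid hardcoding credentials in source, the key can also be read interactively; a minimal sketch using only the standard library:

import os
from getpass import getpass

# Prompt for the key only if it is not already set in the environment.
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key: ")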

Now we can create a chain and run it.

prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
# Metadata passed here is attached to the spans recorded for this chain.
chain = LLMChain(llm=OpenAI(), prompt=prompt, metadata={"category": "jokes"})
completion = chain.predict(adjective="funny", metadata={"variant": "funny"})
print(completion)
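
For comparison, the same chain can be written with LangChain's runnable (LCEL) composition, which newer LangChain releases recommend over LLMChain; runs invoked this way are traced as well. A minimal sketch:

# Compose the prompt and the LLM into a runnable sequence and invoke it.
runnable = prompt | OpenAI()
print(runnable.invoke({"adjective": "funny"}))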

Visit the Phoenix app at http://localhost:6006 to see the traces.

More Info

More details about tracing with OpenInference and Phoenix can be found in the Phoenix documentation.

For AI/ML observability solutions in production, including a cloud-based trace collector, visit Arize.

Download files

File details

Details for the source distribution openinference_instrumentation_langchain-0.1.19.tar.gz.

File hashes

Algorithm    Hash digest
SHA256       a7a3136ac5eac30b056bb9f5061c0761085a3f18d2848a88e0620a245aab1d68
MD5          f9a5ca90d8e3fc4b7e7bbc501ccb1f9c
BLAKE2b-256  cccf386a03e4d580f1eccbe2b2705d10ddce4a7f2d0d10fd50c6d27008768512

File details

Details for the built distribution openinference_instrumentation_langchain-0.1.19-py3-none-any.whl.

File hashes

Algorithm    Hash digest
SHA256       c451371dba4b0345ab18ae17792fa542c19916d7ff1b68c73846a12143fc2143
MD5          e26560e1945258260429f4c14f074dbb
BLAKE2b-256  e02adcd750cb532b8cac5ef491a42ad38fcff9e685d5c6d873faaaffdc0b2d1f
