
OpenInference LangChain Instrumentation

Reason this release was yanked:

Rename of common instrumentation config classes

Project description


Python auto-instrumentation library for LangChain.

The traces emitted by this library are fully OpenTelemetry-compatible and can be sent to an OpenTelemetry collector for viewing, such as arize-phoenix.


Installation

pip install openinference-instrumentation-langchain

Quickstart

Install packages needed for this demonstration.

pip install openinference-instrumentation-langchain langchain arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp

Start the Phoenix app in the background as a collector. By default, it listens on http://localhost:6006. You can visit the app via a browser at the same address.

The Phoenix app does not send data over the internet. It only operates locally on your machine.

python -m phoenix.server.main serve
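Before wiring up the exporter, you can confirm that the collector endpoint is reachable. A minimal stdlib sketch (the helper name and the health-check approach are ours, not part of Phoenix):

```python
import urllib.error
import urllib.request


def endpoint_reachable(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at `url` (any status counts)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded (e.g. 404/405), so something is listening.
        return True
    except (urllib.error.URLError, OSError):
        return False


# The default Phoenix address from above:
print(endpoint_reachable("http://localhost:6006"))
```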

The following Python code sets up the LangChainInstrumentor to trace LangChain and send the traces to Phoenix at the endpoint shown below.

from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from openinference.instrumentation.langchain import LangChainInstrumentor
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

endpoint = "http://127.0.0.1:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
trace_api.set_tracer_provider(tracer_provider)
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))

LangChainInstrumentor().instrument()

To demonstrate LangChain tracing, we'll build a simple chain that tells a joke. First, configure your OpenAI credentials.

import os

os.environ["OPENAI_API_KEY"] = "<your openai key>"

Now we can create a chain and run it.

prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
chain = LLMChain(llm=OpenAI(), prompt=prompt, metadata={"category": "jokes"})
completion = chain.predict(adjective="funny", metadata={"variant": "funny"})
print(completion)
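Note that LLMChain is deprecated in newer LangChain releases. The same pipeline can be expressed in the LCEL composition style; a sketch under the assumption you are on a recent langchain-core (not run here, and traced by the instrumentor the same way):

```python
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

# Compose prompt and model with the LCEL pipe operator.
prompt = PromptTemplate.from_template("Tell me a {adjective} joke")
chain = prompt | OpenAI()

# invoke() runs the chain with the given input variables.
print(chain.invoke({"adjective": "funny"}))
```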

Visit the Phoenix app at http://localhost:6006 to see the traces.

More Info

More details about tracing with OpenInference and Phoenix can be found in the Phoenix documentation.

For AI/ML observability solutions in production, including a cloud-based trace collector, visit Arize.


File details

Details for the file openinference_instrumentation_langchain-0.1.25.tar.gz.

File metadata

File hashes

Hashes for openinference_instrumentation_langchain-0.1.25.tar.gz
SHA256: 84d0b425eb8a7e7f180de011c1c0f9693870441d5385b2ea065d6bccd06141f0
MD5: 86f5e5ea62f2872c69ab8f8bd69dd3a3
BLAKE2b-256: 19d70f63a34d60740f3b80aa140d03995d86db270bc1df4b8c940a83ec4b3360

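To check a downloaded distribution against the published digests above, a minimal stdlib sketch (the file path in the usage comment is illustrative; substitute your own download location):

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA256 hex digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Example usage (path is hypothetical):
# expected = "84d0b425eb8a7e7f180de011c1c0f9693870441d5385b2ea065d6bccd06141f0"
# assert sha256_of("openinference_instrumentation_langchain-0.1.25.tar.gz") == expected
```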

File details

Details for the file openinference_instrumentation_langchain-0.1.25-py3-none-any.whl.

File metadata

File hashes

Hashes for openinference_instrumentation_langchain-0.1.25-py3-none-any.whl
SHA256: 23f1c9ace03d7215064d9e6a57569fbbb7df4fa9f4508b6439fa53b22db5830e
MD5: 792a366d0221cef358b61ddd12e840d8
BLAKE2b-256: 720f832a8b08ba89419f8b8d22d0e47ecae3bcc4c0758f60566a966d2102be11

