OpenInference OpenAI Instrumentation
Python auto-instrumentation library for OpenAI's Python SDK.
The traces emitted by this instrumentation are fully OpenTelemetry-compatible and can be sent to an OpenTelemetry collector for viewing, such as arize-phoenix.
Installation
pip install openinference-instrumentation-openai
Quickstart
In this example we will instrument a small program that uses OpenAI and observe the traces via arize-phoenix.
Install packages.
pip install openinference-instrumentation-openai "openai>=1.26" arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp
Start the phoenix server so that it is ready to collect traces. The Phoenix server runs entirely on your machine and does not send data over the internet.
python -m phoenix.server.main serve
In a Python file, set up the OpenAIInstrumentor and configure the tracer to send traces to Phoenix.
import openai
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
endpoint = "http://127.0.0.1:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
# Optionally, you can also print the spans to the console.
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
if __name__ == "__main__":
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
        stream=True,
        stream_options={"include_usage": True},
    )
    for chunk in response:
        if chunk.choices and (content := chunk.choices[0].delta.content):
            print(content, end="")
Since we are using OpenAI, we must set the OPENAI_API_KEY
environment variable to authenticate with the OpenAI API.
export OPENAI_API_KEY=your-api-key
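If you prefer to set the key from Python (for example in a notebook), a minimal sketch; `"your-api-key"` is a placeholder, not a real credential:

```python
import os

# Placeholder value; substitute your real key, or rely on the shell
# `export` above, in which case setdefault is a no-op.
os.environ.setdefault("OPENAI_API_KEY", "your-api-key")
```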
Now run the Python file and observe the traces in Phoenix.
python your_file.py
FAQ
Q: How do I get token counts when streaming?
A: Install openai>=1.26 and set stream_options={"include_usage": True} when calling create. See the example shown above.
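With include_usage enabled, the final chunk of the stream carries a populated usage object while its choices list is empty. A minimal sketch of pulling the counts out of such a stream; the helper name and the stand-in chunk objects are ours, not part of the OpenAI SDK:

```python
from types import SimpleNamespace


def usage_from_stream(stream):
    """Return the `usage` object from the last chunk that carries one.

    With stream_options={"include_usage": True} (openai>=1.26), the final
    chunk has an empty `choices` list and a populated `usage` field.
    """
    usage = None
    for chunk in stream:
        if getattr(chunk, "usage", None) is not None:
            usage = chunk.usage
    return usage


# Stand-in chunks mimicking the shape of a real stream:
chunks = [
    SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content="Hi"))],
        usage=None,
    ),
    SimpleNamespace(
        choices=[],
        usage=SimpleNamespace(prompt_tokens=9, completion_tokens=2, total_tokens=11),
    ),
]
print(usage_from_stream(chunks).total_tokens)  # → 11
```

The same loop works on a real stream returned by `client.chat.completions.create(..., stream=True, stream_options={"include_usage": True})`.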
More Info
For details about tracing with OpenInference and Phoenix, consult the Phoenix documentation.
For AI/ML observability solutions in production, check out the docs on Arize.