OpenInference LlamaIndex Instrumentation
Python auto-instrumentation library for LlamaIndex.
These traces are fully OpenTelemetry compatible and can be sent to an OpenTelemetry collector for viewing, such as arize-phoenix.
Installation
pip install openinference-instrumentation-llama-index
Compatibility
llama-index version | openinference-instrumentation-llama-index version
---|---
>=0.11.0 | >=3.0
>=0.10.43, <0.11.0 | >=2.0, <3.0
>=0.10.0, <0.10.43 | >=1.0, <2.0
>=0.9.14, <0.10.0 | 0.1.3
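For example, to install a matching pair from the first row of the table (the pins here are illustrative; choose the row that corresponds to your llama-index version):
python -m pip install "llama-index>=0.11.0" "openinference-instrumentation-llama-index>=3.0"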
Quickstart
Install packages needed for this demonstration.
python -m pip install --upgrade \
openinference-instrumentation-llama-index \
opentelemetry-sdk \
opentelemetry-exporter-otlp \
"opentelemetry-proto>=1.12.0" \
arize-phoenix
Start the Phoenix app in the background as a collector. By default, it listens on http://localhost:6006, and you can visit the app in a browser at the same address. The Phoenix app does not send data over the internet; it operates entirely on your local machine.
python -m phoenix.server.main serve
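Alternatively, if you are working in a notebook, Phoenix can be launched in-process rather than as a separate server. A minimal sketch using phoenix.launch_app():
import phoenix as px

# Launch Phoenix inside the current Python process; the returned session
# object should expose the URL of the local app.
session = px.launch_app()
print(session.url)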
The following Python code sets up the LlamaIndexInstrumentor to trace llama-index and send the traces to Phoenix at the endpoint shown below.
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Phoenix's OTLP/HTTP trace endpoint
endpoint = "http://127.0.0.1:6006/v1/traces"

# Export each span to Phoenix as soon as it ends
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

# Instrument llama-index so its operations emit OpenInference spans
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
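If you want to confirm that spans are being produced before wiring up Phoenix, you can swap in the OpenTelemetry SDK's console exporter. This is a sketch using standard OpenTelemetry components, not anything specific to this library:
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print finished spans to stdout instead of exporting them over OTLP
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))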
To demonstrate tracing, we'll use LlamaIndex below to query a document.
First, download a text file.
import tempfile
from urllib.request import urlretrieve
from llama_index.core import SimpleDirectoryReader
url = "https://raw.githubusercontent.com/Arize-ai/phoenix-assets/main/data/paul_graham/paul_graham_essay.txt"
with tempfile.NamedTemporaryFile() as tf:
    urlretrieve(url, tf.name)
    documents = SimpleDirectoryReader(input_files=[tf.name]).load_data()
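As an optional sanity check (purely illustrative), you can confirm the essay loaded:
# The reader returns one Document per file; peek at the first one
print(len(documents), documents[0].text[:80])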
Next, we'll query using OpenAI. To do that, you need to set your OpenAI API key in an environment variable.
import os
os.environ["OPENAI_API_KEY"] = "<your openai key>"
Now we can query the indexed documents.
from llama_index.core import VectorStoreIndex
query_engine = VectorStoreIndex.from_documents(documents).as_query_engine()
print(query_engine.query("What did the author do growing up?"))
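The query call returns a response object rather than a plain string. A sketch of inspecting the retrieved context, assuming llama-index's NodeWithScore schema for source nodes:
response = query_engine.query("What did the author do growing up?")
print(response)

# Each source node pairs a retrieved chunk with its similarity score
for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.node.get_content()[:80])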
Visit the Phoenix app at http://localhost:6006 to see the traces.
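When you are done, instrumentation can be removed, for example at application shutdown or between tests. A minimal sketch:
# Undo the patching applied by instrument()
LlamaIndexInstrumentor().uninstrument()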