
LlamaIndex OpenTelemetry Observability Integration

Installation

pip install llama-index-observability-otel

Usage

You can use the default OpenTelemetry observability class as follows:

from llama_index.observability.otel import LlamaIndexOpenTelemetry
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.llms import MockLLM
from llama_index.core.embeddings import MockEmbedding
from llama_index.core import Settings

# initialize the instrumentation object
instrumentor = LlamaIndexOpenTelemetry()

if __name__ == "__main__":
    embed_model = MockEmbedding(embed_dim=256)
    llm = MockLLM()
    Settings.embed_model = embed_model
    # start listening!
    instrumentor.start_registering()
    # register events
    documents = SimpleDirectoryReader(
        input_dir="./data/paul_graham/"
    ).load_data()
    index = VectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine(llm=llm)
    query_result = query_engine.query("Who is Paul?")
    query_result_one = query_engine.query("What did Paul do?")

Alternatively, you can customize the LlamaIndexOpenTelemetry class: for example, you can set a custom span exporter, a custom service name, or a custom LlamaIndex dispatcher name, or enable debugging:

from llama_index.observability.otel import LlamaIndexOpenTelemetry
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
    OTLPSpanExporter,
)
from llama_index.core.llms import MockLLM
from llama_index.core.embeddings import MockEmbedding
from llama_index.core import Settings

# define a custom span exporter
span_exporter = OTLPSpanExporter("http://0.0.0.0:4318/v1/traces")

# initialize the instrumentation object
instrumentor = LlamaIndexOpenTelemetry(
    service_name_or_resource="my.test.service.1",
    span_exporter=span_exporter,
    debug=True,
)

if __name__ == "__main__":
    embed_model = MockEmbedding(embed_dim=256)
    llm = MockLLM()
    Settings.embed_model = embed_model
    # start listening!
    instrumentor.start_registering()
    # register events
    documents = SimpleDirectoryReader(
        input_dir="./data/paul_graham/"
    ).load_data()
    index = VectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine(llm=llm)
    query_result = query_engine.query("Who is Paul?")
    query_result_one = query_engine.query("What did Paul do?")

Download files

Source Distribution: llama_index_observability_otel-0.1.1.tar.gz (6.4 kB)
Built Distribution: llama_index_observability_otel-0.1.1-py3-none-any.whl

Hashes for llama_index_observability_otel-0.1.1.tar.gz
SHA256       a3f25d0105225d609a198506ecb27c24e420ad28871cd8f2227ee27e55765eda
MD5          dbdb31f374a52dbbf08ae37651a0d725
BLAKE2b-256  10f222c28678fad040c579b9280769ffb79dc5c60d061b8a4b2e0d764787ceb0

Hashes for llama_index_observability_otel-0.1.1-py3-none-any.whl
SHA256       d70ccd207c1ad6f31e6697cd8bca093e170c8589042beef17741fd56cef9f115
MD5          0774e16228ea7fd4ffaae4d1fc97892b
BLAKE2b-256  ab3f14796f0e0f975378e913240041bb3658216432eceb34d9b02eb39c4dd604
