LlamaIndex OpenTelemetry Observability Integration

Installation

pip install llama-index-observability-otel
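
The customization example further below imports an OTLP/HTTP span exporter. That exporter ships in a separate OpenTelemetry package, so if it is not already in your environment you would likely install it as well (package name per the OpenTelemetry Python distribution; verify against your collector setup):

```shell
# OTLP-over-HTTP span exporter used in the customization example below
pip install opentelemetry-exporter-otlp-proto-http
```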

Usage

You can use the default OpenTelemetry observability class as follows:

from llama_index.observability.otel import LlamaIndexOpenTelemetry
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.llms import MockLLM
from llama_index.core.embeddings import MockEmbedding
from llama_index.core import Settings

# initialize the instrumentation object
instrumentor = LlamaIndexOpenTelemetry()

if __name__ == "__main__":
    embed_model = MockEmbedding(embed_dim=256)
    llm = MockLLM()
    Settings.embed_model = embed_model
    # start listening!
    instrumentor.start_registering()
    # register events
    documents = SimpleDirectoryReader(
        input_dir="./data/paul_graham/"
    ).load_data()
    index = VectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine(llm=llm)
    query_result = query_engine.query("Who is Paul?")
    query_result_one = query_engine.query("What did Paul do?")

Or you can customize the LlamaIndexOpenTelemetry class by, for example, setting a custom span exporter, setting a custom service name, enabling debugging, or setting a custom LlamaIndex dispatcher name:

from llama_index.observability.otel import LlamaIndexOpenTelemetry
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
    OTLPSpanExporter,
)
from llama_index.core.llms import MockLLM
from llama_index.core.embeddings import MockEmbedding
from llama_index.core import Settings

# define a custom span exporter
span_exporter = OTLPSpanExporter("http://0.0.0.0:4318/v1/traces")

# initialize the instrumentation object
instrumentor = LlamaIndexOpenTelemetry(
    service_name_or_resource="my.test.service.1",
    span_exporter=span_exporter,
    debug=True,
)

if __name__ == "__main__":
    embed_model = MockEmbedding(embed_dim=256)
    llm = MockLLM()
    Settings.embed_model = embed_model
    # start listening!
    instrumentor.start_registering()
    # register events
    documents = SimpleDirectoryReader(
        input_dir="./data/paul_graham/"
    ).load_data()
    index = VectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine(llm=llm)
    query_result = query_engine.query("Who is Paul?")
    query_result_one = query_engine.query("What did Paul do?")
