
LlamaIndex OpenTelemetry Observability Integration

Installation

pip install llama-index-observability-otel
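The second usage example below also imports the OTLP HTTP span exporter, which is distributed separately from the core OpenTelemetry SDK. If you plan to export spans to a collector, you will likely want that package installed as well (package name per the upstream OpenTelemetry Python distribution):

```shell
# the integration itself
pip install llama-index-observability-otel
# OTLP-over-HTTP exporter, providing opentelemetry.exporter.otlp.proto.http
pip install opentelemetry-exporter-otlp-proto-http
```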

Usage

You can use the default OpenTelemetry observability class as follows:

from llama_index.observability.otel import LlamaIndexOpenTelemetry
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.llms import MockLLM
from llama_index.core.embeddings import MockEmbedding
from llama_index.core import Settings

# initialize the instrumentation object
instrumentor = LlamaIndexOpenTelemetry()

if __name__ == "__main__":
    embed_model = MockEmbedding(embed_dim=256)
    llm = MockLLM()
    Settings.embed_model = embed_model
    # start listening!
    instrumentor.start_registering()
    # register events
    documents = SimpleDirectoryReader(
        input_dir="./data/paul_graham/"
    ).load_data()
    index = VectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine(llm=llm)
    query_result = query_engine.query("Who is Paul?")
    query_result_one = query_engine.query("What did Paul do?")

Alternatively, you can customize the LlamaIndexOpenTelemetry class by, for example, setting a custom span exporter, setting a custom service name, enabling debugging, or setting a custom LlamaIndex dispatcher name:

from llama_index.observability.otel import LlamaIndexOpenTelemetry
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
    OTLPSpanExporter,
)
from llama_index.core.llms import MockLLM
from llama_index.core.embeddings import MockEmbedding
from llama_index.core import Settings

# define a custom span exporter
span_exporter = OTLPSpanExporter("http://0.0.0.0:4318/v1/traces")

# initialize the instrumentation object
instrumentor = LlamaIndexOpenTelemetry(
    service_name_or_resource="my.test.service.1",
    span_exporter=span_exporter,
    debug=True,
)

if __name__ == "__main__":
    embed_model = MockEmbedding(embed_dim=256)
    llm = MockLLM()
    Settings.embed_model = embed_model
    # start listening!
    instrumentor.start_registering()
    # register events
    documents = SimpleDirectoryReader(
        input_dir="./data/paul_graham/"
    ).load_data()
    index = VectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine(llm=llm)
    query_result = query_engine.query("Who is Paul?")
    query_result_one = query_engine.query("What did Paul do?")

