llama-index observability integration with OpenTelemetry

Project description

LlamaIndex OpenTelemetry Observability Integration

Installation

pip install llama-index-observability-otel

Usage

You can use the default OpenTelemetry observability class as follows:

from llama_index.observability.otel import LlamaIndexOpenTelemetry
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.llms import MockLLM
from llama_index.core.embeddings import MockEmbedding
from llama_index.core import Settings

# initialize the instrumentation object
instrumentor = LlamaIndexOpenTelemetry()

if __name__ == "__main__":
    embed_model = MockEmbedding(embed_dim=256)
    llm = MockLLM()
    Settings.embed_model = embed_model
    # start listening!
    instrumentor.start_registering()
    # register events
    documents = SimpleDirectoryReader(
        input_dir="./data/paul_graham/"
    ).load_data()
    index = VectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine(llm=llm)
    query_result = query_engine.query("Who is Paul?")
    query_result_one = query_engine.query("What did Paul do?")

Or you can customize the LlamaIndexOpenTelemetry class by, for example, setting a custom span exporter, a custom service name, enabling debugging, or setting a custom LlamaIndex dispatcher name:

from llama_index.observability.otel import LlamaIndexOpenTelemetry
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
    OTLPSpanExporter,
)
from llama_index.core.llms import MockLLM
from llama_index.core.embeddings import MockEmbedding
from llama_index.core import Settings

# define a custom span exporter
span_exporter = OTLPSpanExporter("http://0.0.0.0:4318/v1/traces")

# initialize the instrumentation object
instrumentor = LlamaIndexOpenTelemetry(
    service_name_or_resource="my.test.service.1",
    span_exporter=span_exporter,
    debug=True,
)

if __name__ == "__main__":
    embed_model = MockEmbedding(embed_dim=256)
    llm = MockLLM()
    Settings.embed_model = embed_model
    # start listening!
    instrumentor.start_registering()
    # register events
    documents = SimpleDirectoryReader(
        input_dir="./data/paul_graham/"
    ).load_data()
    index = VectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine(llm=llm)
    query_result = query_engine.query("Who is Paul?")
    query_result_one = query_engine.query("What did Paul do?")

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama_index_observability_otel-0.2.0.tar.gz (6.4 kB view details)

Uploaded Source

Built Distribution

llama_index_observability_otel-0.2.0-py3-none-any.whl (view details)

If you're not sure about the file name format, learn more about wheel file names.

File details

Details for the file llama_index_observability_otel-0.2.0.tar.gz.

File metadata

File hashes

Hashes for llama_index_observability_otel-0.2.0.tar.gz
Algorithm Hash digest
SHA256 fcb69dfae5760b5439fb932b74964d37673bfd7d451ebfa2bb71b11084a14848
MD5 0f359c7bbbc777219754a14ab3b94051
BLAKE2b-256 e23bd4a962de96d5db646c984d6ed82e8fce85e03ad24206bbd12422e53b759d

See more details on using hashes here.
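The digests above can be checked locally before installing. A minimal sketch using Python's standard hashlib (the chunked read keeps memory bounded for large files; the filename in the comment matches the sdist listed above):

```python
import hashlib


def sha256_of(path: str) -> str:
    """Return the hex SHA256 digest of a file, read in 8 KiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


# Compare against the published digest, e.g. for the sdist:
# sha256_of("llama_index_observability_otel-0.2.0.tar.gz") should equal
# "fcb69dfae5760b5439fb932b74964d37673bfd7d451ebfa2bb71b11084a14848"
```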

File details

Details for the file llama_index_observability_otel-0.2.0-py3-none-any.whl.

File metadata

File hashes

Hashes for llama_index_observability_otel-0.2.0-py3-none-any.whl
Algorithm Hash digest
SHA256 8e34f91b356a6acdb49040ae39a16f61c913705311a657bc031bb1ebe206d51d
MD5 f5071ac1bffe7435d4c45a2457bc545b
BLAKE2b-256 ed6768f79c2d6aaf847eac10aa0efad5cc83b3173b6490c8658a826d862eba04

See more details on using hashes here.
