ATI LlamaIndex integration: retrieval/synthesis callbacks -> ATI spans

Project description

ATI Integration for LlamaIndex

This package provides OpenTelemetry instrumentation for LlamaIndex applications using Iocane ATI.

It captures:

  • LLM Calls: Requests to backend models.
  • Retrieval: Vector store lookups and query engine operations.
  • Synthesis: Response generation steps.

Installation

pip install ati-integrations-llamaindex opentelemetry-sdk opentelemetry-exporter-otlp

Configuration

Set the standard OpenTelemetry environment variables to point to your Iocane collector:

export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.iocane.ai/v1/traces"
export OTEL_EXPORTER_OTLP_HEADERS="x-iocane-key=YOUR_KEY,x-ati-env=YOUR_ENV_ID"
export OTEL_SERVICE_NAME="my-llamaindex-app"
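
The OTEL_EXPORTER_OTLP_HEADERS value is a comma-separated list of key=value pairs, which the OpenTelemetry SDK parses for you. As a quick illustration of that format only (the parse_otlp_headers helper below is hypothetical, not part of this package or the SDK):

```python
def parse_otlp_headers(raw: str) -> dict:
    """Illustrative parser for the comma-separated key=value header format."""
    headers = {}
    for pair in raw.split(","):
        key, _, value = pair.partition("=")
        if key:
            headers[key.strip()] = value.strip()
    return headers

print(parse_otlp_headers("x-iocane-key=YOUR_KEY,x-ati-env=YOUR_ENV_ID"))
# {'x-iocane-key': 'YOUR_KEY', 'x-ati-env': 'YOUR_ENV_ID'}
```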

Usage

Here is a robust pattern for instrumenting LlamaIndex applications.

Important: LlamaIndex creates many internal components. Where possible, initialize OpenTelemetry before importing or initializing LlamaIndex modules that pull in heavy dependencies, and make sure the global tracer provider is set correctly.

import os
from ati_llamaindex import LlamaIndexInstrumentor
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# OpenTelemetry Imports
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

def main():
    # 1. Configure OpenTelemetry (Robust Pattern)
    resource = Resource.create(attributes={SERVICE_NAME: "my-llamaindex-service"})
    
    # Set the global provider. If another component already set one, the
    # SDK keeps the existing provider and logs a warning rather than raising.
    trace.set_tracer_provider(TracerProvider(resource=resource))

    # ALWAYS get the global provider to ensure we attach to the active pipeline,
    # whether it is ours or one installed earlier by another component.
    provider = trace.get_tracer_provider()

    # 2. Configure Exporter (Iocane)
    # Use the endpoint from the environment if it already targets /v1/traces;
    # otherwise let the exporter resolve its endpoint from env vars or defaults.
    endpoint = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT")
    if endpoint and endpoint.endswith("/v1/traces"):
        exporter = OTLPSpanExporter(endpoint=endpoint)
    else:
        exporter = OTLPSpanExporter()
        
    if hasattr(provider, "add_span_processor"):
        provider.add_span_processor(BatchSpanProcessor(exporter))
    else:
        print("WARNING: TracerProvider does not support add_span_processor.")

    # 3. Instrument LlamaIndex
    # This should be done before creating indices or query engines
    LlamaIndexInstrumentor().instrument()

    try:
        # 4. Your LlamaIndex Code
        # documents = SimpleDirectoryReader("data").load_data()
        # index = VectorStoreIndex.from_documents(documents)
        # query_engine = index.as_query_engine()
        # response = query_engine.query("What does this data represent?")
        # print(response)
        pass

    finally:
        # 5. Uninstrument and flush pending spans (provider.shutdown flushes)
        LlamaIndexInstrumentor().uninstrument()
        if hasattr(provider, "shutdown"):
            provider.shutdown()

if __name__ == "__main__":
    main()

Environment Variables for Instrumentation

Variable               Description                                                               Default
ATI_CAPTURE_PAYLOADS   Set to "true" to capture query text and response content as span events   false
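
For example, enabling payload capture is just export ATI_CAPTURE_PAYLOADS=true. The sketch below shows one plausible way such a boolean flag is read; the exact parsing rules in the package are an assumption, and payload_capture_enabled is a hypothetical helper, not part of the package API:

```python
import os

def payload_capture_enabled() -> bool:
    # Assumed semantics: defaults to "false"; only the literal value
    # "true" (case-insensitive, whitespace ignored) enables capture.
    return os.environ.get("ATI_CAPTURE_PAYLOADS", "false").strip().lower() == "true"

os.environ["ATI_CAPTURE_PAYLOADS"] = "true"
print(payload_capture_enabled())  # True
```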

Download files

Source Distribution

ati_integrations_llamaindex-0.1.1.tar.gz (6.4 kB)

Built Distribution

ati_integrations_llamaindex-0.1.1-py3-none-any.whl (4.4 kB)

File details

Details for the file ati_integrations_llamaindex-0.1.1.tar.gz.

File hashes

Hashes for ati_integrations_llamaindex-0.1.1.tar.gz:

Algorithm     Hash digest
SHA256        2875919260fd67628beecfc8a84097e4731a595e2c578f1413109c211618d2dd
MD5           c95f1079f982d849187f3ad68d0869b6
BLAKE2b-256   2856898348bd4e90efc6c843cd0276168b40e99d9eeb7b0336e367ec255c1ecd

File details

Details for the file ati_integrations_llamaindex-0.1.1-py3-none-any.whl.

File hashes

Hashes for ati_integrations_llamaindex-0.1.1-py3-none-any.whl:

Algorithm     Hash digest
SHA256        138828dc37d90e8fcb19692fec97a131b7707c92f2e4da0eae9ed70ecf9e1f8e
MD5           5c1348907205bef7fca1a8b7c364eca7
BLAKE2b-256   d9ef620cd63cc0ab706aa30cb18967002c779820a89df3af5d1cbefd925ffb81
