
Llumo Telemetry SDK for LLM Observability

Project description

LLumo Telemetry SDK (Python)

A telemetry SDK that instruments LLM operations across providers such as OpenAI, Anthropic, and LangChain, and sends formatted telemetry data to your backend telemetry server.

Installation

  1. Create a virtual environment:

    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
    
  2. Install dependencies:

    pip install -r requirements.txt
    

Setup Guide

Place this initialization code at the entry point of your application, before you initialize any LLM clients.

```python
from llumo_inference import initSDK, TelemetryConfig

# Initialize the telemetry
config = TelemetryConfig(
    endpoint='http://localhost:4455/api/v1/telemetry',  # Your custom telemetry API endpoint
    authToken='your-auth-token',  # Optional Auth Bearer Token
    flushDelayMillis=500  # Span buffer flush interval (def: 500ms)
)

# Pass optional library instances if you need manual instrumentation
# config.libraries = {
#     "OpenAI": openai_client,
#     "Anthropic": anthropic_client
# }

initSDK(config)

print("Telemetry configured successfully.")
```

Configuration Options

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| `endpoint` | string | Yes | URL of your telemetry ingestion server |
| `authToken` | string | No | Optional Bearer token sent in the Auth header |
| `flushDelayMillis` | int | No | Span buffer flush interval in milliseconds (default: 500) |
| `maxExportBatchSize` | int | No | Maximum batch size per export (default: 50) |
| `libraries` | dict | No | Optional dict for injecting specific AI client instances |
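The two batching options interact: a batch ships either when it reaches `maxExportBatchSize` spans or when `flushDelayMillis` elapses, whichever comes first. In the SDK this is delegated to OpenTelemetry's `BatchSpanProcessor`; the sketch below is an illustrative stand-in (the `SpanBuffer` class and its methods are not part of the SDK) showing the size-triggered path:

```python
import time


class SpanBuffer:
    """Illustration of flushDelayMillis / maxExportBatchSize semantics.

    Not the SDK's implementation (the SDK uses OpenTelemetry's
    BatchSpanProcessor). Spans are shipped either when the buffer
    reaches max_batch or when the flush interval lapses.
    """

    def __init__(self, export, flush_delay_millis=500, max_batch=50):
        self.export = export
        self.flush_delay = flush_delay_millis / 1000.0
        self.max_batch = max_batch
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, span):
        # Size trigger: ship as soon as the buffer is full.
        self.buffer.append(span)
        if len(self.buffer) >= self.max_batch:
            self.flush()

    def maybe_flush(self):
        # Time trigger: ship whatever is buffered once the delay lapses.
        if self.buffer and time.monotonic() - self.last_flush >= self.flush_delay:
            self.flush()

    def flush(self):
        if self.buffer:
            self.export(self.buffer[:])
            self.buffer.clear()
        self.last_flush = time.monotonic()


batches = []
buf = SpanBuffer(batches.append, flush_delay_millis=500, max_batch=3)
for i in range(7):
    buf.add({"span": i})
print([len(b) for b in batches])  # → [3, 3]; one span remains buffered
```

A larger `maxExportBatchSize` means fewer, bigger HTTP requests; a smaller `flushDelayMillis` means lower latency before spans reach the server.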

Features

  • Built-in Instrumentations: Supports OpenAI, Anthropic, Gemini (Vertex AI & Google GenAI), LangChain, requests, and urllib3.
  • Auto Data Sanitization: MongoDB-compliant key formatting automatically escapes problematic characters (`.` and `$`) in keys before transmission.
  • Trace Exporters: Uses BatchSpanProcessor with a custom FormattingExporter for structured, ready-to-consume payloads.
  • Performance: Asynchronous-style exporting via OTel's native batching to minimize impact on application latency.
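As a rough illustration of the sanitization step: MongoDB traditionally disallows `.` inside document keys and `$` as a leading character, so span-attribute keys like `gen_ai.request.model` must be rewritten before storage. The helper below is not the SDK's implementation, and the fullwidth replacement characters (`U+FF0E`, `U+FF04`) are an assumed convention for this sketch:

```python
def sanitize_keys(value):
    """Recursively escape MongoDB-problematic characters in dict keys.

    Replacement characters are illustrative: '.' -> U+FF0E, '$' -> U+FF04.
    The SDK's actual escaping scheme may differ.
    """
    if isinstance(value, dict):
        return {
            key.replace(".", "\uff0e").replace("$", "\uff04"): sanitize_keys(val)
            for key, val in value.items()
        }
    if isinstance(value, list):
        return [sanitize_keys(item) for item in value]
    return value


# Example span attributes with dotted and dollar-prefixed keys
attrs = {"gen_ai.request.model": "gpt-4o", "$meta": {"a.b": [{"c.d": 1}]}}
print(sanitize_keys(attrs))
```

The recursion matters because span payloads nest dicts inside lists (events, messages), and every level must be MongoDB-safe.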

Download files

Download the file for your platform.

Source Distribution

llumo_inference-0.1.0.tar.gz (5.8 kB)


Built Distribution


llumo_inference-0.1.0-py3-none-any.whl (6.0 kB)


File details

Details for the file llumo_inference-0.1.0.tar.gz.

File metadata

  • Download URL: llumo_inference-0.1.0.tar.gz
  • Size: 5.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.2

File hashes

Hashes for llumo_inference-0.1.0.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 9047439d6169bd713fb4e86cf80ce9fe77604bb66bcd7d421736ad311f2a28b9 |
| MD5 | 5ec5b03be7b012da466306068a0b1217 |
| BLAKE2b-256 | b85718b2d03ce1c3601e24bd011d0dc4f602af7eba7c51e92f2dbb0a3da2702d |


File details

Details for the file llumo_inference-0.1.0-py3-none-any.whl.

File hashes

Hashes for llumo_inference-0.1.0-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | bbbb3b7aae5195ea9935f795c2e296ca4194038617d0053ef06dcda054dea9e7 |
| MD5 | d2dff9cd1eeb279e857787cf1f8177b5 |
| BLAKE2b-256 | 9aaeec1b32fe0dfb7204dcab4763545cfa1805cfc51728e6e31f69f7d6a49bb2 |

