
Llumo Telemetry SDK for LLM Observability

Project description

LLumo Telemetry SDK (Python)

A powerful telemetry SDK that instruments LLM operations made through OpenAI, Anthropic, and LangChain clients and sends formatted telemetry data to your backend telemetry server.

Installation

  1. Create a virtual environment:

    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
    
  2. Install dependencies:

    pip install -r requirements.txt
    
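Alternatively, since the package is published to PyPI (see the distribution files below), it should also be installable directly; this assumes you want the released version rather than a local checkout:

    pip install llumo_inference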

Setup Guide

Place this initialization code at the entry point of your application, before you create any LLM clients.

```python
from llumo_inference import initSDK, TelemetryConfig

# Initialize the telemetry
config = TelemetryConfig(
    endpoint='http://localhost:4455/api/v1/telemetry',  # Your custom telemetry API endpoint
    authToken='your-auth-token',  # Optional Auth Bearer Token
    flushDelayMillis=500  # Span buffer flush interval (default: 500 ms)
)

# Pass optional library instances if you need manual instrumentation
# config.libraries = {
#     "OpenAI": openai_client,
#     "Anthropic": anthropic_client
# }

initSDK(config)

print("Telemetry configured successfully.")

Configuration Options

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| endpoint | string | Yes | URL of your telemetry ingestion server |
| authToken | string | No | Bearer token sent in the Authorization header |
| flushDelayMillis | int | No | Interval, in milliseconds, at which buffered spans are shipped. Defaults to 500 |
| maxExportBatchSize | int | No | Maximum number of spans per export batch. Defaults to 50 |
| libraries | dict | No | Optional dict for injecting specific AI client instances |
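
For reference, here is a minimal sketch of a configuration that exercises every option in the table. It assumes that `maxExportBatchSize` can be passed to the `TelemetryConfig` constructor like the other options and that the `libraries` keys match the commented example above; the OpenAI and Anthropic clients are placeholders for instances you have already created.

```python
from llumo_inference import initSDK, TelemetryConfig
from openai import OpenAI
from anthropic import Anthropic

# Placeholder clients; both assume their API keys are set in the environment.
openai_client = OpenAI()
anthropic_client = Anthropic()

config = TelemetryConfig(
    endpoint="http://localhost:4455/api/v1/telemetry",
    authToken="your-auth-token",   # sent as a Bearer token
    flushDelayMillis=1000,         # ship buffered spans every second
    maxExportBatchSize=100,        # spans per export batch (assumed constructor option)
)

# Inject specific client instances for manual instrumentation.
config.libraries = {
    "OpenAI": openai_client,
    "Anthropic": anthropic_client,
}

initSDK(config)
```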

Features

  • Built-in Instrumentations: Supports OpenAI, Anthropic, Gemini (Vertex AI & Google GenAI), LangChain, requests, and urllib3.
  • Auto Data Sanitization: Attribute keys are reformatted to be MongoDB-compliant; problematic characters (. and $) are escaped automatically before transmission (see the sketch after this list).
  • Trace Exporters: Uses a BatchSpanProcessor with a custom FormattingExporter to produce structured, ready-to-consume payloads.
  • Performance: Asynchronous exporting via OpenTelemetry's native batching minimizes the impact on application latency.
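
The SDK's exporter is internal, but the combination of key sanitization and batched export can be illustrated with plain OpenTelemetry primitives. The sketch below is not the SDK's implementation: the `SanitizingExporter` class, the escape characters, and the payload shape are illustrative assumptions; only the `BatchSpanProcessor` wiring and the idea of escaping `.` and `$` come from this page.

```python
import requests
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    BatchSpanProcessor,
    SpanExporter,
    SpanExportResult,
)


def sanitize_key(key):
    # "." and "$" are problematic in MongoDB field names; the fullwidth
    # replacements here are placeholders, not the SDK's actual escape scheme.
    return key.replace(".", "\uff0e").replace("$", "\uff04")


class SanitizingExporter(SpanExporter):
    """Illustrative exporter: sanitizes attribute keys and POSTs span batches."""

    def __init__(self, endpoint, auth_token=None):
        self.endpoint = endpoint
        self.headers = {"Authorization": f"Bearer {auth_token}"} if auth_token else {}

    def export(self, spans):
        payload = [
            {
                "name": span.name,
                "attributes": {
                    sanitize_key(k): v for k, v in (span.attributes or {}).items()
                },
            }
            for span in spans
        ]
        try:
            requests.post(self.endpoint, json=payload, headers=self.headers, timeout=5)
            return SpanExportResult.SUCCESS
        except requests.RequestException:
            return SpanExportResult.FAILURE

    def shutdown(self):
        pass


provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        SanitizingExporter("http://localhost:4455/api/v1/telemetry"),
        schedule_delay_millis=500,   # mirrors flushDelayMillis
        max_export_batch_size=50,    # mirrors maxExportBatchSize
    )
)
trace.set_tracer_provider(provider)
```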


Download files

Download the file for your platform.

Source Distribution

llumo_inference-0.1.1.tar.gz (5.9 kB)

Built Distribution


llumo_inference-0.1.1-py3-none-any.whl (6.1 kB)

File details

Details for the file llumo_inference-0.1.1.tar.gz.

File metadata

  • Download URL: llumo_inference-0.1.1.tar.gz
  • Upload date:
  • Size: 5.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for llumo_inference-0.1.1.tar.gz
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | f51a800061fd27afecac3ea25c7f4d44393ed676fa932f7b6cb11e6fcf726e31 |
| MD5 | 90056f8243508a2422b5eaf6c29f3d34 |
| BLAKE2b-256 | 600001d9f0bd87f4c7d5bafbb5f4ec7a0172251b22bc7a64f1f2e1b0e15e9181 |


File details

Details for the file llumo_inference-0.1.1-py3-none-any.whl.

File hashes

Hashes for llumo_inference-0.1.1-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 58c2f465590d630efa3cdad7da604b2381d1431684559e95d4018b72009ce041 |
| MD5 | dd2020ce5e16f5ec5ca52b5d4f774ea8 |
| BLAKE2b-256 | 01042672d0bf165dc0488c6d6eb70ca7a0c89507fe394be33f775f5c563b06a5 |
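
If you want pip to verify these digests at install time, the published SHA256 values can be pinned in a requirements file using pip's standard hash-checking mode; listing both the sdist and wheel digests lets pip accept whichever artifact it selects:

    llumo_inference==0.1.1 \
        --hash=sha256:f51a800061fd27afecac3ea25c7f4d44393ed676fa932f7b6cb11e6fcf726e31 \
        --hash=sha256:58c2f465590d630efa3cdad7da604b2381d1431684559e95d4018b72009ce041

Installing with `pip install --require-hashes -r requirements.txt` then fails if a downloaded file does not match one of the listed digests.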

