# python-ai-tracer

Lightweight Python tracer for Watchlog AI monitoring.

python-ai-tracer is a Python instrumentation library for tracing AI/LLM calls and exporting spans to a Watchlog agent. It supports batching, persistence, and automatic environment detection (local vs. Kubernetes).
## Features

- Lightweight tracer: start and end spans to capture timing, cost, tokens, model, provider, input/output.
- Batching & persistence: automatically buffers spans on disk and flushes in the background or on-demand.
- Kubernetes-friendly: auto-detects K8s environment to select the correct agent URL.
- Configurable: customize batch size, flush intervals, retry behavior, sensitive field redaction, and more.
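To picture the batching and persistence behavior, here is a minimal stand-in sketch (the `SpanBuffer` class and its fields are illustrative, not the library's internals): ended spans accumulate in a buffer and are flushed as a batch once `batch_size` is reached, or on demand.

```python
import time
from dataclasses import dataclass, field

# Illustrative stand-in for a batching tracer's buffer. This is NOT
# python-ai-tracer's implementation, just a sketch of the
# "buffer until batch_size, then flush" idea described above.
@dataclass
class SpanBuffer:
    batch_size: int = 3
    _pending: list = field(default_factory=list)
    sent_batches: list = field(default_factory=list)

    def add(self, span: dict) -> None:
        # Buffer the span; flush automatically once the batch is full.
        self._pending.append(span)
        if len(self._pending) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        # The real library would POST the batch to the Watchlog agent;
        # here we just record it so the behavior is visible.
        if self._pending:
            self.sent_batches.append(list(self._pending))
            self._pending.clear()

buf = SpanBuffer(batch_size=2)
for name in ("validate_input", "call_llm", "handle_request"):
    buf.add({"name": name, "ended_at": time.time()})
buf.flush()  # flush the remainder on demand, like tracer.send()
```

With `batch_size=2`, the first two spans go out as one batch automatically and the third is flushed by the explicit call.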
## Installation

```bash
pip install python-ai-tracer
```
## Usage

```python
from watchlog_ai_tracer import WatchlogTracer
import time

# Initialize the tracer with your app name
tracer = WatchlogTracer(app="myapp")

# Start a new trace
tracer.start_trace()

# Root span
root_id = tracer.start_span("handle_request", metadata={"feature": "ai-summary"})

# Child span: validation
val_id = tracer.child_span(root_id, "validate_input")

# End the validation span
tracer.end_span(val_id, {
    "tokens": 0,
    "cost": 0,
    "model": "",
    "provider": "",
    "input": "",
    "output": ""
})

# Child span: LLM call
llm_id = tracer.child_span(root_id, "call_llm")

# Simulate work
time.sleep(0.5)

tracer.end_span(llm_id, {
    "tokens": 42,
    "cost": 0.002,
    "model": "gpt-4",
    "provider": "openai",
    "input": "Summarize: Hello world...",
    "output": "Hello world summary."
})

# End the root span
tracer.end_span(root_id, {})

# Flush all spans to the agent immediately
tracer.send()
```
## API

### `WatchlogTracer(config)`

- `app` (str, required): your application name.
- `agent_url` (str, optional): override the default agent endpoint.
- `batch_size` (int): number of spans per HTTP batch.
- `flush_on_span_count` (int): number of completed spans that triggers auto-enqueue.
- `auto_flush_interval` (int): milliseconds between background flushes.
- `max_retries` (int): HTTP retry attempts.
- `max_queue_size` (int): maximum number of spans to keep on disk.
- `queue_item_ttl` (int): TTL (ms) for queued spans.
- ... (see the docstrings in `__init__`).
### Methods

- `start_trace() -> trace_id`: begins a new trace.
- `start_span(name, metadata={}) -> span_id`: starts a new span.
- `child_span(parent_id, name, metadata={}) -> span_id`: shorthand for starting a nested span.
- `end_span(span_id, data={})`: ends a span and records its metrics.
- `send()`: ends any open spans, enqueues them, and immediately flushes all spans to the agent.
- `flush_queue()`: manually triggers a background queue flush (async).
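Because `end_span` should run even when the traced work raises, a `try/finally` wrapper keeps spans balanced. The sketch below shows that usage pattern; `FakeTracer` is a hypothetical stand-in exposing the same method names documented above, included only so the snippet runs without a Watchlog agent (in real code you would use `WatchlogTracer(app="myapp")`).

```python
import itertools

# Hypothetical stand-in for WatchlogTracer, NOT the library's code:
# it mirrors the documented method names so the pattern below runs.
class FakeTracer:
    def __init__(self):
        self._ids = itertools.count(1)
        self.ended = []  # records which spans were closed

    def start_trace(self):
        return next(self._ids)

    def start_span(self, name, metadata={}):
        return next(self._ids)

    def child_span(self, parent_id, name, metadata={}):
        return next(self._ids)

    def end_span(self, span_id, data={}):
        self.ended.append(span_id)

    def send(self):
        pass

tracer = FakeTracer()  # real code: WatchlogTracer(app="myapp")
tracer.start_trace()
root_id = tracer.start_span("handle_request")

llm_id = tracer.child_span(root_id, "call_llm")
try:
    raise RuntimeError("provider timeout")  # stand-in for a failing LLM call
except RuntimeError:
    pass  # handle or re-raise as appropriate
finally:
    # The span is closed whether the call succeeded or not.
    tracer.end_span(llm_id, {"tokens": 0, "cost": 0})

tracer.end_span(root_id, {})
tracer.send()
```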
## Configuration & Environment

The agent URL is determined in the following priority order:

1. Explicit config parameter: `agent_url` in the `WatchlogTracer` initialization
2. Environment variable: `WATCHLOG_AGENT_URL`
3. Auto-detection:
   - Local: defaults to `http://127.0.0.1:3774`
   - Kubernetes: if running in K8s (service account & DNS check), auto-switches to `http://watchlog-node-agent.monitoring.svc.cluster.local:3774`
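The priority order above can be sketched as a small resolver (a hypothetical helper, not the library's code; the real K8s detection checks the service account and cluster DNS, which is reduced here to a plain boolean flag):

```python
import os

LOCAL_DEFAULT = "http://127.0.0.1:3774"
K8S_DEFAULT = "http://watchlog-node-agent.monitoring.svc.cluster.local:3774"

def resolve_agent_url(agent_url=None, in_kubernetes=False):
    """Hypothetical sketch of the documented priority order:
    explicit parameter > WATCHLOG_AGENT_URL > auto-detection."""
    if agent_url:  # 1. explicit config parameter
        return agent_url
    env_url = os.environ.get("WATCHLOG_AGENT_URL")
    if env_url:    # 2. environment variable
        return env_url
    # 3. auto-detection (K8s check reduced to a flag for this sketch)
    return K8S_DEFAULT if in_kubernetes else LOCAL_DEFAULT

print(resolve_agent_url("http://my-custom-agent:3774"))
# → http://my-custom-agent:3774
```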
## Examples

```python
from watchlog_ai_tracer import WatchlogTracer

# Option 1: Pass agent_url directly
tracer = WatchlogTracer(
    app="myapp",
    agent_url="http://my-custom-agent:3774"
)

# Option 2: Use an environment variable
# export WATCHLOG_AGENT_URL=http://my-custom-agent:3774
tracer = WatchlogTracer(app="myapp")
```
## Contributing

PRs welcome! Please file issues on GitHub.
Documentation generated from version 1.0.0.