Comprehensive OpenTelemetry auto-instrumentation for LLM/GenAI applications

Project description

TraceVerde

The most comprehensive OpenTelemetry auto-instrumentation library for LLM/GenAI applications

The name combines trace, from OpenTelemetry traces, with verde, meaning green: sustainable, transparent AI observability.

Documentation | Examples | Discord | PyPI



Get Started in 30 Seconds

pip install genai-otel-instrument

import genai_otel
genai_otel.instrument()

# Your existing code works unchanged - traces, metrics, and costs are captured automatically
import openai
client = openai.OpenAI()
response = client.chat.completions.create(model="gpt-4o-mini", messages=[{"role": "user", "content": "Hello!"}])

That's it. No wrappers, no decorators, no config files. Every LLM call, database query, and agent interaction is automatically traced with full cost breakdown.

Why TraceVerde?

| Feature | TraceVerde | OpenLIT | Traceloop/OpenLLMetry | Langfuse |
|---|---|---|---|---|
| Zero-code setup | Yes | Yes | Yes | SDK required |
| LLM providers | 19+ | 25+ | 15+ | Via integrations |
| Multi-agent frameworks | 8 (CrewAI, LangGraph, ADK, AutoGen, OpenAI Agents, Pydantic AI, etc.) | Limited | Limited | Limited |
| Cost tracking | Automatic (1,050+ models) | Manual config | Manual config | Manual config |
| GPU metrics (NVIDIA + AMD) | Yes | No | No | No |
| MCP tool instrumentation | Yes (databases, caches, vector DBs, queues) | Limited | Limited | No |
| Evaluation (PII, toxicity, bias, hallucination, prompt injection) | Built-in (6 detectors) | No | No | Separate service |
| OpenTelemetry native | Yes | Yes | Yes | Partial |
| License | Apache-2.0 | Apache-2.0 | Apache-2.0 | MIT |

What Gets Instrumented?

LLM Providers (19+)

OpenAI, OpenRouter, Anthropic, Google AI, Google GenAI, AWS Bedrock, Azure OpenAI, Cohere, Mistral AI, Together AI, Groq, Ollama, Vertex AI, Replicate, HuggingFace, SambaNova, Sarvam AI, Hyperbolic, LiteLLM

See all providers with examples >>

Multi-Agent Frameworks (8)

CrewAI, LangGraph, Google ADK, AutoGen, AutoGen AgentChat, OpenAI Agents SDK, Pydantic AI, AWS Bedrock Agents

See all frameworks with examples >>

MCP Tools (20+)

Databases: PostgreSQL, MySQL, MongoDB, SQLAlchemy, TimescaleDB, OpenSearch, Elasticsearch, FalkorDB
Caching: Redis
Queues: Kafka, RabbitMQ
Storage: MinIO
Vector DBs: Pinecone, Weaviate, Qdrant, ChromaDB, Milvus, FAISS, LanceDB

See all MCP tools >>

Built-in Evaluation (6 Detectors)

PII Detection (GDPR/HIPAA/PCI-DSS), Toxicity Detection, Bias Detection, Prompt Injection Detection, Restricted Topics, Hallucination Detection

See all evaluation features with examples >>

Screenshots

OpenAI Traces

OpenAI traces with token usage, costs, and latency

More screenshots

Ollama (Local LLM)

SmolAgents with Tool Calls

GPU Metrics

OpenSearch Dashboard

Key Features

Automatic Cost Tracking

1,050+ models across 30+ providers with per-request cost breakdown. Supports differential pricing (prompt vs completion), reasoning tokens, cache pricing, and custom model pricing.

# Cost tracking is enabled by default - just instrument and go
genai_otel.instrument()

# Or add custom pricing for proprietary models (shell, set before starting your app)
export GENAI_CUSTOM_PRICING_JSON='{"chat":{"my-model":{"promptPrice":0.001,"completionPrice":0.002}}}'

Cost tracking guide >>
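To make the differential pricing concrete, here is a small sketch of how a custom price table composes into a per-request cost. It assumes prices are quoted per 1K tokens, which is the common convention (check the cost tracking guide for the exact units TraceVerde uses); the model name and prices are illustrative, and the `request_cost` helper is hypothetical, not part of the library.

```python
import json
import os

# Hypothetical per-1K-token prices for a proprietary model (illustrative values)
pricing = {"chat": {"my-model": {"promptPrice": 0.001, "completionPrice": 0.002}}}

# Equivalent to the shell export above; set before instrumenting
os.environ["GENAI_CUSTOM_PRICING_JSON"] = json.dumps(pricing)

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Per-request cost under the differential pricing above (per-1K-token assumption)."""
    p = pricing["chat"]["my-model"]
    return (prompt_tokens / 1000) * p["promptPrice"] + (completion_tokens / 1000) * p["completionPrice"]

print(round(request_cost(1500, 500), 6))  # 1.5 * 0.001 + 0.5 * 0.002 = 0.0025
```

With this in place, spans for `my-model` calls would carry the computed cost alongside token counts, the same way the built-in price table works for known models.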

GPU Metrics (NVIDIA + AMD)

Real-time monitoring of utilization, memory, temperature, power, PCIe throughput, throttling, and ECC errors. Multi-GPU aggregate metrics included.

pip install genai-otel-instrument[gpu]      # NVIDIA
pip install genai-otel-instrument[amd-gpu]  # AMD

GPU metrics guide >>

Multi-Agent Tracing

Complete span hierarchy for agent frameworks with automatic context propagation:

Crew Execution
  +-- Agent: Senior Researcher (gpt-4)
  |     +-- Task: Research OpenTelemetry
  |           +-- openai.chat.completions (tokens: 1250, cost: $0.03)
  +-- Agent: Technical Writer (ollama:llama2)
        +-- Task: Write blog post
              +-- ollama.chat (tokens: 890, cost: $0.00)
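Because the hierarchy above is ordinary OpenTelemetry parent/child spans, downstream tooling can aggregate it generically. A minimal sketch of rolling token and cost usage up to the owning agent; the span dicts and field names here are illustrative stand-ins, not TraceVerde's exact export format:

```python
# Illustrative flat span records mirroring the crew trace above
spans = [
    {"name": "openai.chat.completions", "agent": "Senior Researcher", "tokens": 1250, "cost": 0.03},
    {"name": "ollama.chat", "agent": "Technical Writer", "tokens": 890, "cost": 0.00},
]

def totals_by_agent(spans):
    """Roll LLM token/cost usage up to the owning agent span."""
    out = {}
    for s in spans:
        agg = out.setdefault(s["agent"], {"tokens": 0, "cost": 0.0})
        agg["tokens"] += s["tokens"]
        agg["cost"] += s["cost"]
    return out

print(totals_by_agent(spans))
```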

Multimodal Observability (v1.1.0)

First-class capture of image, audio, video, and document content parts on OpenAI, Anthropic, Google Gemini, and Groq spans. Bytes are offloaded to your configured object store (MinIO / S3 / filesystem / HTTP) and referenced from spans by URI — they never appear inline in span attributes.

# Opt in (default is off — text-only behaviour is byte-identical to 1.0.x)
export GENAI_OTEL_MEDIA_CAPTURE_MODE=full
export GENAI_OTEL_MEDIA_STORE=minio
export GENAI_OTEL_MEDIA_STORE_ENDPOINT=http://localhost:9000
export GENAI_OTEL_MEDIA_STORE_ACCESS_KEY=...
export GENAI_OTEL_MEDIA_STORE_SECRET_KEY=...
# Optional: plug in a redactor before upload
export GENAI_OTEL_MEDIA_REDACTOR=genai_otel.media.redactors.face_blur

Spans get a flat, queryable attribute namespace — gen_ai.prompt.{n}.content.{m}.{type, media_uri, media_mime_type, media_byte_size, media_source} — that is being proposed upstream to OpenTelemetry semantic-conventions (issue #3672).

Multimodal guide >>
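To make the flat attribute namespace concrete, here is a sketch of how one text part and one offloaded image part on the first prompt message might flatten into span attributes. The `flatten_content` helper and all values are hypothetical; only the key pattern follows the namespace described above.

```python
def flatten_content(prompt_idx, parts):
    """Flatten content parts into the gen_ai.prompt.{n}.content.{m}.* namespace."""
    attrs = {}
    for m, part in enumerate(parts):
        prefix = f"gen_ai.prompt.{prompt_idx}.content.{m}."
        for key, value in part.items():
            attrs[prefix + key] = value
    return attrs

parts = [
    {"type": "text"},
    {
        "type": "image",
        "media_uri": "s3://traces-media/abc123.png",  # bytes offloaded to the object store
        "media_mime_type": "image/png",
        "media_byte_size": 48213,
        "media_source": "inline_base64",
    },
]
attrs = flatten_content(0, parts)
print(attrs["gen_ai.prompt.0.content.1.media_mime_type"])  # image/png
```

Because every key is a flat string, backends can query media parts without parsing nested JSON out of span attributes.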

Safety & Evaluation

genai_otel.instrument(
    enable_pii_detection=True,       # GDPR/HIPAA/PCI-DSS compliance
    enable_toxicity_detection=True,  # Perspective API + Detoxify
    enable_bias_detection=True,      # 8 bias categories
    enable_prompt_injection_detection=True,
    enable_hallucination_detection=True,
    enable_restricted_topics=True,
)

Evaluation guide with 50+ examples >>

Configuration

# Required
export OTEL_SERVICE_NAME=my-llm-app
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318

# Optional
export GENAI_ENABLE_GPU_METRICS=true
export GENAI_ENABLE_COST_TRACKING=true
export GENAI_SAMPLING_RATE=0.5                    # Reduce volume in production
export GENAI_ENABLED_INSTRUMENTORS=openai,crewai  # Select specific instrumentors

Full configuration reference >>
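The same configuration can be applied from Python when shell exports are inconvenient (e.g. notebooks). A sketch, assuming, as is typical for env-driven configuration, that the variables are read when instrument() is called, so they must be set first:

```python
import os

# Equivalent to the shell exports above; set before calling genai_otel.instrument()
os.environ["OTEL_SERVICE_NAME"] = "my-llm-app"
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"
os.environ["GENAI_ENABLE_GPU_METRICS"] = "true"
os.environ["GENAI_SAMPLING_RATE"] = "0.5"
os.environ["GENAI_ENABLED_INSTRUMENTORS"] = "openai,crewai"

# import genai_otel
# genai_otel.instrument()
```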

Backend Integration

Works with any OpenTelemetry-compatible backend:

Jaeger, Zipkin, Prometheus, Grafana, Datadog, New Relic, Honeycomb, AWS X-Ray, Google Cloud Trace, Elastic APM, Splunk, SigNoz, self-hosted OTel Collector

Pre-built Grafana dashboard templates included.
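For a quick local backend, a Jaeger all-in-one container works as an OTLP target. A sketch assuming Docker is available; 16686 is Jaeger's standard UI port and 4318 its OTLP/HTTP ingest port (the COLLECTOR_OTLP_ENABLED flag is required on older Jaeger versions and harmless on newer ones):

```shell
# Jaeger all-in-one: UI on 16686, OTLP/HTTP ingest on 4318
docker run --rm -d --name jaeger \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 -p 4318:4318 \
  jaegertracing/all-in-one:latest

# Point the instrumentation at it
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```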

Examples

90+ ready-to-run examples covering every provider, framework, and evaluation feature:

examples/
+-- openai/              # OpenAI chat, embeddings
+-- anthropic/           # Anthropic + PII/toxicity detection
+-- ollama/              # Local models + all evaluation features
+-- crewai_example.py    # Multi-agent crew orchestration
+-- langgraph_example.py # Stateful graph workflows
+-- google_adk_example.py # Google Agent Development Kit
+-- autogen_example.py   # Microsoft AutoGen agents
+-- pii_detection/       # 10 PII examples (GDPR, HIPAA, PCI-DSS)
+-- toxicity_detection/  # 8 toxicity examples
+-- bias_detection/      # 8 bias examples (hiring compliance, etc.)
+-- prompt_injection/    # 6 injection defense examples
+-- hallucination/       # 4 hallucination detection examples
+-- ...                  # And many more

Browse all examples >>

Who Uses TraceVerde?

TraceVerde is used by developers and teams building production GenAI applications. If you're using TraceVerde, we'd love to hear from you!

Add your company | Join Discord

Community

License

Copyright 2025 Kshitij Thakkar. Licensed under the Apache License 2.0.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

genai_otel_instrument-1.2.1.tar.gz (2.4 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

genai_otel_instrument-1.2.1-py3-none-any.whl (266.3 kB)

Uploaded Python 3

File details

Details for the file genai_otel_instrument-1.2.1.tar.gz.

File metadata

  • Download URL: genai_otel_instrument-1.2.1.tar.gz
  • Upload date:
  • Size: 2.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes

Hashes for genai_otel_instrument-1.2.1.tar.gz
Algorithm Hash digest
SHA256 f616d99469bf245c05f8f1bd632be00386c296363e469b79dff8c3aafc301829
MD5 89b0b441f4921b9fba746c7a282c62f5
BLAKE2b-256 1f0d7c70572727e235b0a63ed7a09460bfa54e790e39f9a9f33a56383cbf370d

See more details on using hashes here.

File details

Details for the file genai_otel_instrument-1.2.1-py3-none-any.whl.

File metadata

File hashes

Hashes for genai_otel_instrument-1.2.1-py3-none-any.whl
Algorithm Hash digest
SHA256 7841f375aaa9c9e79926c13658657a59d61e1a902e12fbac61ab4c00b6303f80
MD5 98a5efd60055d878cf5de43bc366ac1a
BLAKE2b-256 8ef3059d9ceef456412659dda3db58fb0dc588aa9f175286871afbafba5e0b43

See more details on using hashes here.
