A Python library that meters LiteLLM usage to Revenium with context-based metadata injection and framework integrations.
Revenium Middleware for LiteLLM
Transparent Python middleware for automatic Revenium usage tracking with LiteLLM
A professional-grade Python middleware that seamlessly integrates with LiteLLM to provide automatic usage tracking, billing analytics, and comprehensive metadata collection. Features drop-in integration with zero code changes required and supports both client-side middleware and server-side proxy callbacks.
Features
- Seamless Integration - Drop-in middleware, just import and go
- Optional Metadata - Track users, organizations, and business context (all fields optional)
- Two Integration Patterns - Client-side middleware or server-side proxy callbacks
- Decorator-Based Tracking - Simple @track_agent and @track_task decorators for automatic metadata injection
- CrewAI Integration - Pre-built wrapper for AI agent frameworks
- All Providers - Works with any LLM provider supported by LiteLLM
- Fire-and-Forget - Never blocks your application flow
- Accurate Pricing - Automatic cost calculation based on model and tokens
Getting Started
1. Install Package
```bash
# Create project directory and navigate to it
mkdir my-litellm-project
cd my-litellm-project

# Create virtual environment
python -m venv .venv

# Activate virtual environment
source .venv/bin/activate

# Install packages (run after activation)
pip install revenium-middleware-litellm python-dotenv

# For CrewAI support
pip install "revenium-middleware-litellm[crewai]"
```
2. Configure Environment Variables
Create a .env file in your project root. See .env.example for all available configuration options.
Minimum required configuration:
```
REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
REVENIUM_METERING_BASE_URL=https://api.revenium.ai
LITELLM_PROXY_URL=https://your-litellm-proxy.com
LITELLM_API_KEY=sk-your_proxy_key_here
```
NOTE: Replace the placeholder values with your actual API keys.
3. Run Your First Example
For complete examples and usage patterns, see examples/README.md.
Quick start:
```bash
# Run the getting started example
python examples/getting_started.py
```
Or use this simple inline code:
```python
from dotenv import load_dotenv
load_dotenv()  # Load environment variables from .env file before the middleware initializes

import revenium_middleware_litellm_client.middleware  # Auto-initializes on import

import os
import litellm

# Configure LiteLLM to use the proxy
litellm.api_base = os.getenv("LITELLM_PROXY_URL")
litellm.api_key = os.getenv("LITELLM_API_KEY")

# This call is automatically metered by the middleware
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
That's it! The middleware automatically meters all LiteLLM API calls.
Requirements
- Python 3.8+
- LiteLLM 1.0.0+
- Works with all LLM providers supported by LiteLLM
What Gets Tracked
The middleware automatically captures comprehensive usage data:
Usage Metrics
- Token Counts - Input tokens, output tokens, total tokens
- Model Information - Model name, provider, API version
- Request Timing - Request duration, response time
- Cost Calculation - Estimated costs based on current pricing
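As an illustration of the cost calculation, the estimate is just token counts multiplied by per-token prices. The sketch below uses hypothetical example rates; the middleware itself relies on current model pricing, not these numbers:

```python
# Hypothetical per-1M-token prices (illustrative only, not real pricing data).
PRICES_PER_1M = {
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from token counts and per-1M-token prices."""
    prices = PRICES_PER_1M[model]
    return (input_tokens * prices["input"] + output_tokens * prices["output"]) / 1_000_000

cost = estimate_cost("gpt-4o-mini", input_tokens=1200, output_tokens=300)
print(f"${cost:.6f}")  # → $0.000360
```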
Business Context (Optional)
- User Tracking - Subscriber ID, email, credentials
- Organization Data - Organization ID, subscription ID, product ID
- Task Classification - Task type, agent identifier, trace ID
- Quality Metrics - Response quality scores
Technical Details
- API Endpoints - Chat completions via LiteLLM
- Request Types - Streaming vs non-streaming
- Error Tracking - Failed requests, error types
- Provider Info - LLM provider detection via LiteLLM
Metadata Fields
Add business context to track usage by organization, user, task type, or custom fields. Pass a usage_metadata dictionary with any of these optional fields:
| Field | Description | Use Case |
|---|---|---|
| `trace_id` | Unique identifier for session or conversation tracking | Link multiple API calls together for debugging, user session analytics, or distributed tracing across services |
| `task_type` | Type of AI task being performed | Categorize usage by workload (e.g., "chat", "code-generation", "doc-summary") for cost analysis and optimization |
| `subscriber.id` | Unique user identifier | Track individual user consumption for billing, rate limiting, or user analytics |
| `subscriber.email` | User email address | Identify users for support, compliance, or usage reports |
| `subscriber.credential.name` | Authentication credential name | Track which API key or service account made the request |
| `subscriber.credential.value` | Authentication credential value | Associate usage with specific credentials for security auditing |
| `organization_id` | Organization or company identifier | Multi-tenant cost allocation, usage quotas per organization |
| `subscription_id` | Subscription plan identifier | Track usage against subscription limits, identify plan upgrade opportunities |
| `product_id` | Your product or feature identifier | Attribute AI costs to specific features in your application (e.g., "chatbot", "email-assistant") |
| `agent` | AI agent or bot identifier | Distinguish between multiple AI agents or automation workflows in your system |
| `response_quality_score` | Custom quality rating (0.0-1.0) | Track user satisfaction or automated quality metrics for model performance analysis |
Resources:
- API Reference - Complete metadata field documentation
Configuration Options
Environment Variables
For a complete list of all available environment variables with examples, see .env.example.
Key variables:
- `REVENIUM_METERING_API_KEY` - Your Revenium API key (required)
- `REVENIUM_METERING_BASE_URL` - Revenium API endpoint (default: https://api.revenium.ai)
- `LITELLM_PROXY_URL` - Your LiteLLM proxy URL
- `LITELLM_API_KEY` - Your LiteLLM proxy API key
- `REVENIUM_LOG_LEVEL` - Logging level (DEBUG, INFO, WARNING, ERROR)
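A minimal sketch of reading these variables in your own code, failing fast on the required key and applying the documented default for the base URL (the helper name is illustrative, not part of the library):

```python
import os

def load_revenium_config(env=os.environ):
    """Read Revenium settings from the environment, applying documented defaults."""
    api_key = env.get("REVENIUM_METERING_API_KEY")
    if not api_key:
        raise RuntimeError("REVENIUM_METERING_API_KEY is not set")
    return {
        "api_key": api_key,
        "base_url": env.get("REVENIUM_METERING_BASE_URL", "https://api.revenium.ai"),
        "log_level": env.get("REVENIUM_LOG_LEVEL", "INFO"),
    }
```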
Examples
The package includes comprehensive examples in the examples/ directory.
Getting Started
```bash
python examples/getting_started.py
```
Available Examples
| Example | File | Description |
|---|---|---|
| Getting Started | `getting_started.py` | Basic client middleware with metadata |
| Proxy Headers | `litellm_proxy_example.py` | Server-side via HTTP headers |
| CrewAI Integration | `crewai_decorator_example.py` | Multi-agent workflow tracking |
See examples/README.md for detailed documentation of all examples.
Decorator-Based Tracking
Use decorators for automatic metadata injection:
| Decorator | Purpose |
|---|---|
| `@track_agent()` | Identify the AI agent |
| `@track_task()` | Classify the type of work |
| `@track_trace()` | Set trace ID for distributed tracing |
| `@track_organization()` | Track multi-tenant organizations |
| `@track_subscription()` | Track subscription-based billing |
| `@track_product()` | Track product-specific usage |
| `@track_subscriber()` | Identify end users |
| `@track_quality()` | Track response quality scores |
All decorators support static values, extraction from function arguments (`name_from_arg`), or extraction from object attributes (`name_from_attr`).
See examples/README.md for detailed decorator documentation and usage patterns.
Proxy Middleware
For server-side integration, add the callback to your LiteLLM config.yaml:
```yaml
litellm_settings:
  callbacks: ["revenium_middleware_litellm_proxy.middleware.proxy_handler_instance"]
```
When using the LiteLLM proxy, pass metadata via HTTP headers (x-revenium-*).
See examples/README.md for proxy header reference and examples.
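Since metadata travels to the proxy as x-revenium-* HTTP headers, building them can be sketched as below. The exact header names are assumptions following that pattern; check the proxy header reference in examples/README.md for the canonical list:

```python
# Map metadata fields onto x-revenium-* headers.
# Header names are illustrative; verify against the proxy header reference.
def revenium_headers(metadata: dict) -> dict:
    return {f"x-revenium-{key.replace('_', '-')}": str(value)
            for key, value in metadata.items()}

headers = revenium_headers({"trace_id": "conv-28a7e9d4", "task_type": "chat"})
# These headers are then sent with the chat completion request, e.g.:
# requests.post(f"{proxy_url}/v1/chat/completions", headers=headers, json=payload)
```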
CrewAI Integration
Pre-built wrapper for tracking CrewAI agent executions.
Note: CrewAI requires Python 3.12 or earlier (Python 3.13+ not yet supported by CrewAI dependencies).
See CrewAI Integration Guide for detailed documentation.
Logging
Control log level via environment variable:
```bash
export REVENIUM_LOG_LEVEL=DEBUG  # DEBUG, INFO, WARNING, ERROR, CRITICAL
```
Documentation
- Full Documentation
- Examples README - Detailed code examples and decorator reference
- CrewAI Integration Guide
Contributing
See CONTRIBUTING.md
Code of Conduct
Security
See SECURITY.md
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
For issues, feature requests, or contributions:
- Website: www.revenium.ai
- GitHub Repository: revenium/revenium-middleware-litellm-proxy-python
- Issues: Report bugs or request features
- Documentation: docs.revenium.io
- Email: support@revenium.io
Built by Revenium