Revenium Python SDK

The official Revenium Python SDK — unified AI metering middleware for deeply attributed AI usage metrics. Supports OpenAI, Anthropic, Google (Gemini/Vertex AI), Ollama, LiteLLM, and Perplexity.

Features

  • Unified SDK: Single package with middleware for all major AI providers
  • Asynchronous Processing: Background thread management for non-blocking metering operations
  • Graceful Shutdown: Ensures all metering data is properly sent even during application shutdown
  • Decorator Support: @revenium_meter and @revenium_metadata for easy integration
  • Tool Metering: Meter arbitrary tool/function calls alongside LLM API metering
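
The graceful-shutdown guarantee above can be pictured with a standard-library flush-on-exit pattern; the sketch below is illustrative only and does not show the SDK's internals. A daemon worker drains a queue of metering records, and an `atexit` hook makes sure the queue is empty before the interpreter exits.

```python
import atexit
import queue
import threading

records = queue.Queue()
sent = []
_stopped = False

def _worker():
    while True:
        item = records.get()
        try:
            if item is None:  # sentinel: stop the worker
                return
            sent.append(item)  # stand-in for an HTTP POST to the metering API
        finally:
            records.task_done()

_thread = threading.Thread(target=_worker, daemon=True)
_thread.start()

def flush_on_shutdown():
    global _stopped
    if _stopped:
        return
    _stopped = True
    records.join()     # wait until every queued record has been processed
    records.put(None)  # ask the worker to stop
    _thread.join()

atexit.register(flush_on_shutdown)

records.put({"model": "gpt-4o", "total_tokens": 700})
```

The key property is that `records.join()` blocks until every record handed to the worker has been acknowledged, so nothing queued before shutdown is lost.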

Supported Providers

| Provider | Extra | Install Command |
|----------|-------|-----------------|
| OpenAI | `openai` | `pip install revenium-python-sdk[openai]` |
| Anthropic | `anthropic` | `pip install revenium-python-sdk[anthropic]` |
| Google Gemini | `google-genai` | `pip install revenium-python-sdk[google-genai]` |
| Google Vertex AI | `google-vertex` | `pip install revenium-python-sdk[google-vertex]` |
| Ollama | `ollama` | `pip install revenium-python-sdk[ollama]` |
| LiteLLM | `litellm` | `pip install revenium-python-sdk[litellm]` |
| LiteLLM Proxy | `litellm-proxy` | `pip install revenium-python-sdk[litellm-proxy]` |
| Perplexity (OpenAI) | `perplexity-openai` | `pip install revenium-python-sdk[perplexity-openai]` |
| Perplexity (Native) | `perplexity-native` | `pip install revenium-python-sdk[perplexity-native]` |
| LangChain | `langchain` | `pip install revenium-python-sdk[langchain]` |

Installation

```bash
# Core SDK
pip install revenium-python-sdk

# With a specific provider
pip install revenium-python-sdk[openai]

# Multiple providers
pip install revenium-python-sdk[openai,anthropic,ollama]
```

Quick Start

```python
from revenium_middleware import client, run_async_in_thread, shutdown_event

# Record usage directly
client.record_usage(
    model="gpt-4o",
    prompt_tokens=500,
    completion_tokens=200,
    user_id="user123",
    session_id="session456"
)

# Run async metering tasks in background threads
async def async_metering_task():
    await client.async_record_usage(
        model="gpt-3.5-turbo",
        prompt_tokens=300,
        completion_tokens=150,
        user_id="user789"
    )

thread = run_async_in_thread(async_metering_task())

# Application continues while metering happens in background
```
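
Conceptually, running a coroutine to completion on a background thread takes only the standard library; the helper below is a hypothetical sketch of that pattern, not the SDK's actual `run_async_in_thread` implementation.

```python
import asyncio
import threading

def run_async_in_thread_sketch(coro):
    """Start a coroutine on a daemon thread; return the thread and a result holder."""
    holder = {}

    def runner():
        # asyncio.run gives the worker thread its own event loop.
        holder["result"] = asyncio.run(coro)

    thread = threading.Thread(target=runner, daemon=True)
    thread.start()
    return thread, holder

async def metering_task():
    await asyncio.sleep(0.01)  # stand-in for an async HTTP call to the metering API
    return "sent"

thread, holder = run_async_in_thread_sketch(metering_task())
# ... the application keeps working here ...
thread.join()
```

Because each worker thread owns its event loop, the caller's thread is never blocked by the metering I/O.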

Provider-Specific Usage

Each provider has its own middleware module. See the examples/ directory for detailed usage:

  • examples/openai/ — OpenAI and Azure OpenAI examples
  • examples/anthropic/ — Anthropic and Bedrock examples
  • examples/google/ — Google AI and Vertex AI examples
  • examples/ollama/ — Ollama examples
  • examples/litellm/ — LiteLLM client and proxy examples
  • examples/perplexity/ — Perplexity examples

Tool Metering

The meter_tool decorator lets you meter arbitrary tool/function calls (web scrapers, image generators, database lookups, etc.) alongside your LLM API metering. This is available via revenium_metering v6.8.2+.

```python
from revenium_middleware import meter_tool, configure

# Configure the metering client
configure(
    metering_url="https://api.revenium.io/meter",
    api_key="your-api-key",
)

# Decorate any tool function to automatically meter it
@meter_tool("my-web-scraper", operation="scrape")
def scrape_website(url):
    # Your scraping logic here
    return {"pages": 5, "data_mb": 2.3}

# The decorator captures timing, success/failure, and reports to Revenium
result = scrape_website("https://example.com")
```
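
Under the hood, a decorator like this has to capture wall-clock duration and success or failure around the wrapped call. Here is a minimal self-contained sketch of that pattern (not the SDK's code; reporting is replaced by an in-memory list):

```python
import functools
import time

REPORTS = []  # stand-in for the Revenium metering client

def meter_tool_sketch(tool_id, operation=None):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = func(*args, **kwargs)
                success = True
                return result
            except Exception:
                success = False
                raise
            finally:
                # Runs on both the success and the exception path.
                REPORTS.append({
                    "tool_id": tool_id,
                    "operation": operation,
                    "duration_ms": (time.perf_counter() - start) * 1000,
                    "success": success,
                })
        return wrapper
    return decorator

@meter_tool_sketch("my-web-scraper", operation="scrape")
def scrape(url):
    return {"pages": 5}

scrape("https://example.com")
```

The `try`/`finally` shape is what guarantees a report is emitted even when the tool raises.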

You can also report tool calls manually:

```python
from revenium_middleware import report_tool_call

report_tool_call(
    tool_id="my-tool",
    operation="fetch",
    duration_ms=1234,
    success=True,
    usage_metadata={"records": 42},
)
```

Compatibility

  • Python 3.8+
  • Compatible with all supported AI providers

Logging

This module uses Python's standard logging system. You can control the log level by setting the REVENIUM_LOG_LEVEL environment variable:

```bash
# Enable debug logging
export REVENIUM_LOG_LEVEL=DEBUG

# Or when running your script
REVENIUM_LOG_LEVEL=DEBUG python your_script.py
```

Available log levels:

  • DEBUG: Detailed debugging information
  • INFO: General information (default)
  • WARNING: Warning messages only
  • ERROR: Error messages only
  • CRITICAL: Critical error messages only
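
Mapping such an environment variable onto Python's standard logging levels takes a few lines; the snippet below is a hedged sketch of the pattern, and the SDK's internal wiring may differ.

```python
import logging
import os

os.environ.setdefault("REVENIUM_LOG_LEVEL", "DEBUG")  # pretend the user exported it

# Level names like "DEBUG" are attributes on the logging module.
level_name = os.environ["REVENIUM_LOG_LEVEL"].upper()
level = getattr(logging, level_name, logging.INFO)  # fall back to INFO on bad values

logger = logging.getLogger("revenium_middleware")
logger.setLevel(level)
logger.debug("debug logging enabled")
```

Using `getattr` with a default keeps a typo in the variable from crashing startup.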

Documentation

For detailed documentation, visit docs.revenium.io

Contributing

See CONTRIBUTING.md

Code of Conduct

See CODE_OF_CONDUCT.md

Security

See SECURITY.md

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Built by the Revenium team
