A Python library that provides core functionality to send AI metering data to Revenium with decorator support.

Revenium Core Middleware


A foundational library that provides core metering functionality shared across all Revenium AI provider-specific middleware implementations (OpenAI, Anthropic, Ollama, etc.).

Features

  • **Shared Core Functionality**: Provides the essential metering infrastructure used by all Revenium middleware implementations
  • **Asynchronous Processing**: Background thread management for non-blocking metering operations
  • **Graceful Shutdown**: Ensures all metering data is properly sent even during application shutdown
  • **Provider Agnostic**: Designed to work with any AI provider through specific middleware implementations

Installation

pip install revenium-middleware

Usage

Direct Usage

While this package is primarily intended as a dependency for provider-specific middleware, you can use it directly:

from revenium_middleware import client, run_async_in_thread, shutdown_event

# Record usage directly
client.record_usage(
    model="gpt-4o",
    prompt_tokens=500,
    completion_tokens=200,
    user_id="user123",
    session_id="session456"
)

# Run async metering tasks in background threads
async def async_metering_task():
    await client.async_record_usage(
        model="gpt-3.5-turbo",
        prompt_tokens=300,
        completion_tokens=150,
        user_id="user789"
    )

thread = run_async_in_thread(async_metering_task())

# Application continues while metering happens in background
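Under the hood, run_async_in_thread can be pictured as a small helper that drives a coroutine to completion on a daemon thread so the caller is never blocked by metering I/O. The following is a self-contained sketch of that pattern; the helper and the stand-in task are illustrative, not the library's actual implementation:

```python
import asyncio
import threading

def run_async_in_thread(coro):
    # Run a coroutine to completion on a background daemon thread,
    # returning the Thread so the caller can join it during shutdown.
    thread = threading.Thread(target=asyncio.run, args=(coro,), daemon=True)
    thread.start()
    return thread

results = []

async def fake_metering_task():
    await asyncio.sleep(0.01)  # stand-in for an HTTP call to the metering API
    results.append("recorded")

t = run_async_in_thread(fake_metering_task())
t.join()  # in a real application you would keep working instead of joining
```

Joining the thread (or waiting on a shutdown signal) before exit is what makes the "graceful shutdown" guarantee possible: the process does not terminate while a metering task is still in flight.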

Building Provider-Specific Middleware

This library is designed to be extended by provider-specific middleware implementations:

from revenium_middleware import client, run_async_in_thread

# Example of how a provider-specific middleware might use the core
def record_provider_usage(response_data, metadata):
    # Extract token counts from provider-specific response format
    prompt_tokens = response_data.usage.prompt_tokens
    completion_tokens = response_data.usage.completion_tokens
    
    # Use the core client to record the usage
    run_async_in_thread(
        client.async_record_usage(
            model=response_data.model,
            prompt_tokens=prompt_tokens,
            completion_tokens=completion_tokens,
            **metadata
        )
    )
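For illustration, here is how that extraction step behaves against a mocked response object. The SimpleNamespace response below is a hypothetical stand-in shaped like the OpenAI SDK's usage object, not a real provider payload:

```python
from types import SimpleNamespace

# Hypothetical provider response, shaped like the OpenAI SDK's usage object
response_data = SimpleNamespace(
    model="gpt-4o",
    usage=SimpleNamespace(prompt_tokens=500, completion_tokens=200),
)

def extract_usage(response_data):
    # Pull the provider-specific token counts into a plain dict
    # before handing them to the core client
    return {
        "model": response_data.model,
        "prompt_tokens": response_data.usage.prompt_tokens,
        "completion_tokens": response_data.usage.completion_tokens,
    }

usage = extract_usage(response_data)
```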

Tool Metering

The meter_tool decorator lets you meter arbitrary tool/function calls (web scrapers, image generators, database lookups, etc.) alongside your LLM API metering. It requires revenium_metering v6.8.2 or later.

from revenium_middleware import meter_tool, configure

# Configure the metering client
configure(
    metering_url="https://api.revenium.io/meter",
    api_key="your-api-key",
)

# Decorate any tool function to automatically meter it
@meter_tool("my-web-scraper", operation="scrape")
def scrape_website(url):
    # Your scraping logic here
    return {"pages": 5, "data_mb": 2.3}

# The decorator captures timing, success/failure, and reports to Revenium
result = scrape_website("https://example.com")
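To see what the decorator does conceptually, here is a simplified, self-contained stand-in that captures timing and success/failure as described above. It appends reports to a local list instead of sending them to Revenium, and it is not the library's actual implementation:

```python
import functools
import time

reports = []  # stand-in for the real reporting pipeline

def meter_tool(tool_id, operation=None):
    # Simplified sketch: time the call, record success or failure,
    # and emit a report even when the tool raises.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            success = False
            try:
                result = fn(*args, **kwargs)
                success = True
                return result
            finally:
                reports.append({
                    "tool_id": tool_id,
                    "operation": operation,
                    "duration_ms": int((time.monotonic() - start) * 1000),
                    "success": success,
                })
        return wrapper
    return decorator

@meter_tool("my-web-scraper", operation="scrape")
def scrape_website(url):
    return {"pages": 5}

result = scrape_website("https://example.com")
```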

You can also report tool calls manually:

from revenium_middleware import report_tool_call

report_tool_call(
    tool_id="my-tool",
    operation="fetch",
    duration_ms=1234,
    success=True,
    usage_metadata={"records": 42},
)

Compatibility

  • Python 3.8+
  • Compatible with all Revenium provider-specific middleware implementations

Logging

This module uses Python's standard logging system. You can control the log level by setting the REVENIUM_LOG_LEVEL environment variable:

# Enable debug logging
export REVENIUM_LOG_LEVEL=DEBUG

# Or when running your script
REVENIUM_LOG_LEVEL=DEBUG python your_script.py

Available log levels:

  • DEBUG: Detailed debugging information
  • INFO: General information (default)
  • WARNING: Warning messages only
  • ERROR: Error messages only
  • CRITICAL: Critical error messages only

Documentation

For detailed documentation, visit docs.revenium.io

Contributing

See CONTRIBUTING.md

Code of Conduct

See CODE_OF_CONDUCT.md

Security

See SECURITY.md

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Built by the Revenium team
