
Revenium Core Middleware


A foundational library that provides core metering functionality shared across all Revenium AI provider-specific middleware implementations (OpenAI, Anthropic, Ollama, etc.).

Features

  • **Shared Core Functionality**: Provides the essential metering infrastructure used by all Revenium middleware implementations
  • **Decorator Support**: Optional decorators for selective metering and metadata injection
  • **Asynchronous Processing**: Background thread management for non-blocking metering operations
  • **Graceful Shutdown**: Ensures all metering data is properly sent even during application shutdown
  • **Provider Agnostic**: Designed to work with any AI provider through specific middleware implementations

Installation

pip install revenium-middleware

Usage

Direct Usage

While this package is primarily intended as a dependency for provider-specific middleware, you can use it directly:

from revenium_middleware import client, run_async_in_thread, shutdown_event

# Record usage directly
client.record_usage(
    model="gpt-4o",
    prompt_tokens=500,
    completion_tokens=200,
    user_id="user123",
    session_id="session456"
)

# Run async metering tasks in background threads
async def async_metering_task():
    await client.async_record_usage(
        model="gpt-3.5-turbo",
        prompt_tokens=300,
        completion_tokens=150,
        user_id="user789"
    )

thread = run_async_in_thread(async_metering_task())

# Application continues while metering happens in background
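The background-thread pattern above can be sketched with only the standard library. This is an illustrative stand-in for what a helper like run_async_in_thread might do (spawn a thread that drives the coroutine on its own event loop); it is not the library's actual implementation, and fake_metering_task stands in for a real network call:

```python
import asyncio
import threading

def run_async_in_thread(coro):
    """Run a coroutine to completion on a daemon thread (illustrative sketch)."""
    thread = threading.Thread(target=asyncio.run, args=(coro,), daemon=True)
    thread.start()
    return thread

results = []

async def fake_metering_task():
    await asyncio.sleep(0.01)  # stand-in for an HTTP call to the metering API
    results.append("sent")

thread = run_async_in_thread(fake_metering_task())
thread.join()  # joined here only to show completion; real code lets it run in the background
```

Because the returned object is a plain thread, an application can join it during shutdown to make sure pending metering work has finished.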

Decorator Support (New in 0.4.0)

The core library now provides decorators for selective metering and metadata injection:

from revenium_middleware import revenium_meter, revenium_metadata

# Selective metering - only meter decorated functions
@revenium_meter(metadata={'task_type': 'analysis'})
def analyze_data(data):
    # AI API calls here will be metered
    pass

# Metadata injection - automatically inject metadata into all API calls
@revenium_metadata(org_id="acme", task_type="chat")
def chat_handler(message):
    # All AI API calls here automatically get the metadata
    pass

# Check if selective metering is enabled
from revenium_middleware import is_selective_metering_enabled
if is_selective_metering_enabled():
    # Only decorated functions will be metered
    pass

Building Provider-Specific Middleware

This library is designed to be extended by provider-specific middleware implementations:

from revenium_middleware import client, run_async_in_thread

# Example of how a provider-specific middleware might use the core
def record_provider_usage(response_data, metadata):
    # Extract token counts from provider-specific response format
    prompt_tokens = response_data.usage.prompt_tokens
    completion_tokens = response_data.usage.completion_tokens
    
    # Use the core client to record the usage
    run_async_in_thread(
        client.async_record_usage(
            model=response_data.model,
            prompt_tokens=prompt_tokens,
            completion_tokens=completion_tokens,
            **metadata
        )
    )
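The extraction step can be exercised without the real library by using a stand-in response object. SimpleNamespace here mimics the .model and .usage attribute shape shown above; an actual provider middleware would receive the provider SDK's own response type:

```python
from types import SimpleNamespace

# Stand-in for a provider response with the attribute shape used above.
response = SimpleNamespace(
    model="gpt-4o",
    usage=SimpleNamespace(prompt_tokens=500, completion_tokens=200),
)

def extract_usage(response_data):
    """Normalize a provider response into the keyword arguments the core client expects."""
    return {
        "model": response_data.model,
        "prompt_tokens": response_data.usage.prompt_tokens,
        "completion_tokens": response_data.usage.completion_tokens,
    }
```

Keeping extraction in one small function per provider is what lets the core client stay provider-agnostic.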

Compatibility

  • Python 3.8+
  • Compatible with all Revenium provider-specific middleware implementations

Logging

This module uses Python's standard logging system. You can control the log level by setting the REVENIUM_LOG_LEVEL environment variable:

# Enable debug logging
export REVENIUM_LOG_LEVEL=DEBUG

# Or when running your script
REVENIUM_LOG_LEVEL=DEBUG python your_script.py

Available log levels:

  • DEBUG: Detailed debugging information
  • INFO: General information (default)
  • WARNING: Warning messages only
  • ERROR: Error messages only
  • CRITICAL: Critical error messages only
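The same environment variable can also be honored from Python code. This is a sketch of how a library typically maps such a variable onto the standard logging module, not the library's actual startup code; the logger name "revenium_middleware" is assumed from the package name:

```python
import logging
import os

# Read the level name from the environment, defaulting to INFO as documented.
level_name = os.environ.get("REVENIUM_LOG_LEVEL", "INFO").upper()

# Fall back to INFO if the value is not a recognized level name.
logger = logging.getLogger("revenium_middleware")
logger.setLevel(getattr(logging, level_name, logging.INFO))
```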

Documentation

For detailed documentation, visit docs.revenium.io

Contributing

See CONTRIBUTING.md

Code of Conduct

See CODE_OF_CONDUCT.md

Security

See SECURITY.md

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Built by the Revenium team
