
🤖 Revenium Middleware for OpenAI

A middleware library for metering and monitoring OpenAI API usage in Python applications. 🐍✨

✨ Features

  • 📊 Precise Usage Tracking: Monitor tokens, costs, and request counts across all OpenAI API endpoints
  • 🔌 Seamless Integration: Drop-in middleware that works with minimal code changes
  • ⚙️ Flexible Configuration: Customize metering behavior to suit your application needs

📥 Installation

pip install revenium-middleware-openai

🔧 Usage

🔄 Zero-Config Integration

Simply export your REVENIUM_METERING_API_KEY and import the middleware. Your OpenAI calls will be metered automatically:

import openai
import revenium_middleware_openai

# Ensure REVENIUM_METERING_API_KEY environment variable is set

response = openai.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "What is the answer to life, the universe and everything?",
        },
    ],
    max_tokens=500,
)

print(response.choices[0].message.content)

The middleware automatically intercepts OpenAI API calls and sends metering data to Revenium; beyond the single import, no changes to your existing code are required. Make sure the REVENIUM_METERING_API_KEY environment variable is set so the middleware can authenticate with the Revenium service.
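If you prefer to fail fast when the key is missing, a small pre-flight check can run before the import. This is a sketch only; `require_metering_key` is a hypothetical helper, not part of the middleware:

```python
import os


def require_metering_key(env=os.environ):
    """Return the Revenium metering key, raising early if it is unset."""
    key = env.get("REVENIUM_METERING_API_KEY")
    if not key:
        raise RuntimeError(
            "REVENIUM_METERING_API_KEY is not set; the middleware "
            "cannot authenticate with the Revenium service"
        )
    return key
```

Calling this before `import revenium_middleware_openai` surfaces a missing key as a clear error at startup rather than at request time.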

📈 Enhanced Tracking with Metadata

For more granular usage tracking and detailed reporting, add the usage_metadata parameter:

import openai
import revenium_middleware_openai

response = openai.chat.completions.create(
    model="gpt-4",  # You can change this to other models like "gpt-3.5-turbo"
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "What is the meaning of life, the universe and everything?",
        },
    ],
    max_tokens=500,
    usage_metadata={
        "trace_id": "conv-28a7e9d4-1c3b-4e5f-8a9b-7d1e3f2c1b4a",
        "task_id": "chat-summary-af23c910",
        "task_type": "text-classification",
        "subscriber_identity": "customer-email@example.com",
        "organization_id": "acme-corporation-12345",
        "subscription_id": "startup-plan-quarterly-2025-Q1",
        "product_id": "intelligent-document-processor-v3",
        "source_id": "mobile-app-ios-v4.2",
        "ai_provider_key_name": "openai-production-key1",
        "agent": "customer-support-assistant-v2",
    },
)
print(response.choices[0].message.content)

🏷️ Metadata Fields

The usage_metadata parameter supports the following fields:

  • trace_id: Unique identifier for a conversation or session. Use it to track multi-turn conversations.
  • task_id: Identifier for a specific AI task. Use it to group related API calls for a single task.
  • task_type: Classification of the AI operation. Use it to categorize usage by purpose (e.g., classification, summarization).
  • subscriber_identity: End-user identifier. Use it to track usage by individual users.
  • organization_id: Customer or department identifier. Use it to allocate costs to business units.
  • subscription_id: Reference to a billing plan. Use it to associate usage with specific subscriptions.
  • product_id: The product or feature using AI. Use it to track usage across different products.
  • source_id: Origin of the request. Use it to monitor usage by platform or app version.
  • ai_provider_key_name: Identifier for the API key used. Use it to track usage by different API keys.
  • agent: Identifier for the specific AI agent. Use it to compare performance across different AI agents.

All metadata fields are optional. Adding them enables more detailed reporting and analytics in Revenium.
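Since every field is optional, one way to keep call sites tidy is a small builder that drops unset fields and catches typos in field names. This is a sketch; `build_usage_metadata` is a hypothetical helper, though the field names come from the table above:

```python
def build_usage_metadata(**fields):
    """Build a usage_metadata dict, dropping any fields left as None."""
    allowed = {
        "trace_id", "task_id", "task_type", "subscriber_identity",
        "organization_id", "subscription_id", "product_id",
        "source_id", "ai_provider_key_name", "agent",
    }
    unknown = set(fields) - allowed
    if unknown:
        raise ValueError(f"Unknown usage_metadata fields: {sorted(unknown)}")
    # Only non-None values are included in the resulting dict
    return {k: v for k, v in fields.items() if v is not None}
```

The result can be passed directly as `usage_metadata=build_usage_metadata(trace_id=..., organization_id=...)`.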

🔄 Compatibility

  • 🐍 Python 3.8+
  • 🤖 OpenAI Python SDK 1.0.0+
  • 🌐 Works with all OpenAI models and endpoints

🔍 Logging

This module uses Python's standard logging system. You can control the log level by setting the REVENIUM_LOG_LEVEL environment variable:

# Enable debug logging
export REVENIUM_LOG_LEVEL=DEBUG

# Or when running your script
REVENIUM_LOG_LEVEL=DEBUG python your_script.py

Available log levels:

  • DEBUG: Detailed debugging information
  • INFO: General information (default)
  • WARNING: Warning messages only
  • ERROR: Error messages only
  • CRITICAL: Critical error messages only
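These names map onto Python's standard logging levels. A sketch of how the variable can be resolved to a numeric level, illustrative only and not the middleware's actual code:

```python
import logging
import os


def resolve_log_level(env=os.environ):
    """Map REVENIUM_LOG_LEVEL to a numeric logging level, defaulting to INFO."""
    name = env.get("REVENIUM_LOG_LEVEL", "INFO").upper()
    level = logging.getLevelName(name)
    # getLevelName returns an int for known names, a string for unknown ones
    return level if isinstance(level, int) else logging.INFO
```

An unrecognized value falls back to INFO, matching the documented default.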

📚 Documentation

Full documentation is available at https://revenium-middleware-openai.readthedocs.io/

👥 Contributing

Contributions are welcome! Please check out our contributing guidelines for details.

  1. 🍴 Fork the repository
  2. 🌿 Create your feature branch (git checkout -b feature/amazing-feature)
  3. 💾 Commit your changes (git commit -m 'Add some amazing feature')
  4. 🚀 Push to the branch (git push origin feature/amazing-feature)
  5. 🔍 Open a Pull Request

📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

🙏 Acknowledgments

  • 🔥 Thanks to the OpenAI team for creating an excellent API
  • 💖 Built with ❤️ by the Revenium team
