
Revenium Middleware for Ollama


A middleware library for metering and monitoring Ollama API usage in Python applications.

Features

  • Precise Usage Tracking: Monitor tokens, costs, and request counts across all Ollama API endpoints
  • Seamless Integration: Drop-in middleware that works with minimal code changes
  • Flexible Configuration: Customize metering behavior to suit your application needs
  • Rich Metadata Support: Track usage by subscriber, organization, task type, and more

Installation

pip install revenium-middleware-ollama

Usage

Run Your First Example

Run the getting started example:

python examples/getting_started.py

For more examples, see examples/README.md.

Zero-Config Integration

Simply export your REVENIUM_METERING_API_KEY and import the middleware. Your Ollama calls will be metered automatically:

import ollama
import revenium_middleware_ollama

# Ensure REVENIUM_METERING_API_KEY environment variable is set

response: ollama.ChatResponse = ollama.chat(
    model='qwen2.5:0.5b',
    messages=[
        {
            'role': 'user',
            'content': 'Why is the sky blue?',
        },
    ])
print(response['message']['content'])

The middleware automatically intercepts Ollama API calls and sends metering data to Revenium without requiring any changes to your existing code. Make sure to set the REVENIUM_METERING_API_KEY environment variable for authentication with the Revenium service.
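Because the key is read from the environment, a missing or empty value is easiest to catch before any Ollama calls are made. The following guard is a minimal sketch using only the standard library; the helper name and error message are illustrative, not part of the middleware:

```python
import os

def require_metering_key(env=os.environ) -> str:
    """Return the Revenium metering key, or raise a clear error if unset."""
    key = env.get("REVENIUM_METERING_API_KEY", "")
    if not key:
        raise RuntimeError(
            "REVENIUM_METERING_API_KEY is not set; "
            "export it before importing revenium_middleware_ollama"
        )
    return key

# Placeholder key for illustration only
key = require_metering_key({"REVENIUM_METERING_API_KEY": "hak_example"})
```

Calling this once at startup turns a silent metering failure into an immediate, actionable error.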

Enhanced Tracking with Metadata

For more granular usage tracking and detailed reporting, add the usage_metadata parameter:

import ollama
import revenium_middleware_ollama

response = ollama.chat(
    model='qwen2.5:0.5b',
    messages=[
        {
            'role': 'user',
            'content': 'Why is the sky blue?',
        },
    ],
    usage_metadata={
         "trace_id": "conv-28a7e9d4",
         "task_type": "summarize-customer-issue",
         "subscriber": {
             "id": "subscriberid-1234567890",
             "email": "user@example.com",
             "credential": {
                 "name": "engineering-api-key",
                 "value": "sk-1234567890abcdef"
             }
         },
         "organization_id": "acme-corp",
         "subscription_id": "startup-plan-Q1",
         "product_id": "saas-app-gold-tier",
         "agent": "support-agent",
    },
)
print(response['message']['content'])

OpenAI Compatibility Mode

The middleware can also meter calls made through Ollama's OpenAI compatibility mode. This path uses the companion revenium-middleware-openai package, with the OpenAI client pointed at your local Ollama server:

import openai
import revenium_middleware_openai

openai.api_key = 'ollama'
openai.base_url = 'http://localhost:11434/v1/'
question = "Why is the sky blue?"

response = openai.chat.completions.create(
    model="gemma2:2b",
    messages=[
       {"role": "system", "content": "You are a helpful assistant."},
       {"role": "user", "content": question}
    ],
    usage_metadata={
         "trace_id": "conv-28a7e9d4",
         "task_type": "summarize-customer-issue",
         "subscriber": {
             "id": "subscriberid-1234567890",
             "email": "user@example.com",
             "credential": {
                 "name": "engineering-api-key",
                 "value": "sk-1234567890abcdef"
             }
         },
         "organization_id": "acme-corp",
         "subscription_id": "startup-plan-Q1",
         "product_id": "saas-app-gold-tier",
         "agent": "support-agent",
    }
)

print(response)

Metadata Fields

The usage_metadata parameter supports the following fields:

| Field | Description | Use Case |
| --- | --- | --- |
| trace_id | Unique identifier for a conversation or session | Group multi-turn conversations into a single event for performance & cost tracking |
| task_type | Classification of the AI operation by type of work | Track cost & performance by purpose (e.g., classification, summarization) |
| subscriber | Object containing subscriber information | Track cost & performance by individual users or API keys |
| subscriber.id | The ID of the subscriber from non-Revenium systems | Track cost & performance by individual users (when customers are anonymous or tracking by email is not desired) |
| subscriber.email | The email address of the subscriber | Track cost & performance by individual users (when customer email addresses are known) |
| subscriber.credential | Object containing credential information | Track cost & performance by API key |
| subscriber.credential.name | An alias for an API key used by one or more users | Track cost & performance by individual API keys |
| subscriber.credential.value | The key value associated with the subscriber (i.e., an API key) | Track cost & performance by API key value (normally used when the only identifier for a user is an API key) |
| organization_id | Customer or department ID from non-Revenium systems | Track cost & performance by customers or business units |
| subscription_id | Reference to a billing plan in non-Revenium systems | Track cost & performance by a specific subscription |
| product_id | Your product or feature making the AI call | Track cost & performance across different products |
| agent | Identifier for the specific AI agent | Track cost & performance by AI agent |
| response_quality_score | Custom quality rating for the AI response (0.0-1.0 scale) | Track user satisfaction or automated quality metrics (e.g., RAGAS, human feedback) for model performance analysis |

All metadata fields are optional. Adding them enables more detailed reporting and analytics in Revenium.
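Since every field is optional, the metadata dict can be assembled incrementally. The helper below is hypothetical (the middleware itself just accepts a plain dict); it drops unset fields and clamps response_quality_score to the documented 0.0-1.0 scale:

```python
def build_usage_metadata(quality_score=None, **fields):
    """Assemble a usage_metadata dict, dropping unset fields.

    Hypothetical helper, not part of the middleware API.
    """
    metadata = {k: v for k, v in fields.items() if v not in (None, "", {})}
    if quality_score is not None:
        # Clamp to the documented 0.0-1.0 scale
        metadata["response_quality_score"] = max(0.0, min(1.0, quality_score))
    return metadata

meta = build_usage_metadata(
    trace_id="conv-28a7e9d4",
    task_type="summarize-customer-issue",
    organization_id="acme-corp",
    subscriber=None,    # unset fields are dropped
    quality_score=1.3,  # out-of-range score is clamped to 1.0
)
```

The resulting dict can be passed directly as the usage_metadata argument shown above.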

Response Attributes

Response objects include a _revenium_transaction_id attribute for correlating requests with Revenium metering records:

response = ollama.chat(
    model='qwen2.5:0.5b',
    messages=[{'role': 'user', 'content': 'Hello!'}]
)

# Access transaction ID if needed for debugging/correlation
transaction_id = response._revenium_transaction_id

Configuration

Configuration Variables

| Variable | Required | Description |
| --- | --- | --- |
| REVENIUM_METERING_API_KEY | Yes | Your Revenium API key for authentication with the metering service |
| REVENIUM_METERING_BASE_URL | No | Revenium API base URL. Defaults to https://api.revenium.ai |
| REVENIUM_LOG_LEVEL | No | Log level for middleware output. Options: DEBUG, INFO (default), WARNING, ERROR, CRITICAL |

Environment Setup Examples

Using a .env file (Recommended):

First, copy the example file:

cp .env.example .env

Then edit .env with your actual API key:

# .env
REVENIUM_METERING_API_KEY=hak_your_api_key_here
REVENIUM_METERING_BASE_URL=https://api.revenium.ai
REVENIUM_LOG_LEVEL=INFO

⚠️ Security Note: Never commit your .env file to version control. It's already included in .gitignore.

Then load it in your Python code:

from dotenv import load_dotenv
load_dotenv()

import ollama
import revenium_middleware_ollama

# Your Ollama calls will now be metered

Compatibility

  • Python 3.8+
  • Ollama Python SDK 1.0.0+

Supported Models

This middleware works with any Ollama model. Examples in this package use:

  • qwen2.5:0.5b, qwen2.5:1.5b (Qwen models)
  • llama3.1, llama3.2 (Llama models)
  • gemma2, codellama (Other popular models)

For the complete list of available models, see the Ollama Model Library.

For cost tracking across providers, see the Revenium Model Catalog.

Logging

This module uses Python's standard logging system. You can control the log level by setting the REVENIUM_LOG_LEVEL environment variable:

# Enable debug logging
export REVENIUM_LOG_LEVEL=DEBUG

# Or when running your script
REVENIUM_LOG_LEVEL=DEBUG python your_script.py

Available log levels:

  • DEBUG: Detailed debugging information
  • INFO: General information (default)
  • WARNING: Warning messages only
  • ERROR: Error messages only
  • CRITICAL: Critical error messages only

Documentation

For detailed documentation, visit docs.revenium.io

Contributing

See CONTRIBUTING.md

Code of Conduct

See CODE_OF_CONDUCT.md

Security

See SECURITY.md

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Built by the Revenium team
