A Python library that meters Anthropic usage to Revenium.
# Revenium Middleware for Anthropic
A production-ready middleware library for metering and monitoring Anthropic API usage in Python applications. Supports both the direct Anthropic API and AWS Bedrock, with comprehensive streaming support.
## Features

- **Precise Usage Tracking**: Monitor tokens, costs, and request counts for Anthropic chat completions
- **Seamless Integration**: Drop-in middleware that works with minimal code changes
- **AWS Bedrock Support**: Full integration with automatic detection and metering for Anthropic models via AWS Bedrock
- **Complete Streaming Support**: Full streaming functionality for both the Anthropic API and AWS Bedrock
- **Hybrid Initialization**: Auto-initialization on import, plus explicit control for advanced configuration
- **Thread-Safe**: Production-ready with comprehensive thread safety for concurrent applications
- **Flexible Configuration**: Customize metering behavior to suit your application's needs
## What's Supported

| Feature | Direct Anthropic API | AWS Bedrock |
|---|---|---|
| Chat Completion | ✅ Full support | ✅ Full support |
| Streaming | ✅ Full support | ✅ Full support |
| Token Metering | ✅ Automatic | ✅ Automatic |
| Metadata Tracking | ✅ Full support | ✅ Full support |
| Thread Safety | ✅ Production-ready | ✅ Production-ready |
| Auto-initialization | ✅ Zero-config | ✅ Zero-config |

**Note:** The middleware only wraps the `messages.create` and `messages.stream` endpoints. Other Anthropic SDK features work normally but aren't metered.
## Installation

```shell
# Basic installation
pip install revenium-middleware-anthropic

# With AWS Bedrock support
pip install revenium-middleware-anthropic[bedrock]
```
## Quick Start

### Zero-Config Integration

Simply set your environment variables and import the middleware. Your Anthropic calls will be metered automatically:

```python
import anthropic
import revenium_middleware_anthropic  # Auto-initializes on import

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=100,
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life, the universe and everything?"
        }
    ]
)
print(message.content[0].text)
```
The middleware automatically intercepts Anthropic API calls and sends metering data to Revenium without requiring any changes to your existing code.
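The interception idea itself is easy to sketch. The snippet below is a simplified, hypothetical illustration of wrapping a call so usage is recorded after it returns; `metered`, `fake_create`, and the record list are stand-ins for illustration, not the middleware's actual internals:

```python
import functools

def metered(send_fn, record):
    """Wrap an API-call function so each call's usage is appended to record."""
    @functools.wraps(send_fn)
    def wrapper(*args, **kwargs):
        # Pop usage_metadata so the underlying client never sees the extra kwarg
        metadata = kwargs.pop("usage_metadata", None)
        response = send_fn(*args, **kwargs)
        record.append({"metadata": metadata, "usage": response["usage"]})
        return response
    return wrapper

# Hypothetical stand-in for client.messages.create
def fake_create(**kwargs):
    return {"content": "hi", "usage": {"input_tokens": 3, "output_tokens": 1}}

records = []
create = metered(fake_create, records)
create(model="claude-3-5-sonnet-20241022", usage_metadata={"trace_id": "t-1"})
print(records[0]["usage"])  # {'input_tokens': 3, 'output_tokens': 1}
```

Because the wrapper preserves the original signature and return value, existing call sites keep working unchanged, which is what makes the drop-in pattern possible.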
## Hybrid Initialization

The middleware supports both automatic and explicit initialization:

```python
import revenium_middleware_anthropic

# Option 1: Auto-initialization (recommended for most users)
# Just import and use - the middleware activates automatically

# Option 2: Explicit control (for advanced configuration)
if not revenium_middleware_anthropic.is_initialized():
    success = revenium_middleware_anthropic.initialize()
    if not success:
        print("Configuration needed")
```
### Enhanced Tracking with Metadata

For more granular usage tracking and detailed reporting, add the `usage_metadata` parameter:

```python
import anthropic
import revenium_middleware_anthropic

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=100,
    messages=[
        {
            "role": "user",
            "content": "Explain machine learning briefly."
        }
    ],
    usage_metadata={
        "trace_id": "conv-28a7e9d4",
        "task_type": "summarize-customer-issue",
        "subscriber": {
            "id": "subscriberid-1234567890",
            "email": "user@example.com",
            "credential": {
                "name": "engineering-api-key",
                "value": "sk-ant-api03-..."
            }
        },
        "organization_id": "acme-corp",
        "agent": "support-agent",
    }
)
print(message.content[0].text)
```
## AWS Bedrock Integration

This middleware provides complete AWS Bedrock integration with automatic detection and full streaming support, enabling you to meter token usage while routing requests through Amazon's infrastructure.

### Installation for Bedrock

To use the AWS Bedrock integration, install with the `bedrock` extra:

```shell
pip install revenium-middleware-anthropic[bedrock]
```
### Automatic Provider Detection

The middleware automatically chooses between Bedrock and the direct Anthropic API:

| Detection Method | When Used | Example |
|---|---|---|
| AWS Credentials | When AWS credentials are configured and accessible | `aws configure` or IAM roles |
| Base URL Detection | When `base_url` contains `amazonaws.com` | Custom Bedrock endpoints |
| Environment Variables | When AWS environment variables are set | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` |
| Default | When none of the above apply | Standard Anthropic API |

**Key Point:** The middleware defaults to the direct Anthropic API for safety. Bedrock is only used when explicitly configured or detected.
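The precedence described above can be sketched as a small decision function. This is an illustrative model of the detection rules, not the middleware's actual code; the parameter names are hypothetical:

```python
def detect_provider(base_url=None, aws_creds_configured=False,
                    env=None, bedrock_disabled=False):
    """Illustrative provider selection following the detection table."""
    env = env or {}
    if bedrock_disabled:
        return "ANTHROPIC"                # REVENIUM_BEDROCK_DISABLE=1
    if base_url and "amazonaws.com" in base_url:
        return "AWS"                      # explicit Bedrock endpoint
    if aws_creds_configured:
        return "AWS"                      # aws configure / IAM roles
    if "AWS_ACCESS_KEY_ID" in env and "AWS_SECRET_ACCESS_KEY" in env:
        return "AWS"                      # AWS environment variables
    return "ANTHROPIC"                    # safe default

print(detect_provider())  # ANTHROPIC
print(detect_provider(base_url="https://bedrock-runtime.us-east-1.amazonaws.com"))  # AWS
```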
### Quick Start Examples

#### Basic Usage (Direct Anthropic API)

```python
import anthropic
import revenium_middleware_anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=50
)
# Automatically metered with provider="ANTHROPIC"
```
#### AWS Bedrock (Automatic Detection)

```python
# Configure AWS credentials first (aws configure, IAM roles, etc.)
import anthropic
import revenium_middleware_anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-haiku-20240307",  # Auto-maps to a Bedrock model
    messages=[{"role": "user", "content": "Hello from Bedrock!"}],
    max_tokens=50
)
# Automatically metered with provider="AWS" when Bedrock is detected
```
#### Bedrock with Explicit Base URL

```python
import anthropic
import revenium_middleware_anthropic

# Force Bedrock by specifying base_url
client = anthropic.Anthropic(
    base_url="https://bedrock-runtime.us-east-1.amazonaws.com"
)
response = client.messages.create(
    model="claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "What is AWS Bedrock?"}],
    max_tokens=100
)
# Guaranteed to use Bedrock with provider="AWS"
```

See the `examples/` directory for comprehensive examples:

- `examples/anthropic-basic.py` - Simple zero-config usage (direct Anthropic API)
- `examples/anthropic-advanced.py` - Production-ready with metadata (direct Anthropic API)
- `examples/anthropic-streaming.py` - Streaming functionality (direct Anthropic API)
- `examples/anthropic-bedrock.py` - Complete AWS Bedrock integration (all examples via Bedrock)
## Configuration

### Environment Variables

| Variable | Description | Default |
|---|---|---|
| `REVENIUM_METERING_API_KEY` | Your Revenium API key | Required |
| `REVENIUM_METERING_BASE_URL` | Revenium API endpoint | Required |
| `AWS_REGION` | AWS region for Bedrock | `us-east-1` |
| `REVENIUM_BEDROCK_DISABLE` | Set to `1` to disable Bedrock support | Not set |
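For local experiments, these variables can also be set in-process before the middleware is imported, so auto-initialization picks them up. The values below are placeholders, not real credentials or endpoints:

```python
import os

# Placeholder values - substitute your real Revenium credentials
os.environ["REVENIUM_METERING_API_KEY"] = "your-revenium-api-key"
os.environ["REVENIUM_METERING_BASE_URL"] = "https://api.revenium.example/meter"
os.environ.setdefault("AWS_REGION", "us-east-1")

# Import only after the environment is configured:
# import revenium_middleware_anthropic
```

In production, prefer setting these in your deployment environment rather than in code, so secrets stay out of source control.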
### AWS Authentication

The middleware uses the standard AWS credential chain:

- Environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`)
- AWS credentials file (`~/.aws/credentials`)
- IAM roles (for EC2/Lambda/ECS)
- AWS SSO

Required AWS permissions:

- `bedrock:InvokeModel` (for non-streaming requests)
- `bedrock:InvokeModelWithResponseStream` (for streaming requests)
## Supported Models

The middleware automatically maps Anthropic model names to Bedrock model IDs:

| Anthropic Model | Bedrock Model ID |
|---|---|
| `claude-3-opus-20240229` | `anthropic.claude-3-opus-20240229-v1:0` |
| `claude-3-sonnet-20240229` | `anthropic.claude-3-sonnet-20240229-v1:0` |
| `claude-3-haiku-20240307` | `us.anthropic.claude-3-5-haiku-20241022-v1:0` |
| `claude-3-5-sonnet-20240620` | `anthropic.claude-3-5-sonnet-20240620-v1:0` |
| `claude-3-5-sonnet-20241022` | `anthropic.claude-3-5-sonnet-20241022-v2:0` |
| `claude-3-5-haiku-20241022` | `anthropic.claude-3-5-haiku-20241022-v1:0` |

For other models, the middleware uses the format `anthropic.{model_name}`.
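The mapping above amounts to a lookup with a formatting fallback. The following is an illustrative helper built from the documented table, not the middleware's actual code:

```python
# Mapping taken from the table above
BEDROCK_MODEL_IDS = {
    "claude-3-opus-20240229": "anthropic.claude-3-opus-20240229-v1:0",
    "claude-3-sonnet-20240229": "anthropic.claude-3-sonnet-20240229-v1:0",
    "claude-3-haiku-20240307": "us.anthropic.claude-3-5-haiku-20241022-v1:0",
    "claude-3-5-sonnet-20240620": "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "claude-3-5-sonnet-20241022": "anthropic.claude-3-5-sonnet-20241022-v2:0",
    "claude-3-5-haiku-20241022": "anthropic.claude-3-5-haiku-20241022-v1:0",
}

def to_bedrock_model_id(model_name: str) -> str:
    """Look up the Bedrock ID, falling back to the documented format."""
    return BEDROCK_MODEL_IDS.get(model_name, f"anthropic.{model_name}")

print(to_bedrock_model_id("claude-3-opus-20240229"))
# anthropic.claude-3-opus-20240229-v1:0
```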
## Streaming Support

The middleware provides complete streaming support for both the direct Anthropic API and AWS Bedrock, with identical interfaces and automatic provider detection.

### Features
- Universal Interface: Same code works with both Anthropic API and AWS Bedrock
- Automatic Detection: Provider routing happens transparently
- Complete Token Tracking: Accurate token counting and metering for streaming responses
- Thread-Safe: Production-ready concurrent streaming support
- Graceful Fallback: Automatic fallback to direct API if Bedrock streaming fails
### Basic Streaming Example

```python
import anthropic
import revenium_middleware_anthropic

client = anthropic.Anthropic()  # Auto-detects provider

# Streaming works identically with both providers
with client.messages.stream(
    model="claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "Count from 1 to 5"}],
    max_tokens=50,
    usage_metadata={
        "trace_id": "streaming-demo-001",
        "task_type": "streaming-chat",
        "organization_id": "my-org"
    }
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

# Get final usage information
final_message = stream.get_final_message()
print(f"\nTokens: {final_message.usage.input_tokens} + {final_message.usage.output_tokens}")
```
### Bedrock Streaming Example

```python
import anthropic
import revenium_middleware_anthropic

# Force Bedrock by specifying base_url
client = anthropic.Anthropic(
    base_url="https://bedrock-runtime.us-east-1.amazonaws.com"
)

with client.messages.stream(
    model="claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "Write a haiku about streaming data"}],
    max_tokens=100,
    usage_metadata={
        "trace_id": "bedrock-stream-001",
        "task_type": "bedrock-streaming",
        "organization_id": "aws-demo"
    }
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

print("\n✅ Bedrock streaming completed with automatic token metering!")
```
See the `examples/` directory for more code samples covering both streaming and non-streaming AI calls.
## Metadata Fields

The `usage_metadata` parameter supports the following fields for detailed tracking:

| Field | Description | Use Case |
|---|---|---|
| `trace_id` | Unique identifier for a conversation or session | Group multi-turn conversations for performance & cost tracking |
| `task_type` | Classification of the AI operation by type of work | Track cost & performance by purpose (e.g., classification, summarization) |
| `subscriber` | Object containing subscriber information | Track cost & performance by individual users and their credentials |
| `subscriber.id` | The ID of the subscriber in non-Revenium systems | Track cost & performance by individual users (when customers are anonymous or tracking by email is not desired) |
| `subscriber.email` | The email address of the subscriber | Track cost & performance by individual users (when customer email addresses are known) |
| `subscriber.credential` | Object containing credential information | Track cost & performance by API keys and credentials |
| `subscriber.credential.name` | An alias for an API key used by one or more users | Track cost & performance by individual API keys |
| `subscriber.credential.value` | The key value associated with the subscriber (i.e., an API key) | Track cost & performance by API key value (normally used when the only identifier for a user is an API key) |
| `organization_id` | Customer or department ID from non-Revenium systems | Track cost & performance by customers or business units |
| `subscription_id` | Reference to a billing plan in non-Revenium systems | Track cost & performance by a specific subscription |
| `product_id` | Your product or feature making the AI call | Track cost & performance across different products |
| `agent` | Identifier for the specific AI agent | Track cost & performance by AI agent |
| `response_quality_score` | The quality of the AI response (0..1) | Track AI response quality |
All metadata fields are optional. Adding them enables more detailed reporting and analytics in Revenium.
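Because every field is optional, one convenient pattern is to assemble the dictionary and drop unset entries before passing it as `usage_metadata`. This is a hypothetical helper for illustration, not part of the library:

```python
def build_usage_metadata(**fields):
    """Return a usage_metadata dict containing only the fields that were set."""
    return {key: value for key, value in fields.items() if value is not None}

metadata = build_usage_metadata(
    trace_id="conv-28a7e9d4",
    task_type="summarize-customer-issue",
    organization_id="acme-corp",
    agent=None,  # unset fields are dropped
)
print(metadata)
# {'trace_id': 'conv-28a7e9d4', 'task_type': 'summarize-customer-issue', 'organization_id': 'acme-corp'}
```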
## Examples

The `examples/` directory contains practical demonstrations of all middleware features:

| Example | Description | Key Features |
|---|---|---|
| `anthropic-basic.py` | Zero-config setup | Auto-initialization, basic metering (direct API) |
| `anthropic-advanced.py` | Production template | Custom metadata, detailed tracking (direct API) |
| `anthropic-streaming.py` | Streaming responses | Real-time streaming, token tracking (direct API) |
| `anthropic-bedrock.py` | AWS Bedrock integration | All features via Bedrock: chat, metadata, streaming |
Each example includes:

- ✅ Working code that you can run immediately
- ✅ Environment setup with `.env` file loading
- ✅ Error handling and graceful fallbacks
- ✅ Detailed output showing what gets tracked
- ✅ Comments explaining key concepts

Quick start: `python examples/anthropic-basic.py`
## Troubleshooting

### Common Issues

| Issue | Solution |
|---|---|
| "No module named 'boto3'" | Install with Bedrock support: `pip install revenium-middleware-anthropic[bedrock]` |
| Requests go to Anthropic instead of Bedrock | Verify AWS credentials: `aws sts get-caller-identity` |
| "AccessDenied" errors | Ensure AWS credentials have the `bedrock:InvokeModel` and `bedrock:InvokeModelWithResponseStream` permissions |
| Model not available | Check whether Claude models are available in your AWS region |
| Middleware not working | Verify that `REVENIUM_METERING_API_KEY` and `REVENIUM_METERING_BASE_URL` are set |
| Streaming errors | Check AWS credentials; the middleware automatically falls back to the direct API |
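For the first issue in the table, one quick way to confirm whether Bedrock support is installed is to check for `boto3` without importing it:

```python
import importlib.util

# find_spec returns None when the package is not installed
has_boto3 = importlib.util.find_spec("boto3") is not None

if has_boto3:
    print("boto3 found - Bedrock support available")
else:
    print("boto3 missing - run: pip install revenium-middleware-anthropic[bedrock]")
```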
### Debug Mode

Enable debug logging to see provider detection and routing decisions:

```shell
export REVENIUM_LOG_LEVEL=DEBUG
python your_script.py
```

### Force Direct Anthropic API

To disable Bedrock detection temporarily:

```shell
export REVENIUM_BEDROCK_DISABLE=1
python your_script.py
```
### Check Initialization Status

```python
import revenium_middleware_anthropic

if revenium_middleware_anthropic.is_initialized():
    print("✅ Middleware is ready")
else:
    print("⚠️ Middleware needs configuration")
```
## Compatibility

- Python 3.8+
- Anthropic Python SDK (latest version recommended)
- AWS Bedrock (with `boto3>=1.34.0` when using the `[bedrock]` extra)
- Thread-safe (production-ready for concurrent applications)
## Logging

Control logging with the `REVENIUM_LOG_LEVEL` environment variable:

```shell
# Enable debug logging
export REVENIUM_LOG_LEVEL=DEBUG
python your_script.py

# Or inline
REVENIUM_LOG_LEVEL=DEBUG python your_script.py
```

Available log levels:

- `DEBUG`: Detailed debugging information (provider detection, routing decisions)
- `INFO`: General information (default)
- `WARNING`: Warning messages only
- `ERROR`: Error messages only
## Contributing

See CONTRIBUTING.md

## Security

See SECURITY.md

## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- Built by the Revenium team