
LLM Wrapper

A comprehensive Python wrapper for Large Language Models with database integration and usage tracking. Supports multiple database backends (PostgreSQL, MySQL, MongoDB) and provides detailed analytics for LLM usage.

Features

  • 🚀 Easy Integration: Simple API for interacting with various LLM services
  • 📊 Usage Tracking: Comprehensive logging and analytics for all LLM requests
  • 💾 Multi-Database Support: PostgreSQL, MySQL, and MongoDB backends
  • ⚡ High Performance: Optimized for concurrent requests and high throughput
  • 🔒 Secure: Built-in security features and API key management
  • 📈 Analytics: Detailed usage statistics and reporting
  • 🐳 Production Ready: Robust error handling and logging

Installation

Basic Installation

pip install llm-wrapper

Development Installation

pip install llm-wrapper[dev]

With All Optional Dependencies

pip install llm-wrapper[dev,test,docs]

Quick Start

Basic Usage

from llm_wrapper import LLMWrapper

# Configure your database
db_config = {
    'type': 'postgresql',
    'dbname': 'llm_wrapper_db',
    'user': 'postgres',
    'password': 'your_password',
    'host': 'localhost',
    'port': '5432'
}

# Initialize the wrapper
wrapper = LLMWrapper(
    service_url="https://your-llm-service.com",
    api_key="your-api-key",
    db_config=db_config,
    deployment_name="gpt-4",
    api_version="2024-12-01-preview",
    default_model='gpt-4'
)

# Send a request
response = wrapper.send_request(
    input_text="What are the benefits of renewable energy?",
    customer_id=1,
    organization_id=1,
    temperature=0.7,
    max_tokens=2000
)

print(f"Response: {response['output_text']}")
print(f"Tokens used: {response['total_tokens']}")

# Get usage statistics
stats = wrapper.get_usage_stats()
print(f"Total requests: {stats['total_requests']}")
print(f"Total tokens: {stats['total_tokens']}")

# Clean up
wrapper.close()

Environment Variables

Create a .env file for easier configuration:

# Database Configuration
DATABASE_TYPE=postgresql
DB_NAME=llm_wrapper_db
DB_USER=postgres
DB_PASSWORD=your_password
DB_HOST=localhost
DB_PORT=5432

# LLM Service Configuration
LLM_SERVICE_URL=https://your-llm-service.com
LLM_API_KEY=your-api-key
LLM_DEPLOYMENT_NAME=gpt-4
LLM_API_VERSION=2024-12-01-preview
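Once these variables are loaded (for example with python-dotenv's `load_dotenv()`), the database settings can be assembled in one place. The helper below is a hypothetical sketch, not part of the package's API; the defaults mirror the PostgreSQL example.

```python
import os

# Hypothetical helper (not part of llm-wrapper's API): assemble db_config
# from the environment variables listed above. Defaults mirror the
# PostgreSQL example; unset variables fall back to them.
def db_config_from_env(env=None):
    env = os.environ if env is None else env
    return {
        'type': env.get('DATABASE_TYPE', 'postgresql'),
        'dbname': env.get('DB_NAME', 'llm_wrapper_db'),
        'user': env.get('DB_USER', 'postgres'),
        'password': env.get('DB_PASSWORD', ''),
        'host': env.get('DB_HOST', 'localhost'),
        'port': env.get('DB_PORT', '5432'),
    }

# Passing a plain dict instead of os.environ makes the helper easy to test.
config = db_config_from_env({'DATABASE_TYPE': 'mysql', 'DB_PORT': '3306'})
```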

Database Configurations

PostgreSQL

db_config = {
    'type': 'postgresql',
    'dbname': 'llm_wrapper_db',
    'user': 'postgres',
    'password': 'your_password',
    'host': 'localhost',
    'port': '5432'
}

MySQL

db_config = {
    'type': 'mysql',
    'dbname': 'llm_wrapper_db',
    'user': 'root',
    'password': 'your_password',
    'host': 'localhost',
    'port': '3306'
}

MongoDB

db_config = {
    'type': 'mongodb',
    'dbname': 'llm_wrapper_db',
    'host': 'localhost',
    'port': 27017,
    'user': 'your_user',
    'password': 'your_password'
}

MongoDB with Connection String

db_config = {
    'type': 'mongodb',
    'dbname': 'llm_wrapper_db',
    'connection_string': 'mongodb://localhost:27017/'
}
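The backend configurations above differ only in the 'type' key, the credentials, and the usual default port, so switching databases is a one-line change. A purely illustrative factory (not provided by the package) can capture that:

```python
# Illustrative only: default ports per supported backend, matching the
# examples above (the MongoDB examples use an integer port).
DEFAULT_PORTS = {'postgresql': '5432', 'mysql': '3306', 'mongodb': 27017}

def make_db_config(db_type, dbname='llm_wrapper_db', **overrides):
    """Build a db_config dict for one of the supported backends."""
    if db_type not in DEFAULT_PORTS:
        raise ValueError(f'unsupported backend: {db_type}')
    config = {
        'type': db_type,
        'dbname': dbname,
        'host': 'localhost',
        'port': DEFAULT_PORTS[db_type],
    }
    config.update(overrides)  # e.g. user, password, connection_string
    return config

pg = make_db_config('postgresql', user='postgres', password='your_password')
```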

Usage Analytics

# Get overall statistics
stats = wrapper.get_usage_stats()

# Get customer-specific statistics
customer_stats = wrapper.get_usage_stats(customer_id=1)

# Get organization-specific statistics
org_stats = wrapper.get_usage_stats(organization_id=1)

# Get statistics for a specific time period
from datetime import datetime, timedelta

start_date = datetime.now() - timedelta(days=7)
end_date = datetime.now()

period_stats = wrapper.get_usage_stats(
    start_date=start_date,
    end_date=end_date
)
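The returned stats dict can be post-processed for reporting. The sketch below assumes only the 'total_requests' and 'total_tokens' keys shown in the Quick Start; any richer schema is not documented here.

```python
def avg_tokens_per_request(stats):
    """Average tokens per request, guarding against an empty period."""
    requests = stats.get('total_requests', 0)
    if not requests:
        return 0.0
    return stats['total_tokens'] / requests

# Example with made-up numbers for a one-week period.
weekly = {'total_requests': 40, 'total_tokens': 52000}
print(f"Avg tokens/request: {avg_tokens_per_request(weekly):.1f}")  # 1300.0
```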

Configuration Options

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| service_url | str | Required | LLM service endpoint URL |
| api_key | str | Required | API key for authentication |
| db_config | dict | Required | Database configuration |
| deployment_name | str | None | LLM deployment name |
| api_version | str | None | API version |
| default_model | str | 'gpt-3.5-turbo' | Default model to use |
| timeout | int | 30 | Request timeout in seconds |
| max_retries | int | 3 | Maximum retry attempts |
| retry_delay | float | 1.0 | Delay between retries |
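The retry parameters interact with the timeout: a request that times out on every attempt can block for roughly (max_retries + 1) × timeout + max_retries × retry_delay seconds. This back-of-the-envelope sketch assumes max_retries counts extra attempts after the first and retry_delay is a fixed pause; the package's exact semantics may differ.

```python
# Worst-case wall time per request under the assumptions stated above.
def worst_case_seconds(timeout=30, max_retries=3, retry_delay=1.0):
    return (max_retries + 1) * timeout + max_retries * retry_delay

print(worst_case_seconds())                           # defaults: 123.0 seconds
print(worst_case_seconds(timeout=10, max_retries=1))  # 21.0 seconds
```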

API Reference

Core Methods

send_request(input_text, customer_id, organization_id, **kwargs)

Send a request to the LLM service.

Parameters:

  • input_text (str): The prompt text
  • customer_id (int): Customer identifier
  • organization_id (int): Organization identifier
  • temperature (float, optional): Sampling temperature (0.0-1.0)
  • max_tokens (int, optional): Maximum tokens in response
  • model (str, optional): Model to use for this request

Returns:

  • dict: Response containing output text, token counts, and metadata
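Only output_text and total_tokens appear in the examples above, so code that consumes the response can stay defensive about other keys. A minimal, hypothetical helper:

```python
def summarize_response(response):
    """One-line summary of a send_request() result dict.

    Only 'output_text' and 'total_tokens' are documented above;
    .get() keeps this safe if either key is missing.
    """
    text = response.get('output_text', '')
    tokens = response.get('total_tokens', 0)
    return f'{len(text)} chars, {tokens} tokens'

print(summarize_response({'output_text': 'Solar power reduces emissions.',
                          'total_tokens': 7}))
```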

get_usage_stats(**filters)

Get usage statistics with optional filtering.

Parameters:

  • customer_id (int, optional): Filter by customer
  • organization_id (int, optional): Filter by organization
  • start_date (datetime, optional): Start date for filtering
  • end_date (datetime, optional): End date for filtering

Returns:

  • dict: Usage statistics including request counts, token usage, and performance metrics

close()

Close database connections and clean up resources.
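Because the wrapper exposes close(), the standard-library contextlib.closing helper can guarantee cleanup even when a request raises (whether LLMWrapper natively supports the with statement is not documented here). A sketch using a stand-in object so it runs without a database or API key:

```python
from contextlib import closing

# Stand-in with the same close() contract as LLMWrapper, used here only
# so the sketch is runnable on its own.
class FakeWrapper:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

wrapper = FakeWrapper()
with closing(wrapper):
    pass  # send requests here; close() runs even if this block raises
```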

Contributing

We welcome contributions! Please see our Contributing Guide for details.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Changelog

See CHANGELOG.md for a list of changes and version history.

Acknowledgments

  • Thanks to all contributors who have helped shape this project
  • Built with love for the AI/ML community

