
Unified interface for multiple LLM providers

Project description


LLM Providers


A unified Python library for interacting with multiple Large Language Model (LLM) providers with a consistent interface.

Features

  • Multi-Provider Support: OpenAI, Claude, Gemini, Groq, DeepSeek, HuggingFace (API and local)
  • Unified Interface: Consistent API across all providers
  • Async Support: Full async/await implementation
  • Streaming: Real-time text generation
  • Caching: Disk-based caching with TTL
  • Rate Limiting: Built-in rate limiting
  • Error Handling: Comprehensive error handling and retries
  • Type Safety: Pydantic models for input validation
  • Extensible: Easy to add new providers

Installation

pip install llmfusion

Supported Providers

Provider      API Support   Local Models   Streaming   Async
OpenAI        ✓             —              ✓           ✓
Claude        ✓             —              ✓           ✓
Gemini        ✓             —              ✓           ✓
Groq          ✓             —              ✓           ✓
DeepSeek      ✓             —              ✓           ✓
HuggingFace   ✓             ✓              ✓           ✓

Quick Start

Basic Usage

from llmfusion import get_provider, LLMConfig, LLMInput

# Configure OpenAI
openai_config = LLMConfig(
    api_key="your-openai-key",
    model_name="gpt-4"
)

# Create client
openai = get_provider("openai", openai_config)

# Create input (named llm_input to avoid shadowing Python's built-in input)
llm_input = LLMInput(
    prompt="Explain quantum computing",
    system_prompt="You are a physics expert",
    temperature=0.7,
    max_tokens=500
)

# Generate response
response = openai.generate(llm_input)
print(response)
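Because every provider exposes the same generate interface, switching providers is just a different first argument to get_provider. A minimal sketch of the factory pattern behind such a function (the class names and registry here are illustrative, not the library's internals):

```python
from typing import Dict, Type

class BaseProvider:
    """Common interface every provider implements."""
    def __init__(self, config: dict):
        self.config = config

    def generate(self, prompt: str) -> str:
        raise NotImplementedError

class OpenAIProvider(BaseProvider):
    def generate(self, prompt: str) -> str:
        return f"[openai:{self.config['model_name']}] {prompt}"

class ClaudeProvider(BaseProvider):
    def generate(self, prompt: str) -> str:
        return f"[claude:{self.config['model_name']}] {prompt}"

# Name -> class registry; adding a provider means adding one entry.
_REGISTRY: Dict[str, Type[BaseProvider]] = {
    "openai": OpenAIProvider,
    "claude": ClaudeProvider,
}

def get_provider(name: str, config: dict) -> BaseProvider:
    """Look up the provider class by name and instantiate it."""
    try:
        return _REGISTRY[name](config)
    except KeyError:
        raise ValueError(f"Unknown provider: {name}")
```

Calling code never changes when the provider does, which is the point of the unified interface.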

Async Usage

import asyncio
from llmfusion import get_provider, LLMConfig, LLMInput


async def main():
    config = LLMConfig(
        api_key="your-api-key",
        model_name="claude-3"
    )
    claude = get_provider("claude", config)

    llm_input = LLMInput(prompt="Write a poem about AI")
    response = await claude.agenerate(llm_input)
    print(response)


asyncio.run(main())
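Because agenerate is a coroutine, many prompts can be issued concurrently with asyncio.gather. The sketch below uses a stub in place of a real provider so the fan-out pattern is visible on its own:

```python
import asyncio

class StubProvider:
    """Stand-in for a provider exposing an async agenerate method."""
    async def agenerate(self, prompt: str) -> str:
        await asyncio.sleep(0)  # yield control, as a real network call would
        return f"response to: {prompt}"

async def generate_all(provider, prompts):
    # Launch every request at once and wait for all results in order.
    return await asyncio.gather(*(provider.agenerate(p) for p in prompts))

results = asyncio.run(generate_all(StubProvider(), ["a", "b", "c"]))
```

With a real provider, swap StubProvider for the object returned by get_provider; the gather pattern is unchanged.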

Streaming

from llmfusion import get_provider, LLMConfig, LLMInput

config = LLMConfig(
    api_key="your-api-key",
    model_name="gpt-4"
)
gpt = get_provider("openai", config)

llm_input = LLMInput(prompt="Explain blockchain technology")
for chunk in gpt.stream(llm_input):
    print(chunk, end="", flush=True)
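If you need the assembled text as well as live output, accumulate the chunks while streaming. This is a generic pattern over any iterable of string chunks (such as the stream shown above):

```python
def stream_and_collect(chunks):
    """Print chunks as they arrive and return the assembled text."""
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    return "".join(parts)

# Works with any iterator of strings, e.g. gpt.stream(llm_input):
text = stream_and_collect(iter(["Block", "chain ", "is a shared ledger."]))
```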

Configuration

LLMConfig Parameters

Parameter        Type   Default            Description
api_key          str    None               Provider API key
model_name       str    Required           Model name/identifier
base_url         str    Provider default   Custom API endpoint
timeout          int    30                 Request timeout in seconds
max_retries      int    3                  Maximum retry attempts
cache_ttl        int    3600               Cache time-to-live in seconds
rate_limit_rpm   int    1000               Requests per minute limit
device           str    "auto"             Device for local models (cpu/cuda)
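The defaults above can be summarized as a plain dataclass. This is only a sketch of the shape of LLMConfig, not the library's actual Pydantic definition:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMConfigSketch:
    """Mirrors the documented LLMConfig parameters and defaults."""
    model_name: str                 # required, no default
    api_key: Optional[str] = None
    base_url: Optional[str] = None  # None means use the provider default
    timeout: int = 30               # seconds
    max_retries: int = 3
    cache_ttl: int = 3600           # seconds; 0 disables caching
    rate_limit_rpm: int = 1000
    device: str = "auto"            # for local models: "cpu" or "cuda"

cfg = LLMConfigSketch(model_name="gpt-4")
```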

Advanced Usage

Custom Cache Directory

from llmfusion import set_cache_dir

set_cache_dir("/path/to/cache")

Disable Caching

config = LLMConfig(
    api_key="your-key",
    model_name="gpt-4",
    cache_ttl=0  # Disable caching
)
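The effect of cache_ttl can be pictured with a tiny in-memory TTL cache. The library's cache is disk-based; this is only a conceptual sketch of the TTL and "ttl of 0 disables caching" behavior:

```python
import time

class TTLCache:
    """In-memory cache where entries expire after ttl_seconds."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        if self.ttl <= 0:
            return  # a TTL of 0 disables caching entirely
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if time.monotonic() >= expiry:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

cache = TTLCache(ttl_seconds=3600)
cache.set("prompt-hash", "cached response")
```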

Using Local HuggingFace Models

config = LLMConfig(
    model_name="mistralai/Mistral-7B-Instruct-v0.1"
)
hf = get_provider("huggingface", config)

Error Handling

The library provides comprehensive error handling with custom exceptions:

from llmfusion import (
    LLMError,
    RateLimitError,
    AuthenticationError,
    ProviderError
)

try:
    response = client.generate(llm_input)
except RateLimitError as e:
    print("Rate limit exceeded:", str(e))
except AuthenticationError as e:
    print("Authentication failed:", str(e))
except ProviderError as e:
    print("Provider error:", str(e))
except LLMError as e:
    print("General error:", str(e))

Contributing

We welcome contributions! Please see our Contributing Guide for details.

Documentation

Full documentation is available at https://github.com/mbenhaddou/llm_providers

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

  • Inspired by the need for a unified LLM interface
  • Built with ❤️ by Mohamed Ben Haddou

Project details


Download files

Download the file for your platform.

Source Distribution

llmfusion-0.2.5.tar.gz (17.9 kB)

Uploaded Source

Built Distribution


llmfusion-0.2.5-py3-none-any.whl (22.2 kB)

Uploaded Python 3

File details

Details for the file llmfusion-0.2.5.tar.gz.

File metadata

  • Download URL: llmfusion-0.2.5.tar.gz
  • Upload date:
  • Size: 17.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.8

File hashes

Hashes for llmfusion-0.2.5.tar.gz
Algorithm Hash digest
SHA256 45664cb49b90734d47f706d21d74b558a2bde7e83f723fa0b5d7e5d101833b8a
MD5 17ce7386c6807e15f29afa73776c60ff
BLAKE2b-256 c9b5ec113c36235949a7979ed2839f901c6b3c34757a255791712b43c188aeeb


File details

Details for the file llmfusion-0.2.5-py3-none-any.whl.

File metadata

  • Download URL: llmfusion-0.2.5-py3-none-any.whl
  • Upload date:
  • Size: 22.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.8

File hashes

Hashes for llmfusion-0.2.5-py3-none-any.whl
Algorithm Hash digest
SHA256 90b68cf195f889b4f9ff13dd3a318159fa3d45995b2e78aea92975ec501b6d28
MD5 f254c98de9b9534af73ee82edc9e233c
BLAKE2b-256 4e4ec8d5dc8598a5765f4383803bc20f217b5400d11b7b678de1dc7a4039b41d

