
Unified interface for multiple LLM providers

Project description

LLM Providers

A unified Python library for interacting with multiple Large Language Model (LLM) providers with a consistent interface.

Features

  • Multi-Provider Support: OpenAI, Claude, Gemini, Groq, DeepSeek, HuggingFace (API and local)
  • Unified Interface: Consistent API across all providers
  • Async Support: Full async/await implementation
  • Streaming: Real-time text generation
  • Caching: Disk-based caching with TTL
  • Rate Limiting: Built-in rate limiting
  • Error Handling: Comprehensive error handling and retries
  • Type Safety: Pydantic models for input validation
  • Extensible: Easy to add new providers

Installation

pip install llmfusion

Supported Providers

Provider      API Support   Local Models   Streaming   Async
OpenAI        ✓             ✗              ✓           ✓
Claude        ✓             ✗              ✓           ✓
Gemini        ✓             ✗              ✓           ✓
Groq          ✓             ✗              ✓           ✓
DeepSeek      ✓             ✗              ✓           ✓
HuggingFace   ✓             ✓              ✓           ✓

Quick Start

Basic Usage

from llmfusion import get_provider, LLMConfig, LLMInput

# Configure OpenAI
openai_config = LLMConfig(
    api_key="your-openai-key",
    model_name="gpt-4"
)

# Create client
openai = get_provider("openai", openai_config)

# Create input (named llm_input to avoid shadowing the built-in input())
llm_input = LLMInput(
    prompt="Explain quantum computing",
    system_prompt="You are a physics expert",
    temperature=0.7,
    max_tokens=500
)

# Generate response
response = openai.generate(llm_input)
print(response)

Async Usage

import asyncio
from llmfusion import get_provider, LLMConfig, LLMInput


async def main():
    config = LLMConfig(
        api_key="your-api-key",
        model_name="claude-3"
    )
    claude = get_provider("claude", config)

    llm_input = LLMInput(prompt="Write a poem about AI")
    response = await claude.agenerate(llm_input)
    print(response)


asyncio.run(main())

Streaming

from llmfusion import get_provider, LLMConfig, LLMInput

config = LLMConfig(
    api_key="your-api-key",
    model_name="gpt-4"
)
gpt = get_provider("openai", config)

llm_input = LLMInput(prompt="Explain blockchain technology")
for chunk in gpt.stream(llm_input):
    print(chunk, end="", flush=True)

Configuration

LLMConfig Parameters

Parameter        Type   Default            Description
api_key          str    None               Provider API key
model_name       str    required           Model name/identifier
base_url         str    provider default   Custom API endpoint
timeout          int    30                 Request timeout in seconds
max_retries      int    3                  Maximum retry attempts
cache_ttl        int    3600               Cache time-to-live in seconds
rate_limit_rpm   int    1000               Requests-per-minute limit
device           str    "auto"             Device for local models ("cpu" or "cuda")
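Together, timeout and max_retries bound how long a single call can take. As an illustration of the usual retry-with-exponential-backoff pattern (a sketch, not the library's actual internals; call_with_retries is a hypothetical helper):

```python
import time

def call_with_retries(fn, max_retries=3, base_delay=1.0):
    """Retry fn up to max_retries times, doubling the delay after each failure."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted, surface the error
            time.sleep(base_delay * (2 ** attempt))

# A flaky function that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky, max_retries=3, base_delay=0.01))  # -> ok
```

With max_retries=3 the function is attempted at most four times; the transient failures above are absorbed by the first two retries.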

Advanced Usage

Custom Cache Directory

from llmfusion import set_cache_dir

set_cache_dir("/path/to/cache")

Disable Caching

config = LLMConfig(
    api_key="your-key",
    model_name="gpt-4",
    cache_ttl=0  # Disable caching
)
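cache_ttl controls how long a cached response is reused before expiring, and 0 turns caching off. A minimal in-memory sketch of that behaviour (the library itself caches to disk; this TTLCache class is illustrative only):

```python
import time

class TTLCache:
    def __init__(self, ttl):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

    def set(self, key, value):
        if self.ttl <= 0:
            return  # ttl == 0 disables caching entirely
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl=3600)
cache.set("prompt-hash", "cached response")
print(cache.get("prompt-hash"))     # -> cached response

disabled = TTLCache(ttl=0)
disabled.set("prompt-hash", "never stored")
print(disabled.get("prompt-hash"))  # -> None
```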

Using Local HuggingFace Models

config = LLMConfig(
    model_name="mistralai/Mistral-7B-Instruct-v0.1"
)
hf = get_provider("huggingface", config)
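The device setting defaults to "auto", which presumably selects a GPU when one is available. A plausible resolution helper, sketched here as an assumption rather than the library's actual code:

```python
def resolve_device(device="auto"):
    """Map the config's device setting to a concrete device string."""
    if device != "auto":
        return device  # an explicit "cpu" or "cuda" is used as-is
    try:
        import torch  # optional dependency used for local models
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"  # without torch, CPU is the only option

print(resolve_device("cpu"))                      # -> cpu
print(resolve_device("auto") in ("cpu", "cuda"))  # -> True
```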

Error Handling

The library provides comprehensive error handling with custom exceptions:

from llmfusion import (
    LLMError,
    RateLimitError,
    AuthenticationError,
    ProviderError
)

try:
    response = client.generate(input)
except RateLimitError as e:
    print("Rate limit exceeded:", str(e))
except AuthenticationError as e:
    print("Authentication failed:", str(e))
except ProviderError as e:
    print("Provider error:", str(e))
except LLMError as e:
    print("General error:", str(e))
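The except clauses above are ordered from most to least specific, which only works if the specific exceptions subclass LLMError. A sketch of the hierarchy that ordering implies (the library's actual class definitions may differ):

```python
class LLMError(Exception):
    """Base class for all library errors."""

class RateLimitError(LLMError): ...
class AuthenticationError(LLMError): ...
class ProviderError(LLMError): ...

# Because RateLimitError is checked before LLMError, the specific handler
# catches it; any other LLMError falls through to the general clause.
def classify(exc):
    try:
        raise exc
    except RateLimitError:
        return "rate limit"
    except LLMError:
        return "general"

print(classify(RateLimitError("429")))  # -> rate limit
print(classify(ProviderError("boom")))  # -> general
```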

Contributing

We welcome contributions! Please see our Contributing Guide for details.

Documentation

Full documentation is available at https://github.com/mbenhaddou/llm_providers

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

  • Inspired by the need for a unified LLM interface
  • Built with ❤️ by Mohamed Ben Haddou

Download files

Download the file for your platform.

Source Distribution

llmfusion-0.2.1.tar.gz (17.3 kB)

Built Distribution

llmfusion-0.2.1-py3-none-any.whl (21.0 kB)

File details

Details for the file llmfusion-0.2.1.tar.gz.

File metadata

  • Download URL: llmfusion-0.2.1.tar.gz
  • Upload date:
  • Size: 17.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.8

File hashes

Hashes for llmfusion-0.2.1.tar.gz
Algorithm Hash digest
SHA256 ddc193dc42cf49c9c4952348f982f26ca96b3255dc7d6be9357f07eb0c6c4c20
MD5 e4e2741f640da61a32b989895764514b
BLAKE2b-256 39b6a0a76e09dab00a621dc091adb60cf760230260463186c91abbfc7e80310a
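The published digests can be verified locally after downloading, using Python's standard hashlib (shown here on an in-memory payload; substitute the downloaded file's bytes):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# For a real check, hash the downloaded archive:
#   digest = sha256_hex(open("llmfusion-0.2.1.tar.gz", "rb").read())
# and compare it to the SHA256 value published above.
print(sha256_hex(b"hello"))
# -> 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```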

File details

Details for the file llmfusion-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: llmfusion-0.2.1-py3-none-any.whl
  • Upload date:
  • Size: 21.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.8

File hashes

Hashes for llmfusion-0.2.1-py3-none-any.whl
Algorithm Hash digest
SHA256 4c617d4da6a485cfdd4c4749b45dba6c02482bbe4be8ab39ba34b0bbaac3a8c5
MD5 d88629ecf15d5927d12f93c2c78392ef
BLAKE2b-256 118fee182080c3716b67c472932192bab0b786edd1a9d9c8a337dda3b92f27fb
