
Unified interface for multiple LLM providers

Project description


LLM Providers


A Python library that provides a single, consistent interface to multiple Large Language Model (LLM) providers.

Features

  • Multi-Provider Support: OpenAI, Claude, Gemini, Groq, DeepSeek, HuggingFace (API and local)
  • Unified Interface: Consistent API across all providers
  • Async Support: Full async/await implementation
  • Streaming: Real-time text generation
  • Caching: Disk-based caching with TTL
  • Rate Limiting: Built-in rate limiting
  • Error Handling: Comprehensive error handling and retries
  • Type Safety: Pydantic models for input validation
  • Extensible: Easy to add new providers

Installation

pip install llmfusion

Supported Providers

Provider      API Support   Local Models   Streaming   Async
OpenAI        Yes           No             Yes         Yes
Claude        Yes           No             Yes         Yes
Gemini        Yes           No             Yes         Yes
Groq          Yes           No             Yes         Yes
DeepSeek      Yes           No             Yes         Yes
HuggingFace   Yes           Yes            Yes         Yes

Quick Start

Basic Usage

from llmfusion import get_provider, LLMConfig, LLMInput

# Configure OpenAI
openai_config = LLMConfig(
    api_key="your-openai-key",
    model_name="gpt-4"
)

# Create client
openai = get_provider("openai", openai_config)

# Create input
llm_input = LLMInput(  # named llm_input to avoid shadowing the built-in input()
    prompt="Explain quantum computing",
    system_prompt="You are a physics expert",
    temperature=0.7,
    max_tokens=500
)

# Generate response
response = openai.generate(llm_input)
print(response)

Async Usage

import asyncio
from llmfusion import get_provider, LLMConfig, LLMInput


async def main():
    config = LLMConfig(
        api_key="your-api-key",
        model_name="claude-3"
    )
    claude = get_provider("claude", config)

    llm_input = LLMInput(prompt="Write a poem about AI")
    response = await claude.agenerate(llm_input)
    print(response)


asyncio.run(main())
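Because `agenerate` is awaitable, requests to several providers can run concurrently with `asyncio.gather`. The sketch below uses a stand-in client (not part of llmfusion) so it is self-contained:

```python
import asyncio

class StubClient:
    # Stand-in for a provider client exposing an async agenerate method.
    def __init__(self, name):
        self.name = name

    async def agenerate(self, prompt):
        await asyncio.sleep(0)  # yield control, as a real network call would
        return f"{self.name}: {prompt}"

async def fan_out(clients, prompt):
    # Send the same prompt to every provider concurrently;
    # results come back in the same order as the clients list.
    return await asyncio.gather(*(c.agenerate(prompt) for c in clients))
```

With real llmfusion clients, the same pattern applies: build each client via `get_provider` and gather their `agenerate` coroutines.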

Streaming

from llmfusion import get_provider, LLMConfig, LLMInput

config = LLMConfig(
    api_key="your-api-key",
    model_name="gpt-4"
)
gpt = get_provider("openai", config)

llm_input = LLMInput(prompt="Explain blockchain technology")
for chunk in gpt.stream(llm_input):
    print(chunk, end="", flush=True)
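If you also need the complete text after streaming finishes, the chunks can be accumulated while they are printed. A minimal helper, assuming `stream()` yields plain text chunks as in the example above:

```python
def collect_stream(chunks):
    # Print each streamed chunk as it arrives, then return the full text.
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    return "".join(parts)
```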

Configuration

LLMConfig Parameters

Parameter        Type   Default            Description
api_key          str    None               Provider API key
model_name       str    Required           Model name/identifier
base_url         str    Provider default   Custom API endpoint
timeout          int    30                 Request timeout in seconds
max_retries      int    3                  Maximum retry attempts
cache_ttl        int    3600               Cache time-to-live in seconds
rate_limit_rpm   int    1000               Requests-per-minute limit
device           str    "auto"             Device for local models (cpu/cuda)
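Combining several of these parameters, a more fully specified configuration might look like this (the values are illustrative, not recommendations):

```python
from llmfusion import LLMConfig

config = LLMConfig(
    api_key="your-api-key",
    model_name="gpt-4",
    timeout=60,          # allow slower responses
    max_retries=5,       # retry transient failures more aggressively
    cache_ttl=600,       # cache responses for 10 minutes
    rate_limit_rpm=120,  # stay well under the provider's limit
)
```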

Advanced Usage

Custom Cache Directory

from llmfusion import set_cache_dir

set_cache_dir("/path/to/cache")

Disable Caching

config = LLMConfig(
    api_key="your-key",
    model_name="gpt-4",
    cache_ttl=0  # Disable caching
)
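Conceptually, TTL-based caching amounts to comparing an entry's age against `cache_ttl`, and a value of 0 makes every entry stale. A minimal illustration of that rule (not llmfusion's actual implementation):

```python
import time

def is_fresh(stored_at, ttl, now=None):
    # A ttl of 0 (or less) disables caching: every entry counts as stale.
    if ttl <= 0:
        return False
    now = time.time() if now is None else now
    return (now - stored_at) < ttl
```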

Using Local HuggingFace Models

config = LLMConfig(
    model_name="mistralai/Mistral-7B-Instruct-v0.1"
)
hf = get_provider("huggingface", config)

Error Handling

The library provides comprehensive error handling with custom exceptions:

from llmfusion import (
    LLMError,
    RateLimitError,
    AuthenticationError,
    ProviderError
)

try:
    response = client.generate(llm_input)
except RateLimitError as e:
    print("Rate limit exceeded:", str(e))
except AuthenticationError as e:
    print("Authentication failed:", str(e))
except ProviderError as e:
    print("Provider error:", str(e))
except LLMError as e:
    print("General error:", str(e))
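Because `RateLimitError` is a distinct exception type, a retry loop with exponential backoff is easy to layer on top of any client. A self-contained sketch (using a stand-in exception and client so it runs without llmfusion installed):

```python
import time

class RateLimitError(Exception):
    """Stand-in for llmfusion's RateLimitError, so this sketch is standalone."""

def generate_with_backoff(client, llm_input, max_attempts=4, base_delay=1.0):
    # Retry on rate-limit errors with exponential backoff: 1s, 2s, 4s, ...
    for attempt in range(max_attempts):
        try:
            return client.generate(llm_input)
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)

class FlakyClient:
    # Fails twice with RateLimitError, then succeeds.
    def __init__(self):
        self.calls = 0

    def generate(self, llm_input):
        self.calls += 1
        if self.calls < 3:
            raise RateLimitError("429 Too Many Requests")
        return "ok"
```

Note that llmfusion already retries internally (`max_retries`); a wrapper like this is only useful when you want backoff behavior beyond what the library provides.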

Contributing

We welcome contributions! Please see our Contributing Guide for details.

Documentation

Full documentation is available at https://github.com/mbenhaddou/llm_providers

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

  • Inspired by the need for a unified LLM interface
  • Built with ❤️ by Mohamed Ben Haddou


Download files

Download the file for your platform.

Source Distribution

llmfusion-0.2.4.tar.gz (18.0 kB)

Uploaded Source

Built Distribution


llmfusion-0.2.4-py3-none-any.whl (22.4 kB)

Uploaded Python 3

File details

Details for the file llmfusion-0.2.4.tar.gz.

File metadata

  • Download URL: llmfusion-0.2.4.tar.gz
  • Upload date:
  • Size: 18.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.8

File hashes

Hashes for llmfusion-0.2.4.tar.gz
Algorithm Hash digest
SHA256 5439f568703ea7a8bea77949fabbefd08e0fbd5f74f69cdf94d311686e056eff
MD5 41d99140d7c025e0b7a652c55fa2a4bb
BLAKE2b-256 7952a470c4e0f48077db32b7dfc097741851d8013e17fc4cd3e96ed51a2aa05f


File details

Details for the file llmfusion-0.2.4-py3-none-any.whl.

File metadata

  • Download URL: llmfusion-0.2.4-py3-none-any.whl
  • Upload date:
  • Size: 22.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.8

File hashes

Hashes for llmfusion-0.2.4-py3-none-any.whl
Algorithm Hash digest
SHA256 16ddab8d1238e465f693500e38be96f5d23648c2a1584f2c9713406e99faa6ab
MD5 49a306a89dcbfb7757a62774c2780f69
BLAKE2b-256 e9d637193c52ff3c6fd0839ee4b8141939e63c0315c58d710f857ccbd9fceda1

