LLM Providers
A unified Python library for interacting with multiple Large Language Model (LLM) providers with a consistent interface.
Features
- Multi-Provider Support: OpenAI, Claude, Gemini, Groq, DeepSeek, HuggingFace (API and local)
- Unified Interface: Consistent API across all providers
- Async Support: Full async/await implementation
- Streaming: Real-time text generation
- Caching: Disk-based caching with TTL
- Rate Limiting: Built-in rate limiting
- Error Handling: Comprehensive error handling and retries
- Type Safety: Pydantic models for input validation
- Extensible: Easy to add new providers
Installation
```
pip install llmfusion
```
Supported Providers
| Provider | API Support | Local Models | Streaming | Async |
|---|---|---|---|---|
| OpenAI | ✅ | ❌ | ✅ | ✅ |
| Claude | ✅ | ❌ | ❌ | ✅ |
| Gemini | ✅ | ❌ | ❌ | ✅ |
| Groq | ✅ | ❌ | ✅ | ✅ |
| DeepSeek | ✅ | ❌ | ✅ | ✅ |
| HuggingFace | ✅ | ✅ | ❌ | ❌ |
Quick Start
Basic Usage
```python
from llmfusion import get_provider, LLMConfig, LLMInput

# Configure OpenAI
openai_config = LLMConfig(
    api_key="your-openai-key",
    model_name="gpt-4",
)

# Create the client
openai = get_provider("openai", openai_config)

# Build the request (named llm_input to avoid shadowing the built-in input)
llm_input = LLMInput(
    prompt="Explain quantum computing",
    system_prompt="You are a physics expert",
    temperature=0.7,
    max_tokens=500,
)

# Generate a response
response = openai.generate(llm_input)
print(response)
```
Async Usage
```python
import asyncio

from llmfusion import get_provider, LLMConfig, LLMInput

async def main():
    config = LLMConfig(
        api_key="your-api-key",
        model_name="claude-3",
    )
    claude = get_provider("claude", config)

    llm_input = LLMInput(prompt="Write a poem about AI")
    response = await claude.agenerate(llm_input)
    print(response)

asyncio.run(main())
```
Streaming
```python
from llmfusion import get_provider, LLMConfig, LLMInput

config = LLMConfig(
    api_key="your-api-key",
    model_name="gpt-4",
)
gpt = get_provider("openai", config)

llm_input = LLMInput(prompt="Explain blockchain technology")

# Print chunks as they arrive
for chunk in gpt.stream(llm_input):
    print(chunk, end="", flush=True)
```
Configuration
LLMConfig Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | str | None | Provider API key |
| model_name | str | Required | Model name/identifier |
| base_url | str | Provider default | Custom API endpoint |
| timeout | int | 30 | Request timeout in seconds |
| max_retries | int | 3 | Maximum retry attempts |
| cache_ttl | int | 3600 | Cache time-to-live in seconds |
| rate_limit_rpm | int | 1000 | Requests per minute limit |
| device | str | "auto" | Device for local models (cpu/cuda) |
Advanced Usage
Custom Cache Directory
```python
from llmfusion import set_cache_dir

set_cache_dir("/path/to/cache")
```
Disable Caching
```python
config = LLMConfig(
    api_key="your-key",
    model_name="gpt-4",
    cache_ttl=0,  # disable caching
)
```
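Conceptually, a TTL cache stores each entry with a timestamp and treats it as a miss once the TTL has elapsed; a TTL of 0 means nothing is ever stored. A minimal in-memory sketch of the idea (not the library's actual disk-based implementation):

```python
import time
from typing import Any, Optional

class TTLCache:
    """Minimal in-memory cache: entries expire ttl seconds after insertion."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store: dict = {}

    def set(self, key: str, value: Any) -> None:
        if self.ttl <= 0:  # a TTL of 0 disables caching entirely
            return
        self._store[key] = (time.monotonic(), value)

    def get(self, key: str) -> Optional[Any]:
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # expired: drop the entry, report a miss
            return None
        return value

cache = TTLCache(ttl=3600)
cache.set("prompt-hash", "cached response")
print(cache.get("prompt-hash"))  # → cached response
```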
Using Local HuggingFace Models
```python
from llmfusion import get_provider, LLMConfig

config = LLMConfig(
    model_name="mistralai/Mistral-7B-Instruct-v0.1"
)
hf = get_provider("huggingface", config)
```
Error Handling
The library provides comprehensive error handling with custom exceptions:
```python
from llmfusion import (
    LLMError,
    RateLimitError,
    AuthenticationError,
    ProviderError,
)

try:
    response = client.generate(input)
except RateLimitError as e:
    print("Rate limit exceeded:", e)
except AuthenticationError as e:
    print("Authentication failed:", e)
except ProviderError as e:
    print("Provider error:", e)
except LLMError as e:
    print("General error:", e)
```
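Callers can combine these exceptions with their own retry logic, e.g. backing off on rate limits. A self-contained sketch, using a local placeholder for the library's RateLimitError and a hypothetical `with_retries` helper (neither is part of the llmfusion API):

```python
import time

class RateLimitError(Exception):
    """Placeholder standing in for llmfusion's RateLimitError."""

def with_retries(fn, attempts: int = 3, base_delay: float = 0.1):
    """Call fn(), retrying on rate-limit errors with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == attempts - 1:
                raise  # out of retries: propagate the error
            time.sleep(base_delay * 2 ** attempt)

# Demo: a callable that fails twice with a rate limit, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("slow down")
    return "ok"

print(with_retries(flaky))  # → ok
```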
Contributing
We welcome contributions! Please see our Contributing Guide for details.
Documentation
Full documentation is available at https://github.com/mbenhaddou/llm_providers
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgements
- Inspired by the need for a unified LLM interface
- Built with ❤️ by Mohamed Ben Haddou