# multi-llm-provider

Unified multi-provider LLM interface supporting Claude, OpenAI, Mistral, Ollama, and Clipboard.
## Installation

```bash
pip install multi-llm-provider
```
### Optional providers

Install only the providers you need (quoted so that shells such as zsh do not expand the brackets):

```bash
pip install "multi-llm-provider[claude]"   # Anthropic Claude
pip install "multi-llm-provider[openai]"   # OpenAI GPT
pip install "multi-llm-provider[mistral]"  # Mistral AI
pip install "multi-llm-provider[ollama]"   # Ollama (local)
pip install "multi-llm-provider[all]"      # All providers
```
## Quick start

```python
from multi_llm_provider import AIProvider, AIProviderFactory

# Create an analyzer for Claude
analyzer = AIProviderFactory.create(
    provider=AIProvider.CLAUDE_API,
    model="claude-sonnet-4-20250514",
    system_prompt="You are a helpful assistant.",
)

# Analyze content
result = analyzer.analyze("Explain quantum computing in simple terms.")
print(result)
```
## Supported providers

| Provider | Enum value | Required extra |
|---|---|---|
| Anthropic Claude | `AIProvider.CLAUDE_API` | `claude` |
| OpenAI GPT | `AIProvider.OPENAI_API` | `openai` |
| Mistral AI | `AIProvider.MISTRAL_API` | `mistral` |
| Ollama (local) | `AIProvider.OLLAMA` | `ollama` |
| Clipboard (manual) | `AIProvider.CLIPBOARD` | none |
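For example, the Ollama provider runs against a local server and needs no API key. A minimal sketch, assuming an Ollama server is running and that the model name below (illustrative, not prescribed by this package) has already been pulled:

```python
from multi_llm_provider import AIProvider, AIProviderFactory

# Model name is illustrative; any model already fetched with
# `ollama pull` should work here.
analyzer = AIProviderFactory.create(
    provider=AIProvider.OLLAMA,
    model="llama3.2",
    system_prompt="You are a helpful assistant.",
)
result = analyzer.analyze("Say hello.")
```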
## Architecture

All providers implement the `AIAnalyzer` abstract base class (importable as `from multi_llm_provider import AIAnalyzer`):

```python
from abc import ABC, abstractmethod

class AIAnalyzer(ABC):
    @abstractmethod
    def analyze(self, content: str) -> str:
        """Send content to the LLM and return the response."""
        ...
```
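Because the contract is a single method, a custom backend is easy to sketch. The subclass below is a hypothetical test double, not part of the library, and assumes `AIAnalyzer` requires no constructor arguments:

```python
from multi_llm_provider import AIAnalyzer

class EchoAnalyzer(AIAnalyzer):
    """Hypothetical test double that never touches the network."""

    def analyze(self, content: str) -> str:
        # Deterministic response, handy when unit-testing code that
        # depends only on the AIAnalyzer interface.
        return f"echo: {content}"

print(EchoAnalyzer().analyze("ping"))  # -> "echo: ping"
```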
The `AIProviderFactory` creates the appropriate analyzer based on the provider enum:

```python
analyzer = AIProviderFactory.create(
    provider=AIProvider.OPENAI_API,
    model="gpt-4o",
    system_prompt="You are a data analyst.",
    temperature=0.7,
    max_tokens=4096,
)
```
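Since every analyzer shares the same `analyze` signature, downstream code can stay provider-agnostic. A small illustrative helper (not part of the library):

```python
from multi_llm_provider import AIAnalyzer

def summarize(analyzer: AIAnalyzer, text: str) -> str:
    # Works the same whichever provider built the analyzer.
    return analyzer.analyze(f"Summarize in three bullet points:\n\n{text}")
```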
## Clipboard provider

The `ClipboardAnalyzer` copies content to the system clipboard with instructions, then waits for the user to paste the LLM response back. It is useful for manual workflows or when API access is not available.
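A minimal usage sketch, assuming the clipboard provider does not require a `model` argument since no API is involved:

```python
from multi_llm_provider import AIProvider, AIProviderFactory

analyzer = AIProviderFactory.create(
    provider=AIProvider.CLIPBOARD,
    system_prompt="You are a helpful assistant.",
)

# Copies the prompt to the clipboard, then blocks until the
# LLM response is pasted back.
result = analyzer.analyze("Review this paragraph for tone.")
```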
## Error handling

```python
from multi_llm_provider import ConfigError, WorkflowError

try:
    result = analyzer.analyze("some content")
except WorkflowError as e:
    print(f"LLM call failed: {e}")
except ConfigError as e:
    print(f"Configuration error: {e}")
```
## Development

```bash
git clone https://github.com/stephanejouve/multi-llm-provider.git
cd multi-llm-provider

poetry install --with dev,extras

poetry run pytest tests/ -v
poetry run black src/ tests/ --check --line-length=100
poetry run ruff check src/
poetry run isort src/ tests/ --check-only --profile black --line-length=100
```
## License

MIT