A multiplexer for Large Language Model APIs built on the OpenAI SDK. It combines quotas from multiple models and automatically uses fallback models when the primary models are rate limited.
Multiplexer LLM (Python)
Unlock the Power of Distributed AI 🚀
A lightweight Python library that combines the quotas of multiple open source LLM providers behind a single unified API. Seamlessly distribute your requests across providers hosting open source models for maximum throughput and reliability.
The Problem: Limited AI Resources
- ❌ Rate Limit Errors: "Rate limit exceeded" errors hinder your application's performance
- ❌ Limited Throughput: Single provider constraints limit your AI capabilities
- ❌ Unpredictable Failures: Rate limits can occur at critical moments
- ❌ Manual Intervention: Switching providers requires code changes
The Solution: Unified Access to Multiple Providers
- ✅ Increased Throughput: Combine quotas from multiple open source LLM providers
- ✅ Error Resilience: Automatic failover when one provider hits rate limits
- ✅ Seamless Integration: Compatible with OpenAI SDK for easy adoption
- ✅ Smart Load Balancing: Weight-based distribution across providers for optimal performance
Key Benefits
- 🚀 Scalable AI: Combine resources from multiple providers for enhanced capabilities
- 🛡️ Error Prevention: Automatic failover minimizes rate limit failures
- ⚡ High Availability: Seamless switching between providers ensures continuous operation
- 🔌 OpenAI SDK Compatibility: Works with existing OpenAI SDK code
- 📊 Usage Analytics: Track provider performance and rate limits
How It Works
Single Model: [Model A: 10K RPM] ❌ Rate Limit Error at 10,001 requests
Multiple Providers: [Provider 1: 10K] + [Provider 2: 15K] + [Provider 3: 20K] = 45,000 RPM ✅
Multiple Models: [Model A: 10K] + [Model B: 50K] + [Model C: 15K] = 75,000 RPM ✅✅
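The quota arithmetic above can be sketched in a few lines. The provider names and the dispatch policy here are purely illustrative, not the library's implementation:

```python
# Hypothetical sketch (not the library's internals): per-provider quotas
# add up when requests are spread across providers instead of one.
provider_rpm = {"provider1": 10_000, "provider2": 15_000, "provider3": 20_000}
combined_rpm = sum(provider_rpm.values())  # 45_000 requests per minute

# Naive dispatcher: route each request to the provider with the most
# remaining quota in the current minute window.
remaining = dict(provider_rpm)

def dispatch():
    name = max(remaining, key=remaining.get)
    remaining[name] -= 1
    return name

print(combined_rpm)  # 45000
print(dispatch())    # "provider3" — it has the most headroom first
```

The real multiplexer uses configurable weights rather than raw quota headroom, but the effect is the same: no single provider's limit caps your total throughput.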
Installation
```shell
pip install multiplexer-llm
```
The package requires Python 3.8+ and automatically installs the OpenAI Python SDK as a dependency.
Quick Start
```python
import asyncio
import os

from multiplexer_llm import Multiplexer
from openai import AsyncOpenAI


async def main():
    # Create client instances for a few open source models
    model1 = AsyncOpenAI(
        api_key=os.getenv("MODEL1_API_KEY"),
        base_url="https://api.model1.com/v1/",
    )
    model2 = AsyncOpenAI(
        api_key=os.getenv("MODEL2_API_KEY"),
        base_url="https://api.model2.org/v1",
    )

    # Initialize the multiplexer
    async with Multiplexer() as multiplexer:
        # Add models with weights
        multiplexer.add_model(model1, 5, "model1-large")
        multiplexer.add_model(model2, 3, "model2-base")

        # Use it like a regular OpenAI client
        completion = await multiplexer.chat.completions.create(
            model="placeholder",  # Will be overridden by the selected model
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "What is the capital of France?"},
            ],
        )
        print(completion.choices[0].message.content)
        print("Model usage stats:", multiplexer.get_stats())


# Run the async function
asyncio.run(main())
```
How Primary and Fallback Models Work
The multiplexer operates with a two-tier system:
Primary Models (add_model)
- First choice: Used when available
- Weight-based selection: Higher weights = higher probability of selection
Fallback Models (add_fallback_model)
- Backup safety net: Activated when all primary models hit rate limits
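A minimal sketch of this two-tier selection, assuming illustrative model names and weights (the library's actual internals may differ):

```python
import random

# Illustrative two-tier selection: weighted choice among primaries,
# falling back only when every primary model is rate limited.
primaries = [("model-a", 5), ("model-b", 3)]  # (model name, weight)
fallbacks = [("model-c", 1)]
rate_limited = set()  # names currently cooling down after a 429

def pick_model(rng):
    pool = [(name, w) for name, w in primaries if name not in rate_limited]
    if not pool:
        # All primary models are rate limited: use the fallback tier.
        pool = [(name, w) for name, w in fallbacks if name not in rate_limited]
    names, weights = zip(*pool)
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
print(pick_model(rng))  # a primary; "model-a" roughly 5/8 of the time
rate_limited.update({"model-a", "model-b"})
print(pick_model(rng))  # "model-c" — both primaries are rate limited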
API Examples
Creating a Multiplexer
```python
from multiplexer_llm import Multiplexer

# Create a multiplexer instance
multiplexer = Multiplexer()

# Or use it as an async context manager (recommended)
async with Multiplexer() as multiplexer:
    # Your code here
    pass
```
Adding Models
```python
# Add a primary model
multiplexer.add_model(client: AsyncOpenAI, weight: int, model_name: str)

# Add a fallback model
multiplexer.add_fallback_model(client: AsyncOpenAI, weight: int, model_name: str)
```
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
About Haven Network
Haven Network builds open-source tools to help online communities produce high-quality data for multi-modal AI, with a strong focus on local inference and data privacy.
File details
Details for the file multiplexer_llm-0.2.3.tar.gz.
File metadata
- Download URL: multiplexer_llm-0.2.3.tar.gz
- Upload date:
- Size: 21.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 72bd118b32c08a0dd4a9645915eeb80953cc9519cad1ff3b51a069e1c3694a74 |
| MD5 | e84bd4edde691e43d542a46cb3f90e92 |
| BLAKE2b-256 | 5f8869ae6972aa22338919c31b057a5687c2c6db418d9e4f58eeb64661de7bbc |
Provenance
The following attestation bundles were made for multiplexer_llm-0.2.3.tar.gz:

Publisher: publish-multiplexer.yml on Haven-hvn/multiplexer-llm

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: multiplexer_llm-0.2.3.tar.gz
- Subject digest: 72bd118b32c08a0dd4a9645915eeb80953cc9519cad1ff3b51a069e1c3694a74
- Sigstore transparency entry: 836410244
- Sigstore integration time:
- Permalink: Haven-hvn/multiplexer-llm@b1081042d757001a65f415f192eab5c1d7a9434d
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Haven-hvn
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-multiplexer.yml@b1081042d757001a65f415f192eab5c1d7a9434d
- Trigger Event: workflow_dispatch
File details
Details for the file multiplexer_llm-0.2.3-py3-none-any.whl.
File metadata
- Download URL: multiplexer_llm-0.2.3-py3-none-any.whl
- Upload date:
- Size: 14.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 1e51eef30edb5768895dac7c00cce1b0b85ecac311eb5e6c203fd38e25205ec7 |
| MD5 | bf0c8a91d6d073226a22f92428128b97 |
| BLAKE2b-256 | c45b888e76c1e2245c3de9af0d7204d76829f9c1d890a4499223b2b136954305 |
Provenance
The following attestation bundles were made for multiplexer_llm-0.2.3-py3-none-any.whl:

Publisher: publish-multiplexer.yml on Haven-hvn/multiplexer-llm

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: multiplexer_llm-0.2.3-py3-none-any.whl
- Subject digest: 1e51eef30edb5768895dac7c00cce1b0b85ecac311eb5e6c203fd38e25205ec7
- Sigstore transparency entry: 836410247
- Sigstore integration time:
- Permalink: Haven-hvn/multiplexer-llm@b1081042d757001a65f415f192eab5c1d7a9434d
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Haven-hvn
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-multiplexer.yml@b1081042d757001a65f415f192eab5c1d7a9434d
- Trigger Event: workflow_dispatch