A multiplexer for Large Language Model APIs built on the OpenAI SDK. It combines quotas from multiple models and automatically uses fallback models when the primary models are rate limited.

Multiplexer LLM (Python)

Unlock the Power of Distributed AI 🚀

A lightweight Python library that combines the quotas of multiple open-source LLM providers behind a single unified API. Seamlessly distribute your requests across providers hosting open-source models, ensuring maximum throughput and reliability.

The Problem: Limited AI Resources

  • Rate Limit Errors: "Rate limit exceeded" errors hinder your application's performance
  • Limited Throughput: Single provider constraints limit your AI capabilities
  • Unpredictable Failures: Rate limits can occur at critical moments
  • Manual Intervention: Switching providers requires code changes

The Solution: Unified Access to Multiple Providers

  • Increased Throughput: Combine quotas from multiple open source LLM providers
  • Error Resilience: Automatic failover when one provider hits rate limits
  • Seamless Integration: Compatible with OpenAI SDK for easy adoption
  • Smart Load Balancing: Weight-based distribution across providers for optimal performance

Key Benefits

  • 🚀 Scalable AI: Combine resources from multiple providers for enhanced capabilities
  • 🛡️ Error Prevention: Automatic failover minimizes rate limit failures
  • ⚡ High Availability: Seamless switching between providers ensures continuous operation
  • 🔌 OpenAI SDK Compatibility: Works with existing OpenAI SDK code
  • 📊 Usage Analytics: Track provider performance and rate limits

How It Works

Single Model:        [Model A: 10K RPM] ❌ Rate limit error at request 10,001
Multiple Providers:  [Provider 1: 10K RPM] + [Provider 2: 15K RPM] + [Provider 3: 20K RPM] = 45,000 RPM ✅
Multiple Models:     [Model A: 10K RPM] + [Model B: 50K RPM] + [Model C: 15K RPM] = 75,000 RPM ✅✅

Installation

pip install multiplexer-llm

The package requires Python 3.8+ and automatically installs the OpenAI Python SDK as a dependency.

Quick Start

import asyncio
import os
from multiplexer_llm import Multiplexer
from openai import AsyncOpenAI

async def main():
    # Create client instances for a few open source models
    model1 = AsyncOpenAI(
        api_key=os.getenv("MODEL1_API_KEY"),
        base_url="https://api.model1.com/v1/",
    )

    model2 = AsyncOpenAI(
        api_key=os.getenv("MODEL2_API_KEY"),
        base_url="https://api.model2.org/v1",
    )

    # Initialize multiplexer
    async with Multiplexer() as multiplexer:
        # Add models with weights
        multiplexer.add_model(model1, 5, "model1-large")
        multiplexer.add_model(model2, 3, "model2-base")

        # Use like a regular OpenAI client
        completion = await multiplexer.chat.completions.create(
            model="placeholder",  # Will be overridden by selected model
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "What is the capital of France?"},
            ],
        )

        print(completion.choices[0].message.content)
        print("Model usage stats:", multiplexer.get_stats())

# Run the async function
asyncio.run(main())

How Primary and Fallback Models Work

The multiplexer operates with a two-tier system:

Primary Models (add_model)

  • First choice: Used when available
  • Weight-based selection: Higher weights = higher probability of selection
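The package's internals aren't shown here, but weight-based selection can be sketched with the standard library: with weights 5 and 3 (as in the Quick Start), the first model is picked roughly 5/8 of the time. The function and variable names below are illustrative, not the package's actual implementation.

```python
import random

def pick_model(models, rng):
    """Pick a model name with probability proportional to its weight."""
    names = list(models)
    weights = [models[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Weights matching the Quick Start example: 5 vs 3
models = {"model1-large": 5, "model2-base": 3}
rng = random.Random(0)  # seeded so the demo is reproducible

picks = [pick_model(models, rng) for _ in range(8000)]
share = picks.count("model1-large") / len(picks)
print(round(share, 3))  # a value near 5/8 = 0.625
```

Higher weights simply shift the selection probability; they do not reserve capacity or enforce ordering.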

Fallback Models (add_fallback_model)

  • Backup safety net: Activated when all primary models hit rate limits
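The failover behavior can be illustrated in miniature with a standalone sketch (not the package's actual code): try the primary models first, and move to the fallback tier only when every primary raises a rate-limit error. All names below are hypothetical.

```python
class RateLimitError(Exception):
    """Stand-in for the rate-limit error an OpenAI-compatible API raises."""

def call_with_failover(primaries, fallbacks, request):
    """Try each primary model; use fallbacks only if all primaries are limited."""
    for tier in (primaries, fallbacks):
        for model in tier:
            try:
                return model(request)
            except RateLimitError:
                continue  # this model is rate limited; try the next one
    raise RateLimitError("all models exhausted")

# Demo: both primaries are rate limited, so the fallback answers.
def limited(request):
    raise RateLimitError()

def fallback(request):
    return f"fallback handled: {request}"

print(call_with_failover([limited, limited], [fallback], "ping"))
# prints "fallback handled: ping"
```

As long as any primary model succeeds, the fallback tier is never consulted.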

API Examples

Creating a Multiplexer

from multiplexer_llm import Multiplexer

# Create multiplexer instance
multiplexer = Multiplexer()

# Or use as async context manager (recommended)
async with Multiplexer() as multiplexer:
    # Your code here
    pass

Adding Models

# Add a primary model
# Signature: add_model(client: AsyncOpenAI, weight: int, model_name: str)
multiplexer.add_model(model1, 5, "model1-large")

# Add a fallback model
# Signature: add_fallback_model(client: AsyncOpenAI, weight: int, model_name: str)
multiplexer.add_fallback_model(model2, 1, "model2-base")

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

About Haven Network

Haven Network builds open-source tools to help online communities produce high-quality data for multi-modal AI, with a strong focus on local inference and data privacy.
