
A unified framework for LLM routing and evaluation.


LLMRouter Logo

🚀 LLMRouter: An Open-Source Library for LLM Routing


✨ Introduction

LLMRouter Overview

LLMRouter is an intelligent routing system designed to optimize LLM inference by dynamically selecting the most suitable model for each query. Its core features are:

  1. 🚀 Smart Routing: Automatically routes queries to the optimal LLM based on task complexity, cost, and performance requirements.
  2. 📊 Multiple Router Models: Support for over 16 routing models, organized into four major categories (single-round routers, multi-round routers, agentic routers, and personalized routers) and covering a wide range of strategies such as KNN, SVM, MLP, Matrix Factorization, Elo Rating, graph-based routing, BERT-based routing, hybrid probabilistic methods, transformed-score routers, and more.
  3. 🛠️ Unified CLI: A complete command-line interface for training, inference, and interactive chat with a Gradio-based UI.
  4. 📈 Data Generation Pipeline: A complete pipeline for generating training data from 11 benchmark datasets with automatic API calling and evaluation.

📰 News

  • 🔗 [2026-02]: OpenClaw Router - OpenAI-compatible server with OpenClaw integration! We've also released llmrouter-lib v0.3.0. Deploy LLMRouter as a production API server that works seamlessly with Slack, Discord, and other messaging platforms via OpenClaw. Features include multimodal understanding (image/audio/video), retrieval-augmented routing memory, streaming support, and all 16+ LLMRouter routing strategies. See OpenClaw Router Integration.

  • โญ [2026-01]: LLMRouter just crossed 1K GitHub stars! We've also released llmrouter-lib v0.2.0. Updates include service-specific dict configs (OpenAI, Anthropic, etc.) and multimodal routing (Video/Image + Text) on Geometry3K, MathVista, and Charades-Egoโ€”all in the first unified open-source LLM routing library with 16+ routers, a unified CLI, Gradio UI, and 11 datasets. Install via pip install llmrouter-lib. More updates soon! ๐Ÿš€

  • 🚀 [2025-12]: LLMRouter is officially released - ship smarter 🧠, cost-aware 💸 LLM routing with 16+ routers 🧭, a unified llmrouter CLI 🛠️, and a plugin workflow for custom routers 🧩.


🧭 Supported Routers

Single-Round Routers

| Router | Training | Inference | Description | Tutorial |
|--------|----------|-----------|-------------|----------|
| knnrouter | ✅ | ✅ | K-Nearest Neighbors based routing | 📖 |
| svmrouter | ✅ | ✅ | Support Vector Machine based routing | 📖 |
| mlprouter | ✅ | ✅ | Multi-Layer Perceptron based routing | 📖 |
| mfrouter | ✅ | ✅ | Matrix Factorization based routing | 📖 |
| elorouter | ✅ | ✅ | Elo Rating based routing | 📖 |
| routerdc | ✅ | ✅ | Dual Contrastive learning based routing | 📖 |
| automix | ✅ | ✅ | Automatic model mixing | 📖 |
| hybrid_llm | ✅ | ✅ | Hybrid LLM routing strategy | 📖 |
| graphrouter | ✅ | ✅ | Graph-based routing | 📖 |
| causallm_router | ✅ | ✅ | Causal Language Model router | 📖 |
| smallest_llm | N/A | ✅ | Always routes to smallest model | 📖 |
| largest_llm | N/A | ✅ | Always routes to largest model | 📖 |

Multi-Round Routers

| Router | Training | Inference | Description | Tutorial |
|--------|----------|-----------|-------------|----------|
| router_r1 | LINK | ✅ | Pre-trained Router-R1 model for multi-turn conversations | 📖 |

Personalized Routers

| Router | Training | Inference | Description | Tutorial |
|--------|----------|-----------|-------------|----------|
| gmtrouter | ✅ | ✅ | Graph-based personalized router with user preference learning | 📖 |
| personalizedrouter | ✅ | ✅ | GNN-based personalized router with user features | 📖 |

Agentic Routers

| Router | Training | Inference | Description | Tutorial |
|--------|----------|-----------|-------------|----------|
| knnmultiroundrouter | ✅ | ✅ | KNN-based agentic router for complex tasks | 📖 |
| llmmultiroundrouter | N/A | ✅ | LLM-based agentic router for complex tasks | 📖 |

🚀 Get Started

Installation

Install from source

Clone the repository and install in editable mode using a virtual environment (e.g., with anaconda3):

# Clone the repository
git clone https://github.com/ulab-uiuc/LLMRouter.git
cd LLMRouter

# Create and activate virtual environment
conda create -n llmrouter python=3.10
conda activate llmrouter

# Install the package (base installation)
pip install -e .

# Optional: Install with RouterR1 support (requires GPU)
# RouterR1 is tested with vllm==0.6.3 (torch==2.4.0); the extra pins these versions.
pip install -e ".[router-r1]"

# Optional: Install all optional dependencies
pip install -e ".[all]"

Install from PyPI

pip install llmrouter-lib

🔑 Setting Up API Keys

LLMRouter requires API keys to make LLM API calls for inference, chat, and data generation. Set the API_KEYS environment variable using one of the following formats:

💡 Free NVIDIA API Keys: The NVIDIA endpoints currently used in LLMRouter offer free API keys. To get started, visit https://build.nvidia.com/ to create an account; you can then generate API keys at no cost.

Service-Specific Dict Format (recommended for multiple providers)

Use this format when you have models from different service providers (e.g., NVIDIA, OpenAI, Anthropic) and want to use different API keys for each provider:

export API_KEYS='{"NVIDIA": "nvidia-key-1,nvidia-key-2", "OpenAI": ["openai-key-1", "openai-key-2"], "Anthropic": "anthropic-key-1"}'

Dict Format Details:

  • Keys: Service provider names (must match the service field in your LLM candidate JSON)
  • Values: Can be:
    • Comma-separated string: "key1,key2,key3"
    • JSON array: ["key1", "key2", "key3"]
    • Single string: "key1"
  • Service Matching: The system automatically matches the service field from your LLM candidate JSON to select the appropriate API keys
  • Round-Robin: Each service maintains its own round-robin counter for load balancing
  • Error Handling: If a service is not found in the dict, a clear error message will be raised with available services listed
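The key-handling rules above can be sketched in a few lines of Python. This is a minimal illustration of the described behavior, not LLMRouter's actual implementation; `parse_api_keys` and `next_key` are hypothetical helper names:

```python
import itertools
import json

def parse_api_keys(raw: str) -> dict:
    """Normalize the API_KEYS dict: every value becomes a list of keys."""
    parsed = {}
    for service, keys in json.loads(raw).items():
        if isinstance(keys, str):  # comma-separated string or single key
            keys = [k.strip() for k in keys.split(",") if k.strip()]
        parsed[service] = keys
    return parsed

_counters: dict = {}  # one round-robin cycle per service

def next_key(keys_by_service: dict, service: str) -> str:
    """Pick the next key for a service, round-robin; clear error if unknown."""
    if service not in keys_by_service:
        raise KeyError(f"Service {service!r} not found; "
                       f"available: {sorted(keys_by_service)}")
    cycle = _counters.setdefault(service,
                                 itertools.cycle(keys_by_service[service]))
    return next(cycle)

keys = parse_api_keys('{"NVIDIA": "nvidia-key-1,nvidia-key-2", '
                      '"OpenAI": ["openai-key-1"]}')
print(next_key(keys, "NVIDIA"))  # nvidia-key-1
print(next_key(keys, "NVIDIA"))  # nvidia-key-2
```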

Example LLM Candidate JSON with service field:

{
  "qwen2.5-7b-instruct": {
    "service": "NVIDIA",
    "model": "qwen/qwen2.5-7b-instruct",
    "api_endpoint": "https://integrate.api.nvidia.com/v1"
  },
  "gpt-4": {
    "service": "OpenAI",
    "model": "gpt-4",
    "api_endpoint": "https://api.openai.com/v1"
  }
}

Legacy Formats (for single provider or backward compatibility)

JSON Array Format (for multiple keys from same provider):

export API_KEYS='["your-key-1", "your-key-2", "your-key-3"]'

Comma-Separated Format (alternative for multiple keys):

export API_KEYS='key1,key2,key3'

Single Key (for one API key):

export API_KEYS='your-api-key'

Notes:

  • API keys are used for inference, chat interface, and data generation (Step 3 of the pipeline)
  • Multiple keys enable automatic load balancing across API calls
  • When using dict format, ensure the service field in your LLM candidate JSON matches the keys in your API_KEYS dict
  • The environment variable must be set before running inference, chat, or data generation commands
  • For persistent setup, add the export command to your shell profile (e.g., ~/.bashrc or ~/.zshrc)

๐ŸŒ Configuring API Endpoints

API endpoints can be specified at two levels, resolved in priority order:

  1. Per-Model (highest priority): api_endpoint field in the LLM candidate JSON (default_llm.json)
  2. Router-Level (fallback): api_endpoint field in the router YAML config

If neither is specified, a descriptive error is raised.

LLM Candidate JSON (per-model endpoints):

{
  "qwen2.5-7b-instruct": {
    "model": "qwen/qwen2.5-7b-instruct",
    "api_endpoint": "https://integrate.api.nvidia.com/v1",
    ...
  },
  "custom-model": {
    "model": "custom/model-name",
    "api_endpoint": "https://api.customprovider.com/v1",
    ...
  }
}

Router YAML (default endpoint):

api_endpoint: 'https://integrate.api.nvidia.com/v1'  # Fallback for all models

Benefits: Different models can use different providers; easy migration; backward compatible with router configs.

For details, see Data Generation Pipeline documentation.
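The resolution order above can be sketched as follows. This is an illustrative helper under the documented rules, not the library's internal function; the name `resolve_endpoint` is an assumption:

```python
def resolve_endpoint(model_name: str, llm_data: dict,
                     router_config: dict) -> str:
    """Per-model api_endpoint wins; the router-level one is the fallback."""
    endpoint = llm_data.get(model_name, {}).get("api_endpoint")
    if endpoint:
        return endpoint
    endpoint = router_config.get("api_endpoint")
    if endpoint:
        return endpoint
    raise ValueError(
        f"No api_endpoint for {model_name!r}: set it in the LLM candidate "
        "JSON or in the router YAML config")

llm_data = {"qwen2.5-7b-instruct":
                {"api_endpoint": "https://integrate.api.nvidia.com/v1"},
            "custom-model": {}}
router_config = {"api_endpoint": "https://api.customprovider.com/v1"}
print(resolve_endpoint("qwen2.5-7b-instruct", llm_data, router_config))
print(resolve_endpoint("custom-model", llm_data, router_config))
```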

๐Ÿ–ฅ๏ธ Using Local LLM Models

LLMRouter supports locally hosted LLM inference servers that provide OpenAI-compatible APIs (e.g., Ollama, vLLM, SGLang). For local providers, you can use an empty string "" as the API key value - the system automatically detects localhost endpoints and handles authentication accordingly.

Example with Ollama:

export API_KEYS='{"Ollama": ""}'
{
  "gemma3": {
    "size": "3B",
    "feature": "Gemma 3B model hosted locally via Ollama",
    "input_price": 0.0,
    "output_price": 0.0,
    "model": "gemma3",
    "service": "Ollama",
    "api_endpoint": "http://localhost:11434/v1"
  }
}

Important: Use the /v1 endpoint (OpenAI-compatible), not the native API endpoints. Empty API keys are accepted only for localhost endpoints (localhost or 127.0.0.1), which are detected automatically.
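The localhost detection described above can be approximated like this (a sketch of the documented behavior; the actual library logic may differ in details):

```python
from urllib.parse import urlparse

def is_local_endpoint(endpoint: str) -> bool:
    """True for endpoints served from the local machine."""
    return urlparse(endpoint).hostname in ("localhost", "127.0.0.1")

def validate_key(api_key: str, endpoint: str) -> None:
    """Allow an empty API key only for local endpoints."""
    if not api_key and not is_local_endpoint(endpoint):
        raise ValueError(f"API key required for remote endpoint {endpoint!r}")

validate_key("", "http://localhost:11434/v1")  # OK: local Ollama server
```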

🧪 Testing Model Availability

You can test the availability of different candidate models using the following curl commands. This is useful for verifying that your API keys work correctly and that specific models are accessible:

Note: If you're using the dict format for API_KEYS, extract the NVIDIA key first (e.g., using echo $API_KEYS | python3 -c "import sys, json; print(json.load(sys.stdin)['NVIDIA'].split(',')[0])"), or set a temporary variable with your NVIDIA API key.

# Set a single API key for this test. If API_KEYS holds multiple keys
# (dict or JSON array format), extract one key first; the Authorization
# header below expects a single key, not the full API_KEYS value.
API_KEY="your-api-key"

# Example API endpoint - adjust based on your configuration
# This example uses NVIDIA's endpoint, but you should use the endpoint
# specified in your LLM candidate JSON or router config
API_ENDPOINT="https://integrate.api.nvidia.com/v1/chat/completions"

# Example model list - adjust based on your LLM candidate configuration
# These are example models; replace with the actual model names/IDs
# from your LLM candidate JSON file
MODELS=(
  "qwen/qwen2.5-7b-instruct"
  "meta/llama-3.1-8b-instruct"
  "mistralai/mistral-7b-instruct-v0.3"
  "nvidia/llama-3.3-nemotron-super-49b-v1"
  "mistralai/mixtral-8x7b-instruct-v0.1"
  "mistralai/mixtral-8x22b-instruct-v0.1"
)

SYSTEM_PROMPT="Hello."
PROMPT="Hello."

for MODEL in "${MODELS[@]}"; do
  echo "===== $MODEL ====="

  curl "$API_ENDPOINT" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $API_KEY" \
    -d "{
      \"model\": \"$MODEL\",
      \"messages\": [
        {
          \"role\": \"system\",
          \"content\": \"$SYSTEM_PROMPT\"
        },
        {
          \"role\": \"user\",
          \"content\": \"$PROMPT\"
        }
      ],
      \"temperature\": 0.8,
      \"max_tokens\": 200
    }"

  echo
done

This script will test each model in the list and display the response, helping you verify which models are available and working with your API key.

📊 Preparing Training Data

LLMRouter includes a complete data generation pipeline that transforms raw benchmark datasets into formatted routing data with embeddings. The pipeline supports 11 diverse benchmark datasets including Natural QA, Trivia QA, MMLU, GPQA, MBPP, HumanEval, GSM8K, CommonsenseQA, MATH, OpenbookQA, and ARC-Challenge.

💡 Multimodal Integration: Learn how to incorporate complex multimodal tasks (Video/Image + Text) into LLMRouter by checking our Multimodal Task Guide. We currently support 5 multimodal tasks across 3 datasets (Geometry3K, MathVista, Charades-Ego).

Pipeline Overview

The data generation pipeline consists of three main steps:

  1. Generate Query Data - Extract queries from benchmark datasets and create train/test split JSONL files
  2. Generate LLM Embeddings - Create embeddings for LLM candidates from their metadata
  3. API Calling & Evaluation - Call LLM APIs, evaluate responses, and generate unified embeddings + routing data

Quick Start

Start with the sample configuration file:

# Step 1: Generate query data
python llmrouter/data/data_generation.py --config llmrouter/data/sample_config.yaml

# Step 2: Generate LLM embeddings
python llmrouter/data/generate_llm_embeddings.py --config llmrouter/data/sample_config.yaml

# Step 3: API calling & evaluation (requires API_KEYS - see "Setting Up API Keys" section above)
python llmrouter/data/api_calling_evaluation.py --config llmrouter/data/sample_config.yaml --workers 100

Output Files

The pipeline generates the following files:

  • Query Data (JSONL): query_data_train.jsonl and query_data_test.jsonl - Query data with train/test split
  • LLM Embeddings (JSON): default_llm_embeddings.json - LLM metadata with embeddings
  • Query Embeddings (PyTorch): query_embeddings_longformer.pt - Unified embeddings for all queries
  • Routing Data (JSONL): default_routing_train_data.jsonl and default_routing_test_data.jsonl - Complete routing data with model responses, performance scores, and token usage

Example routing data entry:

{
  "task_name": "gsm8k",
  "query": "Janet has 4 apples. She gives 2 to Bob. How many does she have left?",
  "ground_truth": "2",
  "metric": "GSM8K",
  "model_name": "llama3-chatqa-1.5-8b",
  "response": "Janet has 4 apples and gives 2 to Bob, so she has 4 - 2 = 2 apples left.",
  "performance": 1.0,
  "embedding_id": 42,
  "token_num": 453
}
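Because the routing data is plain JSONL, it is easy to inspect with a few lines of Python. The snippet below computes mean performance per model from a couple of inlined sample rows; the field names follow the example entry above, and in practice you would read default_routing_train_data.jsonl line by line:

```python
import json
from collections import defaultdict

def mean_performance(rows: list[dict]) -> dict:
    """Average the performance score per model."""
    totals = defaultdict(lambda: [0.0, 0])
    for row in rows:
        totals[row["model_name"]][0] += row["performance"]
        totals[row["model_name"]][1] += 1
    return {model: s / n for model, (s, n) in totals.items()}

# Inlined sample rows standing in for the generated JSONL file
jsonl = (
    '{"model_name": "llama3-chatqa-1.5-8b", "performance": 1.0}\n'
    '{"model_name": "llama3-chatqa-1.5-8b", "performance": 0.0}\n'
    '{"model_name": "qwen2.5-7b-instruct", "performance": 1.0}'
)
rows = [json.loads(line) for line in jsonl.splitlines()]
print(mean_performance(rows))  # {'llama3-chatqa-1.5-8b': 0.5, 'qwen2.5-7b-instruct': 1.0}
```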

Configuration

All paths and parameters are controlled via YAML configuration. The sample config file (llmrouter/data/sample_config.yaml) references the example data directory and can be used as-is or customized for your setup.

Note: Step 3 requires API keys for calling LLM services. See the Setting Up API Keys section above for configuration details.

For complete documentation including detailed file formats, embedding mapping system, configuration options, and troubleshooting, see llmrouter/data/README.md.

Training a Router

Before training, ensure you have prepared your data using the Data Generation Pipeline or use the example data in data/example_data/.

Train various router models with your configuration:

# Train KNN router
llmrouter train --router knnrouter --config configs/model_config_train/knnrouter.yaml

# Train MLP router with GPU
CUDA_VISIBLE_DEVICES=2 llmrouter train --router mlprouter --config configs/model_config_train/mlprouter.yaml --device cuda

# Train MF router quietly
CUDA_VISIBLE_DEVICES=1 llmrouter train --router mfrouter --config configs/model_config_train/mfrouter.yaml --device cuda --quiet

Running Inference

Perform inference with trained routers (requires API keys - see Setting Up API Keys section):

# Single query inference
llmrouter infer --router knnrouter --config config.yaml --query "What is machine learning?"

# Batch inference from file
llmrouter infer --router knnrouter --config config.yaml --input queries.txt --output results.json

# Route only (without calling LLM API - no API keys needed)
llmrouter infer --router knnrouter --config config.yaml --query "Hello" --route-only

# Custom generation parameters
llmrouter infer --router knnrouter --config config.yaml --query "Explain AI" --temp 0.7 --max-tokens 2048 --verbose

Input file formats supported: .txt (one query per line), .json (list of strings or objects with "query" field), .jsonl (one JSON object per line).
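For reference, the three input formats can be generated like this (a small sketch; the file names match the CLI examples above, and a temporary directory is used so nothing is overwritten):

```python
import json
import tempfile
from pathlib import Path

queries = ["What is machine learning?", "Explain quantum computing"]
out_dir = Path(tempfile.mkdtemp())

# .txt: one query per line
(out_dir / "queries.txt").write_text("\n".join(queries) + "\n")

# .json: a list of strings, or objects with a "query" field
(out_dir / "queries.json").write_text(
    json.dumps([{"query": q} for q in queries], indent=2))

# .jsonl: one JSON object per line
with open(out_dir / "queries.jsonl", "w") as f:
    for q in queries:
        f.write(json.dumps({"query": q}) + "\n")
```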

Interactive Chat Interface

📱 Quick Preview: Animated overview of the LLMRouter chat interface showing real-time routing and model selection.


Launch the chat interface (requires API keys - see Setting Up API Keys section):

# Basic chat interface
llmrouter chat --router knnrouter --config config.yaml

# Custom host and port
llmrouter chat --router knnrouter --config config.yaml --host 0.0.0.0 --port 7860

# With public sharing link
llmrouter chat --router knnrouter --config config.yaml --share

# Specify query mode
llmrouter chat --router knnrouter --config config.yaml --mode full_context --top_k 5

Query Modes:

  • current_only: Routes based on current query only (default)
  • full_context: Combines all chat history with current query
  • retrieval: Retrieves top-k similar historical queries for context
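How each mode assembles the text passed to the router can be sketched as follows (illustrative only; `retrieve_top_k` stands in for the retrieval step and is an assumed callable, not an LLMRouter API):

```python
def build_routing_query(mode, history, current, retrieve_top_k=None):
    """Compose the text the router sees, depending on the query mode."""
    if mode == "current_only":
        return current
    if mode == "full_context":
        return "\n".join(history + [current])
    if mode == "retrieval":
        # retrieve_top_k(current) returns the top-k similar past queries
        return "\n".join(retrieve_top_k(current) + [current])
    raise ValueError(f"Unknown query mode: {mode!r}")

history = ["What is Python?", "Explain decorators."]
print(build_routing_query("full_context", history, "Show an example."))
```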

Direct Script Execution

You can also run the CLI scripts directly:

# Training
python -m llmrouter.cli.router_train --router knnrouter --config config.yaml

# Inference
python -m llmrouter.cli.router_inference --router knnrouter --config config.yaml --query "Hello"

# Chat
python -m llmrouter.cli.router_chat --router knnrouter --config config.yaml

🔧 Creating Your Own Routers

LLMRouter supports a plugin system that allows you to add custom router implementations without modifying the core codebase. This makes it easy to experiment with new routing strategies or domain-specific routers.

Quick Start

1. Create your router directory:

mkdir -p custom_routers/my_router

2. Implement your router (custom_routers/my_router/router.py):

from llmrouter.models.meta_router import MetaRouter
import torch.nn as nn

class MyRouter(MetaRouter):
    """Your custom router implementation."""

    def __init__(self, yaml_path: str):
        # Initialize with a model (can be nn.Identity() for simple routers)
        model = nn.Identity()
        super().__init__(model=model, yaml_path=yaml_path)

        # Get available LLM names from config
        self.llm_names = list(self.llm_data.keys())

    def route_single(self, query_input: dict) -> dict:
        """Route a single query to the best LLM."""
        query = query_input['query']

        # Your custom routing logic here
        # Example: route based on query length
        selected_llm = (self.llm_names[0] if len(query) < 50
                       else self.llm_names[-1])

        return {
            "query": query,
            "model_name": selected_llm,
            "predicted_llm": selected_llm,
        }

    def route_batch(self, batch: list) -> list:
        """Route multiple queries."""
        return [self.route_single(q) for q in batch]

3. Create configuration (custom_routers/my_router/config.yaml):

data_path:
  llm_data: 'data/example_data/llm_candidates/default_llm.json'

hparam:
  # Your hyperparameters here

# Optional: Default API endpoint (used as fallback if models don't specify their own)
# Individual models can override this by specifying api_endpoint in the llm_data JSON file
api_endpoint: 'https://integrate.api.nvidia.com/v1'

4. Use your custom router (same as built-in routers!):

# Inference
llmrouter infer --router my_router \
  --config custom_routers/my_router/config.yaml \
  --query "What is machine learning?"

# List all routers (including custom ones)
llmrouter list-routers

Plugin Discovery

Custom routers are automatically discovered from:

  • ./custom_routers/ (recommended - project directory)
  • ~/.llmrouter/plugins/ (user home directory)
  • $LLMROUTER_PLUGINS environment variable (colon-separated paths)
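Discovery over those three locations might look like this (a sketch of the documented search order; LLMRouter's actual loader may differ in details):

```python
import os
from pathlib import Path

def plugin_dirs() -> list:
    """Candidate plugin directories, in the documented search order."""
    dirs = [Path("custom_routers"),
            Path.home() / ".llmrouter" / "plugins"]
    env = os.environ.get("LLMROUTER_PLUGINS", "")
    dirs += [Path(p) for p in env.split(":") if p]
    # Keep only directories that actually exist
    return [d for d in dirs if d.is_dir()]
```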

Example Routers

LLMRouter includes example custom routers you can learn from:

RandomRouter - Simple baseline that randomly selects an LLM

llmrouter infer --router randomrouter \
  --config custom_routers/randomrouter/config.yaml \
  --query "Hello world"

ThresholdRouter - Advanced trainable router with difficulty estimation

# Train the router
llmrouter train --router thresholdrouter \
  --config custom_routers/thresholdrouter/config.yaml

# Use for inference
llmrouter infer --router thresholdrouter \
  --config custom_routers/thresholdrouter/config.yaml \
  --query "Explain quantum computing"

Documentation

For detailed guides on creating custom routers:

Common Routing Patterns

Rule-based routing:

def route_single(self, query_input):
    query = query_input['query'].lower()
    if 'code' in query:
        return {"model_name": "code-specialist"}
    elif len(query) < 50:
        return {"model_name": "small-fast-model"}
    else:
        return {"model_name": "large-capable-model"}

Embedding-based routing:

from llmrouter.utils import get_longformer_embedding

def route_single(self, query_input):
    embedding = get_longformer_embedding(query_input['query'])
    # Use embedding similarity to select best model
    selected = self._find_best_model(embedding)
    return {"model_name": selected}

Cost-optimized routing:

def route_single(self, query_input):
    difficulty = self._estimate_difficulty(query_input)
    # Select the cheapest model that can handle the estimated difficulty
    for model_name, info in sorted(self.llm_data.items(),
                                   key=lambda x: x[1]['cost']):
        if info['capability'] >= difficulty:
            return {"model_name": model_name}
    # Fallback: no model meets the estimate, so use the most capable one
    return {"model_name": max(self.llm_data,
                              key=lambda m: self.llm_data[m]['capability'])}

๐Ÿ“ Adding Your Own Tasks

LLMRouter supports custom task definitions that allow you to add new task types with custom prompt templates and evaluation metrics. Custom tasks are automatically discovered and integrated into the data generation and evaluation pipeline.

Quick Start

1. Create a task formatter (custom_tasks/my_tasks.py):

from llmrouter.utils.prompting import register_prompt
from llmrouter.prompts import load_prompt_template

@register_prompt('my_task', default_metric='my_metric')
def format_my_task_prompt(sample_data):
    system_prompt = load_prompt_template("task_my_task")
    user_query = f"Question: {sample_data.get('query', '')}"
    return {"system": system_prompt, "user": user_query}

2. Create a prompt template (custom_tasks/task_prompts/task_my_task.yaml):

template: |
  You are an expert at [task description]. [Instructions].

3. Register a custom metric (optional):

from llmrouter.evaluation import evaluation_metric

@evaluation_metric('my_metric')
def my_metric(prediction: str, ground_truth: str, **kwargs) -> float:
    return 1.0 if prediction == ground_truth else 0.0

4. Use your custom task:

import custom_tasks.my_tasks  # Import triggers registration

from llmrouter.utils import generate_task_query
from llmrouter.utils.evaluation import calculate_task_performance

# Generate prompt
prompt = generate_task_query('my_task', {'query': '...'})

# Evaluate (metric automatically inferred from task)
score = calculate_task_performance(
    prediction="...", 
    ground_truth="...", 
    task_name="my_task"
)

Documentation

For detailed guides on creating custom tasks:

🎥 Hands-on: Multi-View Video Tasks

Follow our step-by-step walkthrough in the Charades-Ego Integration Guide to process paired egocentric videos, generate VLM-based features, and train routers for Activity, Object, and Verb recognition.

🔌 OpenClaw Router (OpenClaw Integration)

OpenClaw Router is an OpenAI-compatible API server that brings LLMRouter's intelligent routing to production environments. It integrates seamlessly with OpenClaw, enabling you to deploy LLM routing via Slack, Discord, and other messaging platforms.

Why OpenClaw Router?

| Feature | Benefit |
|---------|---------|
| OpenAI-Compatible API | Drop-in replacement for any OpenAI client (/v1/chat/completions) |
| All Routing Strategies | Use any of the 16+ LLMRouter strategies (KNN, SVM, MLP, LLM-based, etc.) |
| Multimodal Understanding | Process images, audio, and video; convert to text for routing decisions |
| Routing Memory | Persist query→model history; retrieve similar past routes for better decisions |
| Streaming Support | Full streaming responses with optional [model_name] prefix |
| Multi-Provider | Route to Together AI, NVIDIA, OpenAI, Anthropic, or local models |

Architecture

┌─────────────────┐     ┌──────────────────────┐     ┌──────────────────────┐
│  Slack/Discord  │────▶│   OpenClaw Gateway   │────▶│   OpenClaw Router    │
│  (Mobile/Web)   │     │   (Socket Mode)      │     │   (Port 8000)        │
└─────────────────┘     └──────────────────────┘     └──────────┬───────────┘
                                                                │
                ┌───────────────────────────────┬───────────────┴───────────────┐
                ▼                               ▼                               ▼
       ┌─────────────────┐             ┌─────────────────┐             ┌─────────────────┐
       │   Fast Model    │             │ Balanced Model  │             │ Powerful Model  │
       │   (e.g. 8B)     │             │   (e.g. 70B)    │             │  (e.g. 405B)    │
       └─────────────────┘             └─────────────────┘             └─────────────────┘

Quick Start

1. Configure OpenClaw Router (openclaw_router/config.yaml):

serve:
  host: "0.0.0.0"
  port: 8000
  show_model_prefix: true

router:
  strategy: llm  # or: random, round_robin, rules, llmrouter
  provider: together
  base_url: https://api.together.xyz/v1
  model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"

api_keys:
  together: ${TOGETHER_API_KEY}

llms:
  llama-3.1-8b:
    provider: together
    model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"
    base_url: https://api.together.xyz/v1
    description: "Fast responses"

  llama-3.3-70b:
    provider: together
    model: "meta-llama/Llama-3.3-70B-Instruct-Turbo"
    base_url: https://api.together.xyz/v1
    description: "Complex reasoning"

2. Start the server:

# Using the startup script (recommended - also starts OpenClaw gateway)
./scripts/start-openclaw.sh

# Or directly via CLI
llmrouter serve --config openclaw_router/config.yaml

# With ML-based router
llmrouter serve --config openclaw_router/config.yaml --router knnrouter

3. Test the API:

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "messages": [{"role": "user", "content": "Explain quantum computing"}]
  }'

Optional Features

Routing Memory (retrieval-augmented routing):

memory:
  enabled: true
  path: "${HOME}/.llmrouter/openclaw_memory.jsonl"
  top_k: 10
  retriever_model: "facebook/contriever-msmarco"
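Conceptually, the routing memory is an append-only JSONL of query→model decisions with a retrieval step on top. A toy version using naive word overlap in place of the configured retriever model (`facebook/contriever-msmarco`) could look like this; `remember` and `recall` are hypothetical names:

```python
import json
from pathlib import Path

def remember(path: str, query: str, model_name: str) -> None:
    """Append one routing decision to the JSONL memory file."""
    with open(path, "a") as f:
        f.write(json.dumps({"query": query, "model": model_name}) + "\n")

def recall(path: str, query: str, top_k: int = 10) -> list:
    """Return the top-k past entries most similar to the query.
    Similarity here is word overlap, standing in for the real retriever."""
    rows = [json.loads(line)
            for line in Path(path).read_text().splitlines() if line]
    words = set(query.lower().split())
    rows.sort(key=lambda r: len(words & set(r["query"].lower().split())),
              reverse=True)
    return rows[:top_k]
```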

Media Understanding (multimodal support):

media:
  enabled: true
  vision_model: "Qwen/Qwen3-VL-8B-Instruct"
  audio_model: "openai/whisper-large-v3"

Documentation

For complete setup instructions including Slack/Discord integration:

๐Ÿ—บ๏ธ TODO

  • Improve personalized routers: stronger user profiling, cold-start strategies, and online feedback updates.
  • Integrate a multimodal router: support image/audio inputs and route by modality + task type to the right multimodal model.
  • Add continual/online learning to adapt routers to domain drift (e.g., periodic re-training + feedback loops).

๐Ÿ™ Acknowledgments

LLMRouter builds upon the excellent research from the community. We gratefully acknowledge the following works that inspired our router implementations:

  • RouteLLM - Learning to Route LLMs with Preference Data (ICLR 2025)
  • RouterDC - Query-Based Router by Dual Contrastive Learning (NeurIPS 2024)
  • AutoMix - Automatically Mixing Language Models (NeurIPS 2024)
  • Hybrid LLM - Cost-Efficient and Quality-Aware Query Routing (ICLR 2024)
  • GraphRouter - A Graph-based Router for LLM Selections (ICLR 2025)
  • GMTRouter - Personalized LLM Router over Multi-turn User Interactions
  • PersonalizedRouter - Personalized LLM Routing via Graph-based User Preference Modeling
  • Router-R1 - Teaching LLMs Multi-Round Routing and Aggregation via RL (NeurIPS 2025)
  • FusionFactory - Fusing LLM Capabilities with Multi-LLM Log Data


๐Ÿค Contribution

We warmly welcome contributions from the community. LLMRouter is a living, extensible research framework, and its impact grows through the creativity and expertise of its contributors.

If you have developed a new routing strategy, learning objective, training paradigm, or evaluation protocol, we strongly encourage you to submit a pull request to integrate it into LLMRouter. All accepted contributions are explicitly credited, documented, and made available to a broad research and practitioner audience.

Contributing to LLMRouter is more than adding code. It is an opportunity to increase the visibility, adoption, and long-term impact of your work within the LLM systems community. Together, we aim to build the most comprehensive and extensible open-source library for LLM routing.

Notable contributions may be highlighted in documentation, examples, benchmarks, or future releases.


Star History

Star History Chart

📚 Citation

If you find LLMRouter useful for your research or projects, please cite it as:

@misc{llmrouter2025,
  title        = {LLMRouter: An Open-Source Library for LLM Routing},
  author       = {Tao Feng and Haozhen Zhang and Zijie Lei and Haodong Yue and Chongshan Lin and Ge Liu and Jiaxuan You},
  year         = {2025},
  howpublished = {\url{https://github.com/ulab-uiuc/LLMRouter}},
  note         = {GitHub repository}
}
