Router-Maestro

Multi-model LLM router with OpenAI-compatible and Anthropic-compatible APIs. Route LLM requests across GitHub Copilot, OpenAI, Anthropic, and custom providers with intelligent fallback and priority-based selection.

TL;DR

Use GitHub Copilot's models (Claude, GPT-4o, o3-mini) with Claude Code or any OpenAI/Anthropic-compatible client.

Router-Maestro acts as a proxy that gives you access to models from multiple providers through a unified API. Authenticate once with GitHub Copilot, and use its models anywhere that supports OpenAI or Anthropic APIs.

Features

  • Multi-provider support: GitHub Copilot (OAuth), OpenAI, Anthropic, and custom OpenAI-compatible endpoints
  • Intelligent routing: Priority-based model selection with automatic fallback on failure
  • Dual API compatibility: Both OpenAI (/v1/...) and Anthropic (/v1/messages) API formats
  • Gemini API compatibility: Gemini REST API format (/api/gemini/v1beta/...) for Gemini CLI/SDK
  • Cross-provider translation: Seamlessly route OpenAI requests to Anthropic providers and vice versa
  • Configuration hot-reload: Auto-reload config files every 5 minutes without server restart
  • CLI management: Full command-line interface for configuration and server control
  • Docker ready: Production-ready Docker images with Traefik integration

Quick Start

Get up and running in 4 steps:

https://github.com/user-attachments/assets/8f60ec7a-4fbe-4342-9408-084073a4d48d

1. Start the Server

Docker (recommended)

docker run -d -p 8080:8080 \
  -v ~/.local/share/router-maestro:/home/maestro/.local/share/router-maestro \
  -v ~/.config/router-maestro:/home/maestro/.config/router-maestro \
  likanwen/router-maestro:latest

Install locally

pip install router-maestro
router-maestro server start --port 8080

2. Set Context (for Docker or Remote)

When the server runs in Docker or on a remote VPS, set up a context so the local CLI can communicate with the containerized server:

pip install router-maestro  # Install CLI locally
router-maestro context add docker --endpoint http://localhost:8080
router-maestro context set docker

What's a context? A context is a named connection profile (endpoint + API key) that lets you manage local or remote Router-Maestro servers. See Contexts for details.

3. Authenticate with GitHub Copilot

router-maestro auth login github-copilot

# Follow the prompts:
#   1. Visit https://github.com/login/device
#   2. Enter the displayed code
#   3. Authorize "GitHub Copilot Chat"

4. Configure Your CLI Tool

Claude Code

router-maestro config claude-code
# Follow the wizard to select models

OpenAI Codex (CLI, Extension, App)

router-maestro config codex
# Follow the wizard to select models

Gemini CLI

router-maestro config gemini
# Follow the wizard to select models

After configuration, set the API key environment variable:

# Get your API key
router-maestro server show-key

# Set the environment variable (add to your shell profile)
export ROUTER_MAESTRO_API_KEY="your-api-key-here"

Done! Now run claude, codex, or gemini and your requests will route through Router-Maestro.
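
To verify the setup end to end, you can also send a request straight to the server. A minimal smoke test using the openai Python SDK, assuming the server listens on localhost:8080 and ROUTER_MAESTRO_API_KEY is exported as above:

import os

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key=os.environ["ROUTER_MAESTRO_API_KEY"],
)

# One round-trip through Router-Maestro via the OpenAI-compatible endpoint
response = client.chat.completions.create(
    model="github-copilot/gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)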

For production deployment, see the Deployment section.

Core Concepts

Model Identification

Models are identified using the format {provider}/{model-id}:

Example                         Description
github-copilot/gpt-4o           GPT-4o via GitHub Copilot
github-copilot/claude-sonnet-4  Claude Sonnet 4 via GitHub Copilot
openai/gpt-4-turbo              GPT-4 Turbo via OpenAI
anthropic/claude-3-5-sonnet     Claude 3.5 Sonnet via Anthropic

Fuzzy matching: You don't need to type exact model IDs. Router-Maestro will fuzzy-match common variations:

You type              Resolves to
Opus 4.6              claude-opus-4-6-20250617
opus-4-6              claude-opus-4-6-20250617
claude-sonnet-4.5     claude-sonnet-4-5-20250929
anthropic/sonnet-4-5  Sonnet 4.5 via Anthropic only

When multiple versions match, the newest (by date suffix) is selected automatically.
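
For example, a fuzzy name can be passed directly as the model parameter. A sketch using the requests library, assuming a local server that accepts the API key as a Bearer token; per the table above, claude-sonnet-4.5 should resolve to claude-sonnet-4-5-20250929:

import os

import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['ROUTER_MAESTRO_API_KEY']}"},
    json={
        "model": "claude-sonnet-4.5",  # fuzzy name, resolved by the router
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json()["model"])  # the resolved model ID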

Auto-Routing

Use the special model name router-maestro for automatic provider selection:

{"model": "router-maestro", "messages": [...]}

The router will try models in priority order and fall back to the next on failure.

Priority & Fallback

Priority determines which model is tried first when using auto-routing.

# Set priorities
router-maestro model priority github-copilot/claude-sonnet-4 --position 1
router-maestro model priority github-copilot/gpt-4o --position 2

# View priorities
router-maestro model priority list

Fallback triggers when a request fails with a retryable error (429, 5xx):

Strategy    Behavior
priority    Try next model in priorities list
same-model  Try same model on different provider
none        Fail immediately

Configure in ~/.config/router-maestro/priorities.json:

{
  "priorities": ["github-copilot/claude-sonnet-4", "github-copilot/gpt-4o"],
  "fallback": {"strategy": "priority", "maxRetries": 2}
}
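
Conceptually, the priority strategy behaves like the loop below. This is an illustrative sketch, not the actual implementation; forward() is a hypothetical stand-in for the provider call:

# Illustrative sketch of priority-based fallback (not the real implementation).
RETRYABLE = {429, 500, 502, 503, 504}  # retryable statuses: 429 and 5xx

def route(request, priorities, max_retries=2):
    retries = 0
    for model in priorities:
        response = forward(request, model)  # hypothetical provider call
        if response.status not in RETRYABLE:
            return response  # success or a non-retryable error: stop here
        retries += 1
        if retries > max_retries:
            break
    raise RuntimeError("All prioritized models failed")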

Cross-Provider Translation

Router-Maestro automatically translates between OpenAI and Anthropic formats:

# Use Anthropic API with OpenAI provider
POST /v1/messages  {"model": "openai/gpt-4o", ...}

# Use OpenAI API with Anthropic provider
POST /v1/chat/completions  {"model": "anthropic/claude-3-5-sonnet", ...}
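
In practice this means an Anthropic client can drive an OpenAI-backed model through the router, and vice versa. A sketch with the anthropic Python SDK, assuming a local server:

import os

from anthropic import Anthropic

client = Anthropic(
    base_url="http://localhost:8080",
    api_key=os.environ["ROUTER_MAESTRO_API_KEY"],
)

# Anthropic API format, OpenAI provider: the router translates the request
message = client.messages.create(
    model="openai/gpt-4o",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
)
print(message.content[0].text)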

Contexts

A context is a named connection profile that stores an endpoint URL and API key. Contexts let you manage multiple Router-Maestro deployments from a single CLI.

Context  Use Case
local    Default context for router-maestro server start
docker   Connect to a local Docker container
my-vps   Connect to a remote VPS deployment

# Add a context
router-maestro context add my-vps --endpoint https://api.example.com --api-key xxx

# Switch contexts
router-maestro context set my-vps

# All CLI commands now target the remote server
router-maestro model list

CLI Reference

Server

Command                   Description
server start --port 8080  Start the server
server stop               Stop the server
server info               Show server status

Authentication

Command                 Description
auth login [provider]   Authenticate with a provider
auth logout <provider>  Remove authentication
auth list               List authenticated providers

Models

Command                                Description
model list                             List available models
model refresh                          Refresh models cache
model priority list                    Show priorities
model priority <model> --position <n>  Set priority
model fallback show                    Show fallback config

Contexts (Remote Management)

Command                                              Description
context show                                         Show current context
context list                                         List all contexts
context set <name>                                   Switch context
context add <name> --endpoint <url> --api-key <key>  Add remote context
context test                                         Test connection

Other

Command             Description
config claude-code  Generate Claude Code settings
config codex        Generate Codex config (CLI/Extension/App)
config gemini       Generate Gemini CLI .env

API Reference

OpenAI-Compatible

# Chat completions
POST /v1/chat/completions
{
  "model": "github-copilot/gpt-4o",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}

# List models
GET /v1/models
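
Since the surface is OpenAI-compatible, existing SDKs work by pointing base_url at the router. A streaming sketch with the openai Python SDK, assuming a local server:

import os

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key=os.environ["ROUTER_MAESTRO_API_KEY"],
)

# Stream tokens as they arrive (stream=True instead of false above)
stream = client.chat.completions.create(
    model="github-copilot/gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")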

Anthropic-Compatible

# Messages
POST /v1/messages
POST /api/anthropic/v1/messages
{
  "model": "github-copilot/claude-sonnet-4",
  "max_tokens": 1024,
  "messages": [{"role": "user", "content": "Hello"}]
}

# Count tokens
POST /v1/messages/count_tokens
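
The token-count endpoint mirrors Anthropic's, so the anthropic SDK can be pointed at the router as well. A sketch assuming a local server and a recent SDK version that exposes messages.count_tokens:

import os

from anthropic import Anthropic

client = Anthropic(
    base_url="http://localhost:8080",
    api_key=os.environ["ROUTER_MAESTRO_API_KEY"],
)

# Hits POST /v1/messages/count_tokens on the router
count = client.messages.count_tokens(
    model="github-copilot/claude-sonnet-4",
    messages=[{"role": "user", "content": "Hello"}],
)
print(count.input_tokens)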

Admin

POST /api/admin/models/refresh   # Refresh model cache

Gemini-Compatible

# Generate content (non-streaming)
POST /api/gemini/v1beta/models/{model}:generateContent
{
  "contents": [{"role": "user", "parts": [{"text": "Hello"}]}]
}

# Stream generate content (SSE)
POST /api/gemini/v1beta/models/{model}:streamGenerateContent?alt=sse
{
  "contents": [{"role": "user", "parts": [{"text": "Hello"}]}]
}

# Count tokens
POST /api/gemini/v1beta/models/{model}:countTokens
{
  "contents": [{"role": "user", "parts": [{"text": "Hello"}]}]
}
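
A sketch of the non-streaming call with requests, assuming a local server; the x-goog-api-key auth header follows the Gemini convention and is an assumption here, as is passing a bare model name and letting fuzzy matching resolve it:

import os

import requests

model = "gpt-4o"  # bare name; the router's fuzzy matching resolves it
resp = requests.post(
    f"http://localhost:8080/api/gemini/v1beta/models/{model}:generateContent",
    headers={"x-goog-api-key": os.environ["ROUTER_MAESTRO_API_KEY"]},
    json={"contents": [{"role": "user", "parts": [{"text": "Hello"}]}]},
)
print(resp.json())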

Configuration

File Locations

Following the XDG Base Directory specification:

Config (~/.config/router-maestro/):
  providers.json   Custom provider definitions
  priorities.json  Model priorities and fallback
  contexts.json    Deployment contexts

Data (~/.local/share/router-maestro/):
  auth.json        OAuth tokens
  server.json      Server state

Custom Providers

Add OpenAI-compatible providers in ~/.config/router-maestro/providers.json:

{
  "providers": {
    "ollama": {
      "type": "openai-compatible",
      "baseURL": "http://localhost:11434/v1",
      "models": {
        "llama3": {"name": "Llama 3"},
        "mistral": {"name": "Mistral 7B"}
      }
    }
  }
}

Set API keys via environment variables (uppercase, hyphens → underscores):

export OLLAMA_API_KEY="sk-..."
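
Once defined, a custom provider's models are addressed like any other, using the {provider}/{model-id} format. A sketch with the openai SDK against a local router, matching the ollama provider defined above:

import os

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key=os.environ["ROUTER_MAESTRO_API_KEY"],
)

# {provider}/{model-id}, where "ollama" comes from providers.json above
response = client.chat.completions.create(
    model="ollama/llama3",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)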

Hot-Reload

Configuration files are automatically reloaded every 5 minutes:

File             Auto-Reload
priorities.json  ✓ (5 min)
providers.json   ✓ (5 min)
auth.json        Requires restart

Force immediate reload:

router-maestro model refresh

Deployment

Docker Deployment

Deploy to a VPS with Docker Compose:

# On your VPS
git clone https://github.com/likanwen/router-maestro.git
cd router-maestro
cp .env.example .env  # Edit with your domain
docker compose up -d

Configure .env:

DOMAIN=api.example.com
CF_DNS_API_TOKEN=your_cloudflare_token  # For HTTPS
ACME_EMAIL=your@email.com
ROUTER_MAESTRO_API_KEY=$(openssl rand -hex 32)

Authenticate inside the container:

docker compose exec router-maestro /bin/sh
router-maestro auth login github-copilot
# Follow OAuth flow, then exit

Remote Management

Manage your VPS deployment from your local machine using contexts:

# Add remote context
router-maestro context add my-vps \
  --endpoint https://api.example.com \
  --api-key your_api_key

# Switch to remote
router-maestro context set my-vps

# Now all commands target the VPS
router-maestro model list

HTTPS with Traefik

The Docker Compose setup includes Traefik for automatic HTTPS via Let's Encrypt with DNS challenge.

For detailed configuration options including:

  • Other DNS providers (Route53, DigitalOcean, etc.)
  • HTTP challenge setup
  • Traefik dashboard configuration

See docs/deployment.md.

License

MIT License - see LICENSE file.

Changelog

See CHANGELOG.md for release history.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
