
"LLM Proxy Server" is OpenAI-compatible http proxy server for inferencing various LLMs capable of working with Google, Anthropic, OpenAI APIs, local PyTorch inference, etc.

Project description


LLM Proxy Server

LLM Proxy Server is an OpenAI-compatible HTTP proxy server for inference across various Large Language Models (LLMs). It provides a unified interface for working with different AI providers through a single API endpoint that follows the OpenAI format. Stream like OpenAI, authenticate with your own API keys, and keep clients unchanged.

✨ Features

  • Provider Agnostic: Connect to OpenAI, Anthropic, Google AI, local models, and more using a single API
  • Unified Interface: Access all models through the standard OpenAI API format
  • Dynamic Routing: Route requests to different LLM providers based on model name patterns
  • Stream Support: Full streaming support for real-time responses
  • API Key Management: Configurable API key validation and access control
  • Easy Configuration: Simple TOML configuration files for setup

🚀 Getting Started

Installation

pip install llm-proxy-server

Quick Start

  1. Create a config.toml file:
host = "0.0.0.0"
port = 8000

[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"

[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"

[routing]
"gpt*" = "openai.*"
"claude*" = "anthropic.*"
"*" = "openai.gpt-3.5-turbo"

[groups.default]
api_keys = ["YOUR_API_KEY_HERE"]
  2. Start the server:
llm-proxy-server
  3. Use it with any OpenAI-compatible client:
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY_HERE",
    base_url="http://localhost:8000/v1"
)

completion = client.chat.completions.create(
    model="gpt-5",  # This will be routed to OpenAI based on config
    messages=[{"role": "user", "content": "Hello, world!"}]
)
print(completion.choices[0].message.content)

Or use the same endpoint with Claude models:

completion = client.chat.completions.create(
    model="claude-opus-4-1-20250805",  # This will be routed to Anthropic based on config
    messages=[{"role": "user", "content": "Hello, world!"}]
)
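
Streaming works the same way as with the OpenAI API. The snippet below is a minimal sketch that reuses the client created above and assumes the standard OpenAI streaming semantics, which the proxy's streaming support is intended to follow:

stream = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Write a haiku about proxies."}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta, per the OpenAI streaming format
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()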

📝 Configuration

LLM Proxy Server is configured through a TOML file that specifies connections, routing rules, and access control.

Basic Structure

host = "0.0.0.0"  # Interface to bind to
port = 8000       # Port to listen on
dev_autoreload = false  # Enable for development

# API key validation function (optional)
check_api_key = "lm_proxy.core.check_api_key"

# LLM Provider Connections
[connections]

[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"

[connections.google]
api_type = "google_ai_studio"
api_key = "env:GOOGLE_API_KEY"

[connections.anthropic]
api_type = "anthropic"
api_key  = "env:ANTHROPIC_API_KEY"

# Routing rules (model_pattern = "connection.model")
[routing]
"gpt*" = "openai.*"     # Route all GPT models to OpenAI
"claude*" = "anthropic.*"  # Route all Claude models to Anthropic
"gemini*" = "google.*"  # Route all Gemini models to Google
"*" = "openai.gpt-3.5-turbo"  # Default fallback

# Access control groups
[groups.default]
api_keys = [
    "KEY1",
    "KEY2"
]

# Optional: request logging
[[loggers]]
class = 'lm_proxy.loggers.BaseLogger'
[loggers.log_writer]
class = 'lm_proxy.loggers.log_writers.JsonLogWriter'
file_name = 'storage/json.log'
[loggers.entry_transformer]
class = 'lm_proxy.loggers.LogEntryTransformer'
completion_tokens = "response.usage.completion_tokens"
prompt_tokens = "response.usage.prompt_tokens"
prompt = "request.messages"
response = "response"
group = "group"
connection = "connection"
api_key_id = "api_key_id"
remote_addr = "remote_addr"
created_at = "created_at"
duration = "duration"
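
If you want to sanity-check a configuration file before starting the server, the standard-library tomllib (Python 3.11+) can parse it. This is just a convenience sketch, not something the project requires:

import tomllib

with open("config.toml", "rb") as f:
    config = tomllib.load(f)

print(config["host"], config["port"])
print("connections:", list(config["connections"]))
print("routing:", config["routing"])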

Environment Variables

You can use environment variables in your configuration file by prefixing values with env:, for example:

[connections.openai]
api_key = "env:OPENAI_API_KEY"

Load these from a .env file or set them in your environment before starting the server.
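
One way to do that from Python is to load the .env file and then launch the server as a child process so it inherits the variables. This sketch assumes the third-party python-dotenv package (pip install python-dotenv), which is not a dependency of LLM Proxy Server:

import subprocess
from dotenv import load_dotenv

load_dotenv()  # reads .env in the current directory into os.environ
subprocess.run(["llm-proxy-server"], check=True)  # child process inherits the environment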

🔑 Proxy API Keys vs. Provider API Keys

LLM Proxy Server uses two distinct types of API keys to keep request handling secure and well separated.

  • Proxy API Key (Virtual API Key, Client API Key):
    A unique key generated and managed within the LLM Proxy Server.
    Clients use these keys to authenticate their requests to the proxy's API endpoints.
    Each Client API Key is associated with a specific group, which defines the scope of access and permissions for the client's requests.
    These keys allow users to securely interact with the proxy without direct access to external service credentials.

  • Provider API Key (Upstream API Key): A key provided by external LLM inference providers (e.g., OpenAI, Anthropic, Mistral, etc.) and configured within the LLM Proxy Server.
    The proxy uses these keys to authenticate and forward validated client requests to the respective external services.
    Provider API Keys remain hidden from end users, ensuring secure and transparent communication with provider APIs.

This distinction ensures a clear separation of concerns: Virtual API Keys manage user authentication and access within the proxy, while Upstream API Keys handle secure communication with external providers.

🔌 API Usage

LLM Proxy Server implements the OpenAI chat completions API endpoint. You can use any OpenAI-compatible client to interact with it.

Endpoint

POST /v1/chat/completions

Request Format

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"}
  ],
  "temperature": 0.7,
  "stream": false
}

Response Format

{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ]
}
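
You don't need the OpenAI SDK to call the endpoint. The sketch below posts the request shown above using the third-party requests library, assuming the Quick Start server on localhost:8000 and the usual OpenAI-style Bearer authorization header:

import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY_HERE"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is the capital of France?"},
        ],
        "temperature": 0.7,
        "stream": False,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])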

🛠️ Advanced Usage

Custom API Key Validation

You can implement your own API key validation function:

# my_validators.py
def validate_api_key(api_key: str) -> str | None:
    """
    Validate an API key and return the group name if valid.
    
    Args:
        api_key: The API key to validate
        
    Returns:
        The name of the group if valid, None otherwise
    """
    if api_key == "secret-key":
        return "admin"
    elif api_key.startswith("user-"):
        return "users"
    return None

Then reference it in your config:

check_api_key = "my_validators.validate_api_key"
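
The validator does not have to hard-code keys. As a further sketch, it could read allowed keys from environment variables; the variable names below are made up for this example, and the returned group names match the groups defined in your config:

import os

def validate_api_key(api_key: str) -> str | None:
    # Comma-separated key lists; LM_PROXY_ADMIN_KEYS and LM_PROXY_USER_KEYS are
    # arbitrary names chosen for this illustration.
    admin_keys = set(filter(None, os.environ.get("LM_PROXY_ADMIN_KEYS", "").split(",")))
    user_keys = set(filter(None, os.environ.get("LM_PROXY_USER_KEYS", "").split(",")))
    if api_key in admin_keys:
        return "admin"
    if api_key in user_keys:
        return "users"
    return None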

Dynamic Model Routing

The routing section allows flexible pattern matching with wildcards:

[routing]
"gpt-4*" = "openai.gpt-4"           # Route gpt-4 requests to OpenAI GPT-4
"gpt-3.5*" = "openai.gpt-3.5-turbo" # Route gpt-3.5 requests to OpenAI
"claude*" = "anthropic.*"           # Pass model name as-is to Anthropic
"gemini*" = "google.*"              # Pass model name as-is to Google
"custom*" = "local.llama-7b"        # Map any "custom*" to a specific local model
"*" = "openai.gpt-3.5-turbo"        # Default fallback for unmatched models

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details. © 2025 Vitalii Stepanenko

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm_proxy_server-0.3.0.tar.gz (12.8 kB)


Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

llm_proxy_server-0.3.0-py3-none-any.whl (16.0 kB)


File details

Details for the file llm_proxy_server-0.3.0.tar.gz.

File metadata

  • Download URL: llm_proxy_server-0.3.0.tar.gz
  • Upload date:
  • Size: 12.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.1

File hashes

Hashes for llm_proxy_server-0.3.0.tar.gz
Algorithm Hash digest
SHA256 5c7759da251489879951ddc09b71a5f65b3b6e92fb185d12255cc4d85cdfc93d
MD5 84fa398c5e1de6ec478490ad083472ca
BLAKE2b-256 40e6cc3864edf20cbb15a1d59b917a55e4208237af5bed3ddfcf0183998a76db

See more details on using hashes here.

File details

Details for the file llm_proxy_server-0.3.0-py3-none-any.whl.

File metadata

File hashes

Hashes for llm_proxy_server-0.3.0-py3-none-any.whl
Algorithm Hash digest
SHA256 4b9d00ba810019cca160ec2d2b9a2348cf07c17741b4c367636670d398712f83
MD5 ef680f46f22c051586ca1b24f3a412c4
BLAKE2b-256 5e1beb8d857e62e0996ed5059147f1d17a9741394fbdef03a174adeb99b20ca9

See more details on using hashes here.
