
v-router

A unified LLM interface that provides automatic fallback between different LLM providers. Route your AI requests seamlessly across Anthropic, OpenAI, Google, and Azure with intelligent failover strategies and a consistent API.

✨ Features

  • 🚀 Automatic Fallback: Seamless switching between models and providers when failures occur
  • 📚 Unified API: Same interface works across all major LLM providers
  • ⚡ Smart Routing: Intelligent model selection based on availability and configuration
  • 🔧 Function Calling: Unified tool calling interface across all providers
  • 🖼️ Multimodal Support: Send images and PDFs with automatic format conversion
  • 🎯 Consistent Responses: Standardized response format regardless of provider
  • ⚙️ Flexible Configuration: Fine-tune parameters, backup models, and provider priorities

📦 Installation

pip install v-router

Or, from a clone of the repository, install with development dependencies:

uv sync --all-extras

🚀 Quick Start

Basic Usage

from v_router import Client, LLM

# Create an LLM configuration
llm_config = LLM(
    model_name="claude-sonnet-4",
    provider="anthropic",
    max_tokens=100,
    temperature=0.7
)

# Create a client
client = Client(llm_config)

# Send a message
response = await client.messages.create(
    messages=[
        {"role": "user", "content": "Hello! Explain quantum computing in one sentence."}
    ]
)

print(f"Response: {response.content[0].text}")
print(f"Model: {response.model}")
print(f"Provider: {response.provider}")

Automatic Fallback

Configure backup models to ensure reliability:

from v_router import Client, LLM, BackupModel

llm_config = LLM(
    model_name="claude-6",  # Primary model (might fail)
    provider="anthropic",
    backup_models=[
        BackupModel(
            model=LLM(model_name="gpt-4o", provider="openai"),
            priority=1
        ),
        BackupModel(
            model=LLM(model_name="gemini-1.5-pro", provider="google"),
            priority=2
        )
    ]
)

client = Client(llm_config)

# If claude-6 fails, automatically tries gpt-4o, then gemini-1.5-pro
response = await client.messages.create(
    messages=[{"role": "user", "content": "What's 2+2?"}]
)
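
Because the response always reports which model and provider actually served the request, you can confirm whether a fallback occurred:

# e.g. "gpt-4o" / "openai" if the primary model failed
print(response.model)
print(response.provider)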

Cross-Provider Switching

Enable automatic cross-provider fallback for the same model:

llm_config = LLM(
    model_name="claude-opus-4",
    provider="vertexai",  # Try Vertex AI first
    try_other_providers=True  # Fall back to direct Anthropic if needed
)

client = Client(llm_config)
response = await client.messages.create(
    messages=[{"role": "user", "content": "Tell me a joke."}]
)

🎯 Provider-Specific Parameters

v-router supports provider-specific features through a flexible parameter system:

from v_router import Client, LLM

# Configure core routing parameters
client = Client(
    llm_config=LLM(
        model_name="claude-opus-4-20250514",
        provider="anthropic",
        max_tokens=32000,
        temperature=1
    )
)

# Pass provider-specific parameters at message creation
response = await client.messages.create(
    messages=[{"role": "user", "content": "Solve this complex problem"}],
    # Provider-specific parameters:
    timeout=600,              # Anthropic: extended timeout
    thinking={                # Anthropic: thinking mode
        "type": "enabled",
        "budget_tokens": 10000
    }
)

Examples by Provider

Anthropic - Thinking mode, timeouts:

response = await client.messages.create(
    messages=[...],
    timeout=600,
    thinking={"type": "enabled", "budget_tokens": 10000},
    top_k=40
)

OpenAI - JSON mode, penalties:

response = await client.messages.create(
    messages=[...],
    response_format={"type": "json_object"},
    frequency_penalty=0.5,
    seed=12345
)

Google - Safety settings:

response = await client.messages.create(
    messages=[...],
    safety_settings=[{
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "threshold": "BLOCK_ONLY_HIGH"
    }]
)

🔧 Function Calling

v-router provides unified function calling across all providers:

from pydantic import BaseModel, Field
from v_router import Client, LLM
from v_router.classes.tools import ToolCall, Tools

# Define tool schema
class WeatherQuery(BaseModel):
    location: str = Field(..., description="City and state, e.g. San Francisco, CA")
    units: str = Field("fahrenheit", description="Temperature units")

# Create tool
weather_tool = ToolCall(
    name="get_weather",
    description="Get current weather for a location",
    input_schema=WeatherQuery.model_json_schema()
)

# Configure LLM with tools
llm_config = LLM(
    model_name="claude-sonnet-4",
    provider="anthropic",
    tools=Tools(tools=[weather_tool])
)

client = Client(llm_config)

# Make request
response = await client.messages.create(
    messages=[{"role": "user", "content": "What's the weather in Paris?"}]
)

# Check for tool calls
if response.tool_use:
    for tool_call in response.tool_use:
        print(f"Tool: {tool_call.name}")
        print(f"Arguments: {tool_call.arguments}")

🖼️ Multimodal Support

Send images and PDFs seamlessly across providers:

from v_router import Client, LLM
from v_router.classes.message import TextContent, ImageContent, DocumentContent

# Create client
client = Client(
    llm_config=LLM(
        model_name="claude-sonnet-4",
        provider="anthropic"
    )
)

# Method 1: Send image by file path (automatic conversion)
response = await client.messages.create(
    messages=[
        {
            "role": "user",
            "content": "/path/to/image.jpg"  # Automatically converted to base64
        },
        {
            "role": "user", 
            "content": "What do you see in this image?"
        }
    ]
)

# Method 2: Send multimodal content with explicit types
import base64
with open("image.jpg", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

response = await client.messages.create(
    messages=[
        {
            "role": "user",
            "content": [
                TextContent(text="Analyze this image:"),
                ImageContent(data=image_data, media_type="image/jpeg")
            ]
        }
    ]
)

print(f"Response: {response.content[0].text}")

🌐 Supported Providers

Provider        Models                                                      Features
Anthropic       Claude 3 (Opus, Sonnet, Haiku), Claude 4 (Opus, Sonnet)     Function calling, Images, PDFs
OpenAI          GPT-4, GPT-4 Turbo, GPT-4.1, GPT-3.5                        Function calling, Images
Google          Gemini Pro, Gemini 1.5 (Pro, Flash), Gemini 2.0 Flash       Function calling, Images, PDFs
Azure OpenAI    GPT-4, GPT-4 Turbo, GPT-4.1, GPT-3.5                        Function calling, Images
Vertex AI       Claude 3/4 & Gemini models via Google Cloud                 Function calling, Images, PDFs

⚙️ Configuration

Environment Variables

Set up authentication for your providers:

# Anthropic
export ANTHROPIC_API_KEY="your-key-here"

# OpenAI
export OPENAI_API_KEY="your-key-here"

# Google AI Studio
export GOOGLE_API_KEY="your-key-here"

# Google Cloud (for Vertex AI)
export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"
export GCP_PROJECT_ID="your-project-id"
export GCP_LOCATION="us-central1"

# Azure OpenAI
export AZURE_OPENAI_API_KEY="your-key-here"
export AZURE_OPENAI_ENDPOINT="your-endpoint"

Model Configuration

v-router uses models.yml to map model names across providers. You can use generic names that automatically map to provider-specific models:

# These all work automatically:
LLM(model_name="claude-sonnet-4", provider="anthropic")      # → claude-sonnet-4-20250514
LLM(model_name="claude-sonnet-4", provider="vertexai")       # → claude-sonnet-4@20250514
LLM(model_name="gpt-4", provider="openai")                   # → gpt-4
LLM(model_name="gemini-1.5-pro", provider="google")          # → gemini-1.5-pro-latest

📝 Response Format

All providers return the same unified response structure:

class Response:
    content: List[Content]          # Text content blocks
    tool_use: List[ToolUse]        # Function calls made
    usage: Usage                   # Token usage info  
    model: str                     # Actual model used
    provider: str                  # Provider used
    raw_response: Any              # Original provider response
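
Because the shape is identical everywhere, downstream code needs no provider-specific branches:

# Same handling regardless of which provider answered
for block in response.content:
    print(block.text)

print(response.usage)  # standardized token usage
print(f"Served by {response.model} via {response.provider}")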

📖 Documentation

Complete documentation is available online:

📚 Full Documentation

Quick Links

  • Getting Started
  • API Reference
  • Guides
  • Examples

Jupyter Notebooks

Explore the examples/ directory for interactive Jupyter notebooks.

🗺️ Development Roadmap

  • Chat Completions: Unified interface across providers
  • Function Calling: Tool calling support
  • Multimodal Support: Images, PDFs, and document processing
  • Streaming: Real-time response streaming
  • AWS Bedrock: Additional provider support
  • JSON Mode: Structured output generation
  • Prompt Caching: Optimization for repeated prompts
  • Ollama Support: Local model integration

🛠️ Development

Setup

# Install with development dependencies
uv sync --all-extras

# Install pre-commit hooks
uv run pre-commit install

Testing

# Run all tests
uv run pytest

# Run specific test file
uv run pytest tests/models/test_llm.py

# Run with verbose output
uv run pytest -v

Code Quality

# Check code style
uv run ruff check .

# Auto-fix issues
uv run ruff check --fix .

# Format code
uv run ruff format .

🏗️ Architecture

v-router follows a clean provider pattern:

  • Client: Main entry point with unified API
  • Router: Handles request routing and fallback logic
  • Providers: Individual provider implementations inheriting from BaseProvider
  • Models: Unified request/response models

Adding a New Provider

  1. Create provider class in src/v_router/providers/
  2. Inherit from BaseProvider
  3. Implement create_message() and name property
  4. Add to PROVIDER_REGISTRY in router.py
  5. Update models.yml with supported models
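
A skeletal sketch of steps 1-3 (the import path and create_message signature are assumptions; check the existing providers in src/v_router/providers/ for the actual interface):

from v_router.providers.base import BaseProvider  # import path assumed


class MyProvider(BaseProvider):
    @property
    def name(self) -> str:
        return "myprovider"

    async def create_message(self, request):
        # Translate the unified request into this provider's SDK call,
        # then map the SDK response back into v-router's unified Response.
        ...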

📄 License

This project is licensed under the MIT License.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.


v-router - Making LLM integration simple, reliable, and unified across all providers.
