anymodel
OpenRouter-compatible LLM router with unified batch support for Python. Self-hosted, zero fees.
Route requests across OpenAI, Anthropic, and Google with a single API. Add any OpenAI-compatible provider. Run as an SDK or standalone HTTP server.
Install
pip install anymodel
Quick Start
Set your API keys as environment variables:
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GOOGLE_API_KEY=AIza...
SDK Usage
import asyncio
from anymodel import AnyModel

async def main():
    client = AnyModel()
    response = await client.chat.completions.create(
        model="anthropic/claude-sonnet-4-6",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response["choices"][0]["message"]["content"])

asyncio.run(main())
Streaming
stream = await client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku"}],
    stream=True,
)
async for chunk in stream:
    content = chunk["choices"][0].get("delta", {}).get("content", "")
    print(content, end="", flush=True)
Supported Providers
Set the env var and go. Models are auto-discovered from each provider's API.
| Provider | Env Var | Example Model |
|---|---|---|
| OpenAI | OPENAI_API_KEY | openai/gpt-4o |
| Anthropic | ANTHROPIC_API_KEY | anthropic/claude-sonnet-4-6 |
| Google | GOOGLE_API_KEY | google/gemini-2.5-pro |
| Mistral | MISTRAL_API_KEY | mistral/mistral-large-latest |
| Groq | GROQ_API_KEY | groq/llama-3.3-70b-versatile |
| DeepSeek | DEEPSEEK_API_KEY | deepseek/deepseek-chat |
| xAI | XAI_API_KEY | xai/grok-3 |
| Together | TOGETHER_API_KEY | together/meta-llama/Llama-3.3-70B-Instruct-Turbo |
| Fireworks | FIREWORKS_API_KEY | fireworks/accounts/fireworks/models/llama-v3p3-70b-instruct |
| Perplexity | PERPLEXITY_API_KEY | perplexity/sonar-pro |
| Ollama | OLLAMA_BASE_URL | ollama/llama3.3 |
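Because models are namespaced by provider prefix, switching providers is just a model-string change. As a sketch, with GROQ_API_KEY set, the same call shape from Quick Start works against Groq:

# Same call shape as the Quick Start; only the model string changes.
# Assumes GROQ_API_KEY is set and client = AnyModel() as above.
response = await client.chat.completions.create(
    model="groq/llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response["choices"][0]["message"]["content"])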
Flex Pricing (OpenAI)
Get 50% off OpenAI requests with flexible latency:
response = await client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    service_tier="flex",
)
Fallback Routing
Try multiple models in order. If one fails, the next is attempted:
response = await client.chat.completions.create(
    model="",
    models=[
        "anthropic/claude-sonnet-4-6",
        "openai/gpt-4o",
        "google/gemini-2.5-pro",
    ],
    route="fallback",
    messages=[{"role": "user", "content": "Hello"}],
)
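OpenAI-compatible responses carry a model field, so you can check which entry in the list actually served the request (assuming anymodel passes it through unchanged):

# The "model" field should name whichever fallback actually answered
# (assuming anymodel preserves the provider's response as-is).
print(response["model"])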
Tool Calling
response = await client.chat.completions.create(
    model="anthropic/claude-sonnet-4-6",
    messages=[{"role": "user", "content": "What's the weather in NYC?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    }],
    tool_choice="auto",
)

for call in response["choices"][0]["message"].get("tool_calls", []):
    print(call["function"]["name"], call["function"]["arguments"])
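To complete the round trip, the usual OpenAI-style convention is to execute the tool locally and send its output back as a tool-role message. A sketch, assuming anymodel accepts the standard message shape; the weather value is a stub for illustration:

import json

# Run the tool, then return its output as a "tool" message tied to the call id.
call = response["choices"][0]["message"]["tool_calls"][0]
args = json.loads(call["function"]["arguments"])
tool_output = {"location": args["location"], "temp_f": 68}  # stub result

followup = await client.chat.completions.create(
    model="anthropic/claude-sonnet-4-6",
    messages=[
        {"role": "user", "content": "What's the weather in NYC?"},
        response["choices"][0]["message"],  # assistant turn with tool_calls
        {"role": "tool", "tool_call_id": call["id"], "content": json.dumps(tool_output)},
    ],
)
print(followup["choices"][0]["message"]["content"])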
Batch Processing
Process many requests with native provider batch APIs or concurrent fallback. OpenAI, Anthropic, and Google batches are processed server-side — OpenAI at 50% cost, Anthropic with async processing for up to 10K requests, Google at 50% cost via batchGenerateContent. Other providers fall back to concurrent execution automatically.
Submit and wait
results = await client.batches.create_and_poll({
    "model": "openai/gpt-4o-mini",
    "requests": [
        {"custom_id": "req-1", "messages": [{"role": "user", "content": "Summarize AI"}]},
        {"custom_id": "req-2", "messages": [{"role": "user", "content": "Summarize ML"}]},
    ],
})

for result in results["results"]:
    print(result["custom_id"], result["response"]["choices"][0]["message"]["content"])
Submit now, check later
# Submit and get the batch ID
batch = await client.batches.create({
    "model": "anthropic/claude-haiku-4-5",
    "requests": [
        {"custom_id": "req-1", "messages": [{"role": "user", "content": "Summarize AI"}]},
    ],
})
print(batch["id"])  # "batch-abc123"

# Check status any time
status = await client.batches.get("batch-abc123")
print(status["status"])  # "pending", "processing", "completed"

# Wait for results when ready
results = await client.batches.poll("batch-abc123")

# List all batches
all_batches = await client.batches.list()

# Cancel a batch
await client.batches.cancel("batch-abc123")
BatchBuilder API
An ergonomic interface for building batches — just pass strings, and anymodel handles IDs, system prompt injection, and provider-specific formatting:
batch = client.batches.open(
    model="anthropic/claude-sonnet-4-6",
    system="You are an expert.",
)
batch.add("What is an LLC?")
batch.add("How do I dissolve an LLC?")

await batch.submit()
results = await batch.poll()

print(results.succeeded)  # successful responses with per-item costs
print(results.failed)     # failed items
print(results.usage)      # aggregate usage and estimated_cost

# Retry failed items
retry_batch = batch.retry(results.failed)
await retry_batch.submit()
retry_results = await retry_batch.poll()
Batch mode
Force concurrent execution instead of native batch APIs (useful when you want flex pricing on individual requests):
results = await client.batches.create_and_poll({
    "model": "openai/gpt-4o",
    "batch_mode": "concurrent",  # skip native batch, run as individual requests
    "requests": [
        {"custom_id": "req-1", "messages": [{"role": "user", "content": "Hello"}]},
    ],
})
Service tier on batch requests
Use flex pricing on concurrent batches for 50% cost savings:
results = await client.batches.create_and_poll({
    "model": "openai/gpt-4o",
    "batch_mode": "concurrent",
    "service_tier": "flex",  # flex pricing on each concurrent request
    "requests": [
        {"custom_id": "req-1", "messages": [{"role": "user", "content": "Hello"}]},
    ],
})
Poll logging
Enable console logging during batch polling to monitor progress:
# Per-call option
results = await client.batches.create_and_poll(request, log_to_console=True)
# Or enable globally via environment variable
# ANYMODEL_BATCH_POLL_LOG=1
Automatic max_tokens
When max_tokens isn't set on a batch request, anymodel automatically calculates a safe value per-request based on the estimated input size and the model's context window. This prevents truncated responses and context overflow errors without requiring you to hand-tune each request in a large batch.
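A minimal sketch of mixing the two, assuming an explicitly set per-request max_tokens takes precedence over the computed value:

results = await client.batches.create_and_poll({
    "model": "openai/gpt-4o-mini",
    "requests": [
        # Explicit cap on this item (assumed to override the auto value)
        {"custom_id": "short", "max_tokens": 256,
         "messages": [{"role": "user", "content": "One-line summary of AI"}]},
        # No max_tokens: anymodel computes a safe value as described above
        {"custom_id": "auto",
         "messages": [{"role": "user", "content": "Write a long essay on ML"}]},
    ],
})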
Concurrent batch requests are streamed from disk — only N requests (default 5) are in-flight at a time, making 10K+ request batches safe without memory spikes.
Batch configuration
client = AnyModel({
    "batch": {
        "poll_interval": 10.0,  # default poll interval in seconds
        "concurrency_fallback": 10,  # concurrent request limit for non-native providers
    },
    "io": {
        "read_concurrency": 30,  # concurrent file reads (default: 20)
        "write_concurrency": 15,  # concurrent file writes (default: 10)
    },
})
Generation Stats
response = await client.chat.completions.create(...)
stats = client.generation.get(response["id"])
print(stats["latency"], stats["tokens_prompt"], stats["tokens_completion"])
print(stats["total_cost"]) # auto-calculated from bundled pricing data
Auto Pricing / Cost Calculation
Pricing for 323 models is baked in at build time from OpenRouter — always current as of last publish. Costs are calculated automatically from token usage with no configuration needed.
# Per-request cost on GenerationStats
stats = client.generation.get(response["id"])
print(stats["total_cost"]) # e.g. 0.0023
# Batch-level cost on BatchUsageSummary
results = await client.batches.create_and_poll(request)
print(results["usage"]["estimated_cost"]) # total across all requests
# Native batch pricing is automatically 50% off
# Utility functions also exported
from anymodel import calculate_cost, get_model_pricing, PRICING_AS_OF
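The exact signatures of these utilities aren't documented here, so treat the following as a hypothetical sketch: get_model_pricing presumably returns per-token rates for a model id, calculate_cost derives a dollar figure from token counts, and PRICING_AS_OF records the pricing snapshot date:

from anymodel import calculate_cost, get_model_pricing, PRICING_AS_OF

# Hypothetical usage; argument names are illustrative, not confirmed.
pricing = get_model_pricing("openai/gpt-4o")  # per-token prompt/completion rates
cost = calculate_cost("openai/gpt-4o", prompt_tokens=1200, completion_tokens=300)
print(pricing, cost, PRICING_AS_OF)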
Configuration
client = AnyModel({
    "anthropic": {"api_key": "sk-ant-..."},
    "openai": {"api_key": "sk-..."},
    "aliases": {
        "default": "anthropic/claude-sonnet-4-6",
        "fast": "anthropic/claude-haiku-4-5",
        "smart": "anthropic/claude-opus-4-6",
    },
    "defaults": {
        "temperature": 0.7,
        "max_tokens": 4096,
        "retries": 2,
        "timeout": 120,  # HTTP timeout in seconds (default: 120 = 2 min, flex: 600 = 10 min)
    },
})

# Use aliases as model names
response = await client.chat.completions.create(
    model="fast",
    messages=[{"role": "user", "content": "Quick answer"}],
)
Config File
Create anymodel.config.json in your project root:
{
  "anthropic": {
    "api_key": "${ANTHROPIC_API_KEY}"
  },
  "aliases": {
    "default": "anthropic/claude-sonnet-4-6"
  },
  "defaults": {
    "temperature": 0.7,
    "max_tokens": 4096
  }
}
${ENV_VAR} references are interpolated from environment variables.
Custom Providers
Add any OpenAI-compatible endpoint:
client = AnyModel({
    "custom": {
        "ollama": {
            "base_url": "http://localhost:11434/v1",
            "models": ["llama3.3", "mistral"],
        },
    },
})

response = await client.chat.completions.create(
    model="ollama/llama3.3",
    messages=[{"role": "user", "content": "Hello from Ollama"}],
)
Server Mode
Run as a standalone HTTP server compatible with the OpenAI SDK:
pip install anymodel[server]
anymodel serve --port 4141
Then point any OpenAI-compatible client at it:
from openai import OpenAI
client = OpenAI(base_url="http://localhost:4141/api/v1", api_key="unused")
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Hello via server"}],
)
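Since the server speaks the OpenAI wire format, other OpenAI SDK calls should work too; for example, listing the auto-discovered models (assuming the server implements the standard /models endpoint):

# Assumes the anymodel server exposes the standard OpenAI /models endpoint.
for model in client.models.list():
    print(model.id)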
Also Available
- Node.js: @probeo/anymodel on npm
- Go: anymodel-go
License
MIT