# majordomo-llm
A unified Python interface for multiple LLM providers with automatic cost tracking, retry logic, and structured output support.
## Features
- Unified API - Same interface for OpenAI, Anthropic (Claude), Google Gemini, DeepSeek, and Cohere
- Cost Tracking - Automatic calculation of input/output token costs per request
- Structured Outputs - Native support for Pydantic models as response schemas
- Automatic Retries - Built-in exponential backoff retry logic using tenacity
- Async First - Fully async/await compatible for high-performance applications
- Type Safe - Complete type annotations and a `py.typed` marker for IDE support
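The retry feature above relies on tenacity inside the library. As a rough illustration of the pattern (a standard-library sketch, not the library's actual implementation), exponential backoff doubles the wait after each failed attempt, with a little jitter to avoid synchronized retries:

```python
import random
import time


def retry_with_backoff(fn, max_attempts=3, base_delay=0.5, max_delay=8.0):
    """Call fn(), retrying failures with exponential backoff plus jitter.

    Illustrative only: majordomo-llm uses tenacity internally; this shows
    the same pattern with the standard library.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: propagate the last error
            delay = min(base_delay * 2 ** (attempt - 1), max_delay)
            time.sleep(delay + random.uniform(0, delay * 0.1))


# A flaky call that fails twice before succeeding:
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry_with_backoff(flaky, base_delay=0.01)  # "ok" after two retries
```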
## Installation

```bash
pip install majordomo-llm
```

Or with uv:

```bash
uv add majordomo-llm
```
## Quick Start

### Basic Text Response
```python
import asyncio

from majordomo_llm import get_llm_instance


async def main():
    # Create an LLM instance
    llm = get_llm_instance("anthropic", "claude-sonnet-4-20250514")

    # Get a response
    response = await llm.get_response(
        user_prompt="What is the capital of France?",
        system_prompt="You are a helpful geography assistant.",
    )

    print(response.content)
    print(f"Tokens: {response.input_tokens} in, {response.output_tokens} out")
    print(f"Cost: ${response.total_cost:.6f}")


asyncio.run(main())
```
### JSON Response
```python
response = await llm.get_json_response(
    user_prompt="List the top 3 largest countries by area as JSON",
    system_prompt="Respond with valid JSON only.",
)

# response.content is a parsed Python dict
for country in response.content["countries"]:
    print(country["name"])
```
### Structured Output with Pydantic
```python
from pydantic import BaseModel


class CountryInfo(BaseModel):
    name: str
    capital: str
    population: int
    area_km2: float


response = await llm.get_structured_json_response(
    response_model=CountryInfo,
    user_prompt="Give me information about Japan",
)

# response.content is a validated CountryInfo instance
country = response.content
print(f"{country.name}: {country.capital}, pop. {country.population:,}")
```
## Configuration

### Environment Variables
Set API keys for the providers you want to use:
```bash
# OpenAI
export OPENAI_API_KEY="sk-..."

# Anthropic (Claude)
export ANTHROPIC_API_KEY="sk-ant-..."

# Google Gemini
export GEMINI_API_KEY="..."

# DeepSeek
export DEEPSEEK_API_KEY="sk-..."

# Cohere
export CO_API_KEY="..."
```
### Available Models

#### OpenAI

`gpt-5`, `gpt-5-mini`, `gpt-5-nano`, `gpt-4o`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`

#### Anthropic

`claude-sonnet-4-5-20250929`, `claude-opus-4-1-20250805`, `claude-opus-4-20250514`, `claude-sonnet-4-20250514`, `claude-3-7-sonnet-latest`, `claude-3-5-haiku-latest`

#### Gemini

`gemini-2.5-flash`, `gemini-2.5-flash-lite`, `gemini-2.0-flash`, `gemini-2.0-flash-lite`

#### DeepSeek

`deepseek-chat`, `deepseek-reasoner`

#### Cohere

`command-a-03-2025`, `command-r-plus-08-2024`, `command-r-08-2024`, `command-r7b-12-2024`
## API Reference

### Factory Functions

`get_llm_instance(provider: str, model: str) -> LLM`

Create an LLM instance for the specified provider and model.

```python
from majordomo_llm import get_llm_instance

llm = get_llm_instance("openai", "gpt-4o")
```
### LLM Methods

All LLM instances support these async methods:

`get_response(user_prompt, system_prompt=None, temperature=0.3, top_p=1.0) -> LLMResponse`

Get a plain text response.

`get_json_response(user_prompt, system_prompt=None, temperature=0.3, top_p=1.0) -> LLMJSONResponse`

Get a JSON response, automatically parsed into a Python dict.

`get_structured_json_response(response_model, user_prompt, system_prompt=None, temperature=0.3, top_p=1.0) -> LLMStructuredResponse`

Get a response validated against a Pydantic model.
### Response Objects

All response objects include usage metrics:

| Field | Type | Description |
|---|---|---|
| `content` | `str` / `dict` / `BaseModel` | The response content |
| `input_tokens` | `int` | Number of input tokens |
| `output_tokens` | `int` | Number of output tokens |
| `cached_tokens` | `int` | Number of cached tokens (if applicable) |
| `input_cost` | `float` | Cost for input tokens (USD) |
| `output_cost` | `float` | Cost for output tokens (USD) |
| `total_cost` | `float` | Total cost (USD) |
| `response_time` | `float` | Response time in seconds |
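The cost fields are simple arithmetic over the token counts: tokens divided by one million, times the provider's per-million-token rate. A small sketch (the helper name and the $3/$15 rates are illustrative, matching the rates used in the direct-provider example below):

```python
def request_cost(input_tokens, output_tokens, input_rate, output_rate):
    """Per-request USD cost from token counts and per-million-token rates."""
    input_cost = input_tokens / 1_000_000 * input_rate
    output_cost = output_tokens / 1_000_000 * output_rate
    return input_cost, output_cost, input_cost + output_cost


# 1,200 input and 400 output tokens at $3/M input, $15/M output:
in_cost, out_cost, total = request_cost(1200, 400, 3.0, 15.0)
print(f"${total:.6f}")  # $0.009600
```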
## Advanced Usage

### Automatic Fallback with LLMCascade

Use `LLMCascade` for automatic failover between providers:
```python
from majordomo_llm import LLMCascade

# Providers are tried in order - first is primary, rest are fallbacks
cascade = LLMCascade([
    ("anthropic", "claude-sonnet-4-20250514"),  # Primary
    ("openai", "gpt-4o"),                       # First fallback
    ("gemini", "gemini-2.5-flash"),             # Last resort
])

# If Anthropic fails, automatically tries OpenAI, then Gemini
response = await cascade.get_response("Hello!")
```
All three response methods (`get_response`, `get_json_response`, `get_structured_json_response`) support automatic fallback.
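Conceptually, a cascade is a try-in-order loop: attempt each provider, swallow the error, and move on; only if every provider fails does the last error surface. A hypothetical sketch of that pattern (not the library's actual code; `first_success` and the stub callables are made up for illustration):

```python
import asyncio


async def first_success(attempts, prompt):
    """Try each async callable in order; return the first successful result.

    Sketch of the fallback pattern LLMCascade provides; the library's real
    implementation may differ.
    """
    last_error = None
    for attempt in attempts:
        try:
            return await attempt(prompt)
        except Exception as exc:
            last_error = exc  # remember the failure, try the next provider
    raise last_error  # every provider failed


async def unavailable(prompt):
    raise TimeoutError("provider unavailable")

async def echo(prompt):
    return f"echo: {prompt}"

# First callable fails, so the second one answers:
result = asyncio.run(first_success([unavailable, echo], "Hello!"))
```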
### Direct Provider Access

You can also instantiate providers directly for more control:

```python
from majordomo_llm import Anthropic

llm = Anthropic(
    model="claude-sonnet-4-20250514",
    input_cost=3.0,    # per million tokens
    output_cost=15.0,  # per million tokens
)
```
### Web Search (Anthropic)

Enable web search for supported Claude models:

```python
from majordomo_llm.providers.anthropic import Anthropic

llm = Anthropic(
    model="claude-sonnet-4-5-20250929",
    input_cost=3.0,
    output_cost=15.0,
    use_web_search=True,
)
```
## Development

### Setup

```bash
git clone https://github.com/superset-studio/majordomo-llm.git
cd majordomo-llm
uv sync --all-extras
```

### Running Tests

```bash
uv run pytest
```

### Type Checking

```bash
uv run mypy src/majordomo_llm
```

### Linting

```bash
uv run ruff check src/majordomo_llm
```
## Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
## License
This project is licensed under the MIT License - see the LICENSE file for details.