# pai-llm-config

Unified configuration management for LLM applications.

One YAML file to manage all your LLM providers, models, API keys, and parameters. Works with OpenAI, Anthropic, Azure, LiteLLM, DSPy, LangChain, and more.

If this project helps you, please consider giving it a star. It helps others discover it too.
## Features
- Multi-provider — OpenAI, Anthropic, Azure, LiteLLM, and any OpenAI-compatible endpoint (DeepSeek, Gemini, Ollama, vLLM, etc.)
- Two-layer adapters — L1 outputs plain dicts (zero extra deps), L2 returns real SDK clients with key rotation
- Model aliases — Reference models by semantic names (`smart`, `fast`, `cheap`) instead of `gpt-4o`
- Multi-key pool — Automatic key rotation with priority / round_robin / least_used / random strategies
- Framework integration — One-step client creation for DSPy, LiteLLM; params output for LangChain, OpenAI SDK, etc.
- Streaming — Built-in streaming wrappers with automatic usage reporting (OpenAI + Anthropic, sync + async)
- Multi-environment — Profile-based config (dev / staging / prod) with inheritance
- Type-safe — Pydantic validation, full IDE autocompletion
## Install

```bash
pip install pai-llm-config

# With optional SDK support
pip install pai-llm-config[openai]     # OpenAI SDK
pip install pai-llm-config[anthropic]  # Anthropic SDK
pip install pai-llm-config[litellm]    # LiteLLM
pip install pai-llm-config[all]        # Everything
```
## Quick Start

1. Create `llm-config.yaml` in your project root:
```yaml
version: "1"
providers:
  openai:
    type: openai
    api_key: ${OPENAI_API_KEY}
models:
  gpt-4o:
    provider: openai
    model: gpt-4o
    temperature: 0.7
    max_tokens: 4096
aliases:
  smart: gpt-4o
```
2. Use it:
```python
from pai_llm_config import config

# L2: One-line client creation with key rotation
client = config.openai_client("smart")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
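The `${OPENAI_API_KEY}` placeholder in the YAML above is resolved from environment variables at load time. As a rough illustration of the pattern only (not the library's actual resolver, which may also handle defaults, errors, and nested values), `${VAR}` expansion can be done with a small regex pass:

```python
import os
import re

# Hypothetical sketch of ${VAR} interpolation; illustrative only.
_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def interpolate(value: str) -> str:
    """Replace each ${VAR} with os.environ[VAR], leaving unknown vars intact."""
    return _VAR.sub(lambda m: os.environ.get(m.group(1), m.group(0)), value)

os.environ["OPENAI_API_KEY"] = "sk-demo"
print(interpolate("${OPENAI_API_KEY}"))  # -> sk-demo
print(interpolate("${MISSING_VAR}"))     # -> ${MISSING_VAR}
```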
## Usage

### Config Loading
```python
from pai_llm_config import LLMConfig, config

# Global singleton (recommended) — auto-discovers llm-config.yaml
model = config.get("smart")

# Or use LLMConfig directly
cfg = LLMConfig.default()  # Cached singleton
cfg = LLMConfig.load()     # Fresh instance
cfg = LLMConfig.load(profile="production", config_path="config/llm.yaml")
```
### L1: Parameter Output (Zero Extra Dependencies)
```python
from pai_llm_config import config

# OpenAI SDK format
params = config.params("smart")
# -> {"model": "gpt-4o", "api_key": "sk-xxx", "base_url": "https://...", "temperature": 0.7, ...}

# LiteLLM format (provider/model prefix + api_base)
params = config.litellm_params("smart")
# -> {"model": "openai/gpt-4o", "api_key": "sk-xxx", "api_base": "https://...", ...}

# DSPy format
params = config.dspy_params("smart")
# -> {"model": "openai/gpt-4o", "api_key": "sk-xxx", "api_base": "https://...", ...}
```
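Note that the L1 dict mixes client-level fields (`api_key`, `base_url`) with per-request fields (`model`, `temperature`, ...). If you construct an SDK client yourself from this output, you typically split it first; a minimal sketch, with field names assumed from the example output above:

```python
# Hypothetical helper: split an L1 params dict into client-constructor
# kwargs vs. per-request kwargs. Adjust CLIENT_FIELDS for your config.
CLIENT_FIELDS = {"api_key", "base_url"}

def split_params(params: dict) -> tuple[dict, dict]:
    client_kwargs = {k: v for k, v in params.items() if k in CLIENT_FIELDS}
    call_kwargs = {k: v for k, v in params.items() if k not in CLIENT_FIELDS}
    return client_kwargs, call_kwargs

params = {"model": "gpt-4o", "api_key": "sk-xxx",
          "base_url": "https://api.openai.com/v1", "temperature": 0.7}
client_kwargs, call_kwargs = split_params(params)
# client = openai.OpenAI(**client_kwargs)
# client.chat.completions.create(messages=[...], **call_kwargs)
```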
### L2: SDK Client Factory
```python
from pai_llm_config import config

# Type-safe client creation with built-in key rotation
client = config.openai_client("smart")                 # -> openai.OpenAI
client = config.anthropic_client("reasoning")          # -> anthropic.Anthropic
client = config.async_openai_client("smart")           # -> openai.AsyncOpenAI
client = config.async_anthropic_client("reasoning")    # -> anthropic.AsyncAnthropic

# Auto-dispatch by provider type
client = config.create_client("smart")  # -> openai.OpenAI or anthropic.Anthropic
```
### Framework Integration
```python
from pai_llm_config import config

# DSPy — one step, returns configured dspy module
dspy = config.dspy_client("smart")
qa = dspy.ChainOfThought("question -> answer")
result = qa(question="What is pai-llm-config?")

# LiteLLM — returns litellm.Router
client = config.litellm_client("smart")
response = client.completion(model="smart", messages=[...])

# LangChain — use params() output
from langchain_openai import ChatOpenAI
chat = ChatOpenAI(**config.params("smart"))
```
### Streaming
```python
from pai_llm_config import config

# OpenAI streaming with automatic usage reporting
stream = config.stream_openai_chat("smart", messages=[{"role": "user", "content": "Tell a story"}])
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")

# Anthropic streaming
with config.stream_anthropic_chat("reasoning", messages=[...], max_tokens=1024) as stream:
    for text in stream.text_stream:
        print(text, end="")

# Auto-dispatch
stream = config.stream_chat("smart", messages=[...])
```
### Multi-Key Rotation
```yaml
providers:
  openai:
    type: openai
    api_keys:
      - key: ${OPENAI_KEY_1}
        alias: "primary"
        priority: 1
        daily_limit_usd: 5.0
      - key: ${OPENAI_KEY_2}
        alias: "secondary"
        priority: 2
        daily_limit_usd: 10.0
    key_strategy: priority  # priority | round_robin | least_used | random
```
```python
# L2 clients automatically rotate keys — zero code changes
client = config.openai_client("smart")

# Monitor key pool health
pool = config.key_pool("openai")
print(pool.status())
```
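The selection strategies themselves are simple to picture. A toy sketch of how `priority` and `round_robin` might behave (illustrative only; the real pool also tracks usage, spend limits, and key health):

```python
import itertools

# Toy model of two key-selection strategies -- not the library's internals.
keys = [
    {"key": "sk-1", "alias": "primary", "priority": 1},
    {"key": "sk-2", "alias": "secondary", "priority": 2},
]

def pick_priority(pool):
    # Lowest priority number wins while that key is available.
    return min(pool, key=lambda k: k["priority"])

_rr = itertools.cycle(keys)

def pick_round_robin():
    # Cycle through the pool so load is spread evenly.
    return next(_rr)

print(pick_priority(keys)["alias"])  # primary
print(pick_round_robin()["alias"])   # primary
print(pick_round_robin()["alias"])   # secondary
```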
### Task Routing
```yaml
routing:
  presets:
    code_generation: smart
    summarization: cheap
    classification: cheap
```

```python
model = config.route("code_generation")  # -> ModelConfig for "smart"
```
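Conceptually, routing is a two-step lookup: preset name to alias, alias to model config. A toy sketch of that resolution chain, using names from the examples above (the `cheap` target and model entries are assumed for illustration; the real `route()` returns a validated `ModelConfig`):

```python
# Toy resolution chain: preset -> alias -> model config. Illustrative only.
presets = {"code_generation": "smart", "summarization": "cheap"}
aliases = {"smart": "gpt-4o", "cheap": "gpt-4o-mini"}  # "cheap" target assumed
models = {
    "gpt-4o": {"provider": "openai", "temperature": 0.7},
    "gpt-4o-mini": {"provider": "openai", "temperature": 0.3},  # assumed entry
}

def route(task: str) -> dict:
    alias = presets[task]             # e.g. "code_generation" -> "smart"
    name = aliases.get(alias, alias)  # e.g. "smart" -> "gpt-4o"
    return {"model": name, **models[name]}

print(route("code_generation")["model"])  # gpt-4o
```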
## Configuration Reference

See `docs/02_config-spec.md` for the full YAML specification, and `docs/06_examples.md` for more usage examples.
## Contributing
Contributions are welcome! Feel free to open issues or submit pull requests.
If you find this project useful, please give it a star on GitHub — it motivates continued development and helps others find this project.
## License
MIT